Effect of CAR, LDR, NPL, and NIM on ROA in Devisa National Public Private Banks Registered on the IDX, 2013-2017 Period

This research aims to determine the effect of CAR, LDR, NPL, and NIM on ROA in Devisa National Public Private Banks (Foreign Exchange BUSN) registered on the IDX over the 2013-2017 period. The study is associative research using quantitative methods. The population consists of 26 BUSN banks; purposive sampling yielded a sample of seven banks. The data analysis methods used are descriptive statistics, multiple linear regression analysis, and classical assumption tests; hypotheses are tested with the statistical t test and the F test. The results show that, partially, CAR has a non-significant negative effect on ROA, LDR has a non-significant positive effect on ROA, NPL has a significant negative effect on ROA, and NIM has a significant positive effect on ROA. Simultaneously, CAR, LDR, NPL, and NIM influence ROA.

I. INTRODUCTION
The disruption of the banking intermediation function after the banking crisis in Indonesia resulted in slow investment activity and economic growth. Profit is the main parameter in assessing the success of bank management, because most bank funds come from public savings: if national private foreign exchange commercial banks keep posting declines in current-year net profit, customer confidence can suffer, and vice versa. Return On Assets (ROA) is used here to measure the profitability of banking companies, because it is more representative in measuring the level of profitability. Bank ROA is expected to come under pressure for several reasons. First, world interest rates are still low, which makes it difficult for banks to raise lending rates. Second, the regulatory burden is heavy, forcing banks to set aside a significant portion of liquid assets. Third, credit risk also weighs on the ROA ratio. The data analysis methods used are descriptive statistics, multiple linear regression analysis, and classical assumption tests; hypotheses are tested with the statistical t test and the F test.

III. RESULTS
Based on the data normality test using a histogram chart, the data are normally distributed, as indicated by points spreading around the diagonal line. A multicollinearity test was applied to examine the regression model. In the autocorrelation test, the Durbin-Watson statistic was used, with a significance level of 0.05 (5%), a sample size of n = 35, and k = 4 independent variables; the Durbin-Watson table gives an upper bound (du) of 1.726.
Based on the Durbin-Watson test results, the Durbin-Watson value of 1.777 is greater than the upper bound (du) of 1.726 and smaller than 4 - du (4 - 1.726 = 2.274), so H0 is accepted: there is no positive or negative autocorrelation under the decision rule (du < d < 4 - du), i.e. (1.726 < 1.777 < 2.274). It can therefore be concluded that there is no autocorrelation. Multiple regression was run with CAR (X1), LDR (X2), NPL (X3), and NIM (X4) on ROA (Y). The results give a constant a = -0.645 and regression coefficients b1 = -0.033, b2 = 0.017, b3 = -0.090, b4 = 0.164. Substituting the constant and coefficients (a, b1, b2, b3, b4) into the multiple linear regression equation gives:

ROA = -0.645 - 0.033 CAR + 0.017 LDR - 0.090 NPL + 0.164 NIM    (1)

The coefficient of determination (R²) measures the closeness of the relationship between CAR, LDR, NPL, and NIM and ROA; the coefficient of multiple determination R² expresses the fit of the multiple linear regression. The Adjusted R Square value is 0.579, so the simultaneous (joint) effect of CAR, LDR, NPL, and NIM on ROA is relatively large at 57.9%, while the rest (100% - 57.9% = 42.1%) is explained by other independent variables not observed in this study. Hypothesis testing uses the (partial) statistical t test to assess each independent variable against the dependent variable. The t test shows that the CAR variable (X1) has a non-significant negative effect on ROA, because its sig value of 0.262 exceeds 0.05 (0.262 > 0.05); the t-count is -1.144. The F test shows a sig value of 0.000; because 0.000 < 0.05, the simultaneous effect is significant, and the calculated F value is 12.700. F-table = (k; n - k) = (4; 35 - 4 = 31), so F-table is 2.68. Since F-count 12.700 > F-table 2.68, the hypothesis is accepted (hypothesis H5 is accepted): CAR (X1), LDR (X2), NPL (X3), and NIM (X4) simultaneously (jointly) affect ROA (Y).
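As a rough illustration of the workflow above, the following Python sketch fits the same four-regressor model and applies the Durbin-Watson decision rule. The DataFrame and file name are hypothetical stand-ins, not the study's data or code.

```python
# Illustrative sketch: assumes a hypothetical DataFrame with columns
# CAR, LDR, NPL, NIM, ROA (35 bank-year observations, as in the study).
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

df = pd.read_csv("busn_devisa_2013_2017.csv")  # hypothetical file name

X = sm.add_constant(df[["CAR", "LDR", "NPL", "NIM"]])  # intercept + X1..X4
model = sm.OLS(df["ROA"], X).fit()
print(model.params)                   # constant a and coefficients b1..b4
print(model.tvalues, model.pvalues)   # partial t tests per variable
print(model.fvalue, model.f_pvalue)   # simultaneous F test

# Durbin-Watson decision rule used above: for n = 35, k = 4, du = 1.726
d, du = durbin_watson(model.resid), 1.726
if du < d < 4 - du:
    print(f"d = {d:.3f}: no autocorrelation (du < d < 4 - du)")
```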
A. Effect of CAR on ROA
The t test shows that the CAR variable (X1) has a non-significant negative effect on ROA. This study is in line with the research results of Heri Susanto and Nur Kholis [1] and Mario Christiano, Parengkuan Tommy, and Ivonne Saerang [2], showing that CAR (Capital Adequacy Ratio) has a significant effect on ROA (Return On Assets).

B. Effect of LDR on ROA
The t test shows that the LDR variable (X2) has a non-significant positive effect on ROA. This result is not in line with the theory put forward by Taswan [3] that a higher LDR indicates a worse bank liquidity condition, because credit placements are also financed from deposits that can be withdrawn at any time. The recommended LDR ratio is therefore between 89% and 115%. Each bank tries to follow the maximum lending limit guidelines, because this limit is not intended to restrict a bank's credit expansion but rather concerns the distribution of credit. It can be concluded that the size of the LDR has no direct effect on ROA; banks may expand credit as long as they can offset it against the funds received, keeping the ratio between 89% and 115%.

C. Effect of NPL on ROA
The t test shows that the NPL variable (X3) has a significant negative effect on ROA. This result is in line with the theory put forward by Taswan [4] that a higher NPL ratio indicates worse credit quality. Bank Indonesia Regulation No. 6/10/PBI/2004 of 12 April 2004, concerning the Rating System for Commercial Banks, stipulates a non-performing loan (NPL) ratio of 5%; an NPL value above 5% indicates that the bank can be said to be unhealthy. It can be concluded that the size of the NPL (Non-Performing Loan) directly affects ROA (Return On Assets).

D. Effect of NIM on ROA
The t test shows that the NIM variable (X4) has a significant positive effect on ROA. This result is in line with the theory put forward by Taswan [4] that the greater the NIM ratio, the better the bank's performance in generating interest income; a bank can be said to be healthy if its NIM is above 6%. It can be concluded that the size of the NIM directly affects ROA.

E. Effect of CAR, LDR, NPL and NIM on ROA
The F test results in the ANOVA table show that, together, the CAR, LDR, NPL, and NIM variables influence ROA. This is in line with the research results of Heri Susanto and Nur Kholis [1] and Mario Christiano, Parengkuan Tommy, and Ivonne Saerang [2], showing that CAR, LDR, NPL, and NIM simultaneously have a significant influence on ROA. This indicates that the smaller the CAR, LDR, NPL, and NIM, the smaller the ROA, and the greater the CAR, LDR, NPL, and NIM, the greater the ROA.

V. CONCLUSION
Based on the results and discussion, it can be concluded that CAR has a non-significant negative effect and LDR a non-significant positive effect on ROA, while NPL has a negative effect and NIM a positive effect on ROA, both significant. This study still has limitations: it covers only one type of bank, the Devisa BUSN (National Private Foreign Exchange Commercial Bank) listed on the Indonesia Stock Exchange (IDX), so the results cannot be considered exhaustive. For future research, it is recommended to increase the number and variety of bank types studied.
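The regulatory thresholds cited in sections B-D above (NPL 5%, NIM 6%, recommended LDR 89-115%) can be summarized in a small screening helper. This is an illustrative simplification for this paper's cited cut-offs, not an official bank-soundness rating.

```python
# Illustrative simplification of the thresholds cited above; not an
# official bank-soundness rating procedure.
def screen_bank(ldr: float, npl: float, nim: float) -> list:
    flags = []
    if npl > 5.0:                      # PBI No. 6/10/PBI/2004 NPL ceiling
        flags.append("NPL above 5%: bank can be said to be unhealthy")
    if nim <= 6.0:                     # NIM above 6% is taken as healthy
        flags.append("NIM not above 6%: weaker interest-income performance")
    if not (89.0 <= ldr <= 115.0):     # recommended LDR band
        flags.append("LDR outside the recommended 89-115% range")
    return flags or ["no flags under the cited thresholds"]

print(screen_bank(ldr=92.0, npl=2.1, nim=6.8))
```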
Economic Evaluation of Hepatitis C Treatment Extension to Acute Infection and Early-Stage Fibrosis Among Patients Who Inject Drugs in Developing Countries: A Case of China

We aimed to assess the cost-effectiveness of (1) treating acute hepatitis C virus (HCV) infection vs. deferring treatment until the chronic phase, and (2) treating all chronic patients vs. only those with advanced fibrosis, among Chinese genotype 1b treatment-naïve patients who inject drugs (PWID), using a combination Daclatasvir (DCV) plus Asunaprevir (ASV) regimen and a Peg-interferon (PegIFN)-based regimen, respectively. A decision-analytical model including the risk of HCV reinfection simulated lifetime costs and quality-adjusted life-years (QALYs) of three treatment timings, under the DCV+ASV and PegIFN regimens respectively: treating acute infection ("Treat at acute"), treating chronic patients at all fibrosis stages ("Treat at F0", no fibrosis), and treating only advanced-stage fibrosis patients ("Treat at F3", numerous septa without cirrhosis). Incremental cost-effectiveness ratios (ICERs) were used to compare scenarios. "Treat at acute" compared with "Treat at F0" was cost-saving (cost: DCV+ASV regimen, US$14,486.975 vs. US$16,224.250; PegIFN-based regimen, US$19,734.794 vs. US$22,101.584) and more effective (QALYs: DCV+ASV regimen, 14.573 vs. 14.566; PegIFN-based regimen, 14.148 vs. 14.116). Compared with "Treat at F3", "Treat at F0" exhibited an ICER of US$3780.20/QALY and US$15,145.98/QALY under the DCV+ASV regimen and the PegIFN-based regimen, respectively. Treatment of acute HCV infection was highly cost-effective and cost-saving compared with deferring treatment to the chronic stage, for both DCV+ASV and PegIFN-based regimens. Early treatment of chronic patients with the DCV+ASV regimen was highly cost-effective.

Introduction
Hepatitis C virus (HCV) infection remains the leading cause of liver cirrhosis and hepatocellular carcinoma [1], and has a substantial negative impact on patients' quality of life and functioning [2][3][4]. A previous study found DAA treatment cost-effective compared with the PegIFN-based regimen [28], but the cost-effectiveness of initiating treatment at an early fibrosis stage compared with an advanced fibrosis stage was not reported. Furthermore, previous studies in developed and developing countries focused only on either DAAs or PegIFN-based therapy for HCV; they did not indicate how to adjust the treatment strategy to local conditions when both drugs are available. Thus, this study used the case of China to conduct a model-based analysis of the cost-effectiveness of different treatment timings under the DCV+ASV and PegIFN-based regimens, respectively. Hypothesizing that earlier treatment is more effective, we first compared treatment at the acute stage with deferring treatment until stage F0, to determine the cost-effectiveness of treating acute HCV under the DCV+ASV and PegIFN-based regimens, respectively. We then compared treatment at stage F0 with deferral until stage F3, to assess the cost-effectiveness of early treatment of chronic HCV among PWID under both regimens.

Model Overview
Based on the natural history of HCV infection, and accounting for regression of liver damage and HCV reinfection among PWID, we developed a Markov cohort state-transition model to evaluate the health outcomes and costs of treatment at different timings with the DCV+ASV or PegIFN-based regimen. This model has been well received by decision makers and clinicians [5,29].
The disease stage transitions reflect progression through the acute phase or the 5 METAVIR (Meta-analysis of Histological Data in Viral Hepatitis) liver fibrosis stages (F0, no fibrosis; F1, portal fibrosis with septa; F2, portal fibrosis with rare septa; F3, numerous septa without cirrhosis; F4, compensated cirrhosis) to advanced liver disease (DC, decompensated cirrhosis; HCC, hepatocellular carcinoma; LT, liver transplantation; and post-LT), as well as regression of liver damage or reinfection after patients cleared the virus. We simulated the clinical course of the patients and projected long-term outcomes such as quality-adjusted life years (QALYs) and costs. The model ran in a monthly cycle length until all patients died. Additional details are provided in Figure 1, and input parameters are summarized in Supplementary Table S1.

(Figure 1 caption: the model structure is the same for all strategies; within each strategy, the fibrosis state at which treatment was initiated was selected. The square in the Markov model represents the initial cohorts. HCV, hepatitis C virus; DCV, daclatasvir; ASV, asunaprevir; IFN, interferon; RBV, ribavirin; SVR, sustained virologic response; F0, no fibrosis; F1, portal fibrosis with septa; F2, portal fibrosis with rare septa; F3, numerous septa without cirrhosis; F4, compensated cirrhosis; DC, decompensated cirrhosis; HCC, hepatocellular carcinoma; LT, liver transplantation.)
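To make the model mechanics concrete, here is a minimal Python sketch of a monthly-cycle Markov cohort over a simplified version of the state space above. The transition probabilities and utilities are placeholders, not the paper's calibrated inputs (those live in its Supplementary Table S1), and the SVR, regression, and reinfection states are omitted for brevity.

```python
# Minimal Markov cohort sketch (placeholder inputs, illustrative only).
import numpy as np

states = ["Acute", "F0", "F1", "F2", "F3", "F4", "DC", "HCC", "Dead"]
n = len(states)
P = np.eye(n)
# placeholder monthly progression probabilities along Acute -> F0 -> ... -> DC
for i, p in enumerate([1 / 6, 0.008, 0.008, 0.008, 0.008, 0.006]):
    P[i, i] -= p
    P[i, i + 1] += p
for i in range(n - 1):           # placeholder monthly all-cause mortality
    P[i, i] -= 0.001
    P[i, -1] += 0.001

cohort = np.zeros(n)
cohort[0] = 1.0                  # everyone starts acutely infected
utility = np.array([0.85, 0.85, 0.84, 0.82, 0.79, 0.76, 0.6, 0.5, 0.0])
disc = (1 + 0.05) ** (-1 / 12)   # 5%/year discount, monthly cycle
qalys, t = 0.0, 0
while cohort[-1] < 0.999 and t < 1200:
    qalys += (cohort @ utility) / 12 * disc ** t
    cohort = cohort @ P          # one monthly transition
    t += 1
print(f"discounted QALYs per patient (placeholder inputs): {qalys:.2f}")
```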
Natural History of HCV Infection
Acute HCV patients who failed to spontaneously clear the virus or be cured would progress to the F0 fibrosis stage after six months. Patients at F0 who failed to be cured progressed through the stages of liver fibrosis (F0-F4). Patients with F4 could further progress to DC or HCC. Transition probabilities between states were obtained from published systematic reviews and observational studies. Patients with DC and HCC were eligible for liver transplantation; the likelihood of LT was estimated from previous studies as 0.008 (95% CI: 0.006-0.01) (Supplementary Figure S1).

Patient Cohort
Our base-case cohort was representative of newly diagnosed, treatment-naïve, genotype 1b HCV RNA-positive PWID in China. Patients coinfected with HIV or HBV were excluded. The mean age of the cohort at baseline was 20.7 years [30]. The distribution of HCV stages at baseline was calculated on the basis of previous investigations in China [12,13]: acute 25%, F0 22.5%, F1 17.2%, F2 7.5%, F3 9%, and F4 18.8%. The model did not distinguish patients by viral concentration, sex, or race, although these factors may affect treatment outcomes [29].

Progression, Regression and Reinfection after SVR
Patients who cleared the virus entered the recovered states, and some patients could experience regression of liver fibrosis. Despite being cured, patients in the F4_SVR state risked progression to DC or HCC, but at a greatly reduced rate [31,32].
To make a conservative estimate of the cost-effectiveness of treatment in PWID, we assumed that reinfection would occur at a high proportion of 19.0% (range: 0.6-19.0%) [8,33]. Re-infected patients would spontaneously re-clear the virus at a proportion of 52% (95% CI: 33-73%) [34]; the remaining patients would re-enter the Markov HCV progression to the chronic HCV state to receive treatment. Those who failed treatment were not eligible for re-treatment. Previous studies suggested that broadly expanded treatment could provide substantial health gains through reduced reinfection risks [18]. In this study, due to the lack of data in China, we made a relatively conservative assumption that expanded treatment reduces HCV reinfection by 5% (range: 2.5-7.5%) [5]; one-way sensitivity analysis was conducted to determine its impact on cost-effectiveness.

Mortality
Mortality rates are shown in Supplementary Table S1. Besides the age-specific background mortality for the general population from the China 2017 Life Tables [35], drug-overdose mortality from a systematic review and meta-analysis of cohort studies was also included [9]. Mortality for patients with acute HCV, at stages F0 to F4, or after clearing the virus was assumed to be the background mortality rate plus the drug-overdose mortality rate. Patients with DC and HCC had excess liver-related mortality [36,37]. Patients who received a liver transplant, and those post-transplantation, could also die from transplant-related complications [38].

Treatment Strategies
Three options for initiating treatment at different timings under two therapy options (DCV+ASV and PegIFN-based regimens) were modeled (Figure 1A). (i) Treat at acute: all acute infections could be treated immediately. With DCV+ASV, the course was 12 weeks, referred to as the "DCV+ASV (treat at acute)" arm; with PegIFN-α monotherapy, the course was 24 weeks, referred to as the "PegIFN (treat at acute)" arm. Due to the lack of data on DCV+ASV in the treatment of acute HCV infection, and considering that acutely infected patients have historically been treated for a shorter duration, we conservatively assumed a 12-week course of DCV+ASV for acute infections based on a published review [17]. (ii) Treat at F0: acutely infected patients must progress to stage F0 to be treated, and all patients with chronic HCV infection could be treated regardless of fibrosis stage (F0-F4). With DCV+ASV, the course was 24 weeks ("DCV+ASV (treat at F0)" arm); with PegIFN+RBV, the course was 48 weeks ("PegIFN+RBV (treat at F0)" arm). (iii) Treat at F3: only patients at stages F3 and F4 could be treated; those in the acute phase or at stages F0-F2 had to progress to stage F3 first. With DCV+ASV, the course was 24 weeks ("DCV+ASV (treat at F3)" arm); with PegIFN+RBV, the course was 48 weeks ("PegIFN+RBV (treat at F3)" arm). The goal of treatment was an undetectable serum level of HCV RNA 12 or 24 weeks after the completion of therapy, termed a sustained virologic response (SVR). Due to data unavailability, we conservatively estimated that the SVR rate in acute treatment equaled that in chronic treatment with the DCV+ASV regimen. The SVR of PegIFN monotherapy was between 71% and 94% in patients with acute HCV mono-infection, based on a published review [17].
Discontinuation of therapy was considered, given the poor adherence to treatment among PWID [39][40][41][42].

Costs and Health State Utility Values
A Chinese societal perspective was adopted to calculate all direct medical costs of HCV management and therapy (Supplementary Table S1). All outcomes are presented on a per-cohort basis. All costs are expressed in US dollars using official exchange rates as of 2018 (US$1 = 6.62 CNY) [43]. Regimen costs of the PegIFN-based and DCV+ASV regimens were derived from previous economic evaluations in a Chinese setting [42,44]. The annual direct medical costs of managing patients at stages F0-F4, DC, and HCC were obtained from a real-world study in China [44], which included the costs of outpatient visits and post-treatment monitoring. The annual costs of liver transplantation and post-liver transplantation were taken from literature on health costs in chronic hepatitis B infection in China [45]. HCV-RNA and genotype tests were used to confirm infection; the HCV-RNA test was also used to determine whether spontaneous clearance had occurred, with the relevant costs obtained from local charges. The model included health-state utility values by fibrosis stage with and without SVR, and disutility during treatment. Utility values were obtained from previous literature based on the SF-36 [28,[46][47][48]. We assumed the utility value for the acute phase of HCV equaled that of the F0 state, and that patients with SVR or spontaneous clearance from the acute phase had the same utility as F0_SVR patients.

Model Outcomes and Statistical Analysis
Statistical analysis and the Markov model were performed in TreeAge Pro 2018, and graphs were plotted in Excel. All future costs and QALYs were discounted at 5% (3-5%) per year. Incremental cost-effectiveness ratios (ICERs) were calculated as the difference in costs between treatment strategies divided by the difference in QALYs. A strategy producing an ICER below US$29,295 per QALY, three times the per-capita gross domestic product (GDP) of China in 2018 [43], was considered cost-effective; a strategy producing an ICER below US$9765 per QALY, one times per-capita GDP, was considered highly cost-effective. One-way sensitivity analysis was conducted to determine the effects of parameters on the ICER. Probabilistic sensitivity analysis based on a second-order Monte Carlo simulation with 1000 iterations was then conducted to ascertain model stability; results are reported as cost-effectiveness acceptability curves. The range, distribution, and source of each parameter are shown in Supplementary Table S1.

Cost-Effectiveness of Treating Acute HCV among PWID
Treating acute infection was cost-saving and more effective compared with deferral to the F0 stage (Table 1). With the DCV+ASV regimen, early treatment at F0 compared with waiting until stage F3 increased QALYs by 0.459 and costs by US$1735.488; the corresponding ICER was US$3780.20/QALY, below one times per-capita GDP (US$9765/QALY), indicating that early treatment of chronic HCV with DCV+ASV was highly cost-effective. With the PegIFN+RBV regimen, early treatment at F0 compared with delayed treatment at F3 gained 0.345 QALYs at a higher cost of US$5225.364; the corresponding ICER was US$15,145.98/QALY, below three times per-capita GDP (US$29,295/QALY) but above one times per-capita GDP. This indicated that early treatment of chronic HCV with PegIFN+RBV was cost-effective, but not highly cost-effective.
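The ICER arithmetic defined above is simple enough to check by hand; the following sketch reproduces the DCV+ASV "Treat at F0" vs. "Treat at F3" comparison from the reported incremental cost and QALYs and classifies it against the two GDP-based thresholds.

```python
# Reproducing the reported DCV+ASV "Treat at F0" vs. "Treat at F3" ICER
# from the incremental values given above (dCost US$1735.488, dQALY 0.459).
def icer(d_cost: float, d_qaly: float) -> float:
    return d_cost / d_qaly

WTP_1X, WTP_3X = 9765.0, 29295.0   # 1x and 3x 2018 Chinese per-capita GDP

ratio = icer(1735.488, 0.459)
verdict = ("highly cost-effective" if ratio < WTP_1X
           else "cost-effective" if ratio < WTP_3X
           else "not cost-effective")
print(f"ICER = US${ratio:,.2f}/QALY -> {verdict}")
# ~US$3781/QALY, matching the reported US$3780.20/QALY up to rounding
```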
Cost-Effectiveness of Treating Acute HCV among PWID
Treating acute infection compared with deferring treatment until stage F0 had substantially lower costs and more QALYs across all parameter ranges, regardless of the drug regimen (DCV+ASV or PegIFN). The ICER was most sensitive to the reinfection rate after clearing the virus: as the reinfection rate fell, treating acute infection cost less and gained more QALYs (Supplementary Figure S2).

Cost-Effectiveness of Early Treatment at F0 Stage for Chronic HCV among PWID
With the DCV+ASV regimen, the ICER of treatment at stage F0 vs. stage F3 was most sensitive to the reduction in reinfection from treatment, the costs of DCV+ASV, the reinfection rate after clearing the virus, and the utility of F2. With the reduction probability set at 5%, the ICER was above one times per-capita GDP but not higher than three times per-capita GDP. With the PegIFN+RBV regimen, the ICER of treatment at F0 vs. F3 was most sensitive to the SVR of PegIFN+RBV, the utility of F2, the reduction in reinfection from treatment, the costs of PegIFN+RBV, the utility of F0_SVR, the reinfection rate after clearing the virus, and the proportion of re-clearance within six months after reinfection. The ICER was below one times per-capita GDP when the reinfection rate was lower than about 12.8%; it was above three times per-capita GDP when the SVR of PegIFN+RBV was below about 0.4, the utility of F2 was above 0.993, or the utility of F0_SVR was below approximately 0.981 (Supplementary Figures S3 and S4).

Probabilistic Sensitivity Analysis
Probabilistic sensitivity analysis demonstrated that the base-case analysis was stable. Monte Carlo simulations are shown in Figures 2 and 3 and Supplementary Figures S5 and S6 as the likelihood of a strategy being considered cost-effective at different willingness-to-pay (WTP) thresholds. Treatment at the acute stage was cost-effective in 100% of simulations compared with deferring treatment until stage F0, whether with the DCV+ASV or the PegIFN-based regimen (Supplementary Figure S5). For chronic HCV, at a WTP threshold of one times per-capita GDP, early treatment at stage F0 compared with delayed treatment at stage F3 was cost-effective in 100% of simulations under the DCV+ASV regimen, but delayed treatment at stage F3 was cost-effective in 54.6% of simulations under the PegIFN+RBV regimen; at a WTP threshold of three times per-capita GDP, early treatment at stage F0 was cost-effective in 87.4% of simulations under the PegIFN+RBV regimen (Supplementary Figure S6).
Compared with the other strategies, treatment at the acute stage using DCV+ASV was the cost-effective strategy, with a probability of 100% at a WTP threshold of one times per-capita GDP (Figure 3). When only the PegIFN-based regimen was available, treatment at the acute stage was again the cost-effective option compared with the other strategies, with a probability of 88.8% at the one-times threshold and 98.9% at the three-times threshold (Figure 3).

Discussion
This study used a Markov model to assess the cost-effectiveness of treating acute HCV and of early treatment for chronic HCV, under the DCV+ASV and PegIFN-based regimens, respectively. It demonstrated that treatment at the acute stage, compared with deferring until the chronic stage, was highly cost-effective or cost-saving for both regimens. At a threshold of one times China's per-capita GDP, early treatment at the F0 stage was cost-effective compared with delayed treatment at the F3 stage using DCV+ASV, but not when using the PegIFN-based regimen. At a threshold of three times per-capita GDP, early treatment at F0 was cost-effective for both regimens. This provides new evidence for improving current treatment guidelines, which suggest deferring treatment of an acute infection to the chronic stage. This was the first study to include the risk of HCV reinfection in a model assessing the cost-effectiveness of early vs. delayed treatment of chronic HCV in developing countries. Treatment at the acute stage was the most cost-effective option, especially using DCV+ASV. In line with a study conducted in the United States [5], treatment at the acute stage was highly cost-effective and cost-saving regardless of the reinfection rate or the costs of treatment.
In fact, our estimates of the efficacy and costs of acute treatment were likely conservative, and treating acute HCV in developing countries may be even more cost-effective than we predicted. Acutely infected individuals should therefore not be denied treatment merely because spontaneous clearance of the virus may occur in some patients. It may be time to revise treatment guidelines to recommend treating acute HCV rather than deferring treatment to the chronic stage, regardless of the population (e.g., PWID or the general population), the regimen (DAAs or PegIFN-based), and the setting (resource-limited or resource-abundant). However, early diagnosis is hard to achieve, especially for PWID, because of the discrimination, criminalization, and stigma associated with drug use [49]. Optimizing the impact of effective treatment may require further interventions to facilitate access to early HCV detection, including promoting health awareness, addressing discrimination and stigma, and regular testing for HCV [50,51]. Consistent with previous cost-effectiveness studies of treating chronic HCV among PWID in developed countries [18,19,23,26], we found that early treatment at F0 was slightly more effective and more expensive than delayed treatment at the F3 stage. Whether early treatment was cost-effective compared with delayed treatment depended on the national per-capita GDP. For example, in Australia, early treatment at F0 was considered cost-effective compared with delayed treatment at F3 at a threshold of AUD$50,000 per QALY, whether using DAAs [19] or a PegIFN+RBV regimen [20]. In this study, at a threshold of one times China's per-capita GDP (US$9765 per QALY), early treatment at F0 compared with delayed treatment at F3 was considered cost-effective using DCV+ASV but not using the PegIFN-based regimen. This result can provide evidence for China and other resource-limited countries to optimize the allocation of medical resources: in resource-limited settings, especially low- and middle-income countries where DAAs are not available, prioritizing treatment of chronic patients with advanced-stage fibrosis may be the better option. Moreover, this study confirmed that the cost-effectiveness of early treatment of chronic patients was sensitive to two model inputs: the reinfection rate after clearing the virus and the reduction in reinfection from treatment. The lower the reinfection rate, the more cost-effective "treat at F0" was. The reduction rate reflects the treatment's potential for reducing the risk of reinfection and secondary transmission from other infected individuals [18]; we found that as this potential improved, "treat at F0" became more cost-effective. This also suggests that expanded access to HCV treatment should be combined with harm-reduction programs such as needle exchange and opiate substitution treatment, since these complement HCV treatment by reducing reinfection risk for PWID [18]. The cost-effectiveness of early treatment of chronic patients was also subject to treatment costs, but early treatment remained highly cost-effective even when the weekly cost of DCV+ASV was US$431.4. The price of highly effective DAAs has in fact decreased substantially [18], so early treatment using DAAs may be even more cost-effective in the future. This study has some limitations.
First, some model inputs were obtained from literature published worldwide, which may not reflect China-specific data. Second, our model focused on an overall simulation of population-level natural history, so individual heterogeneity was represented only by varying some parameters in sensitivity analyses. Third, the model did not consider re-treatment of patients with poor response. Fourth, we did not consider patients with other genotypes, nor other DAAs approved in China, such as ombitasvir/paritaprevir/ritonavir+dasabuvir for genotype 1b and sofosbuvir+velpatasvir/daclatasvir for all genotypes [28]; the efficacy and costs of these regimens are similar to DCV+ASV but their treatment courses are shorter, so the cost-effectiveness of other DAA regimens for treating HCV in PWID should be consistent with that of DCV+ASV. Fifth, indirect medical costs were not considered, which may overestimate the cost-effectiveness of HCV treatment in PWID. Finally, our model considered only patients mono-infected with HCV, excluding those coinfected with HBV or HIV. Despite these limitations, we believe the conclusions would not change.

Conclusions
In conclusion, treatment of acute HCV infection was highly cost-effective and cost-saving compared with deferring treatment to the chronic stage, for both the DCV+ASV and PegIFN-based regimens. For patients already chronically infected, early treatment with the DCV+ASV regimen was highly cost-effective. In resource-limited settings where DCV+ASV or other DAAs are not available, prioritizing treatment of those with advanced-stage fibrosis may be the better option. In the future, real-world studies are needed to confirm and quantify the effects of HCV treatment estimated in mathematical modeling studies. It is also important to further examine the cost-effectiveness of HCV treatment in other developing countries, especially low- and middle-income countries.

Conflicts of Interest: The authors declare no conflict of interest.
Topical Delivery of Flurbiprofen from Pluronic Lecithin Organogel

Topical drug treatment aims to provide a high concentration of the drug at the site of application, so as to avoid the systemic adverse effects associated with oral administration. An organogel is a vehicle base for the delivery of drugs through the dermal and transdermal route. Organogels are formed by specific kinds of small organic molecules that, in many solvents, self-assemble very effectively into three-dimensional networks, thereby turning a liquid into a gel [1]. Their micellar structure can contain both water- and oil-soluble ingredients, and they show excellent drug permeability by diffusion through the lipid intracellular matrix and by slight disorganization of the skin. Pluronic and lecithin have become very popular in the topical delivery of drugs. A number of studies have shown that pluronic lecithin organogels (PLOs) have the unique capacity to deliver drugs through the skin [1,2], including particular medications such as NSAIDs, hormones, antiemetics, opioids, and local anesthetics [3], to a specific site when other routes of administration are not viable. Flurbiprofen, a propionic acid derivative, is an effective antiinflammatory and analgesic recommended in the management of patients with osteoarthritis, rheumatoid arthritis, and ankylosing spondylitis. It has a logP/hydrophobicity of 4.078, a half-life of 4.7-5.7 h, and a molecular weight of 244.261 g/mol. These properties make it a potential candidate for topical delivery.

Flurbiprofen and soya lecithin were received as gratis samples from FDC Ltd, Mumbai and Phospholipid GmbH, Nattermannallee, Germany, respectively. Pluronic F-127 was procured from Sigma Aldrich Chemie GmbH, Steinheim, Germany. Isopropyl palmitate, polyethylene glycol-600, sorbic acid, and potassium sorbate were supplied by Loba Chemie, Mumbai, India. All other chemicals were of analytical grade and used as received.
The various formulations of PLO [4,5] (Table 1) were developed with different compositions. The oil phase was prepared by mixing soya lecithin and sorbic acid in an appropriate quantity of isopropyl palmitate; the mixture was kept overnight at room temperature to dissolve its constituents. The aqueous phase was prepared by dispersing weighed amounts of Pluronic F-127 and potassium sorbate in cold water; the dispersion was stored overnight in a refrigerator for effective dissolution of Pluronic F-127. The next day, the active ingredient flurbiprofen was dissolved in polyethylene glycol-600 (used to solubilize flurbiprofen) and mixed with the lecithin-isopropyl palmitate solution. Finally, the aqueous phase (70%) was slowly added to the oil phase (30%) with stirring at 400 rpm using a mechanical stirrer.

The prepared organogels were evaluated psychorheologically for appearance and feel, for drug content and content uniformity at 247 nm in ethanol, for pH, for viscosity using a Brookfield viscometer, and for in vitro diffusion/permeation using a Keshary-Chien diffusion cell. The drug content of the different organogel formulations was determined against a standard curve of flurbiprofen in ethanol. For this, an accurately weighed 50.0 mg of drug was transferred to a 50 ml volumetric flask, dissolved in ethanol, and the volume made up with ethanol. Two millilitres of this solution were pipetted out and diluted to 100 ml with ethanol; aliquots were then further diluted with ethanol to concentrations of 2, 4, 6, 8, 10, 12, 14, 16, 18, and 20 μg/ml. Absorbances were recorded spectrophotometrically and the standard curve of flurbiprofen in ethanol was plotted at λmax 247 nm. To determine drug content, each formulation (0.5 g) was taken in a 50 ml volumetric flask, diluted with ethanol, and shaken to dissolve the drug. The solution was filtered through Whatman filter paper No. 42; one ml of the filtrate was pipetted out and diluted to 10 ml with ethanol. The drug content was estimated spectrophotometrically using the standard curve at λmax 247 nm.

To assess the pattern of drug release from the formulations, in vitro diffusion studies [4,6,7] were carried out. The formulations were subjected to in vitro diffusion through dialysis membrane-70 (molecular weight cut-off 12,000-14,000 D), with dehaired abdominal skin of Wistar albino rats used as a semipermeable membrane, in a modified Keshary-Chien diffusion cell. The receptor compartment was filled with saline phosphate buffer (0.2 M, pH 7.4) and methanol (90:10); methanol was added to the medium to maintain sink conditions. The whole assembly was maintained at 37±1° and the receptor solution was stirred with a magnetic stirrer at 100 rpm throughout the experiment. Aliquots (1 ml) were withdrawn at regular intervals of 1 h over a period of 8 h and replaced with an equal volume of fresh medium equilibrated at 37±1°. All samples were suitably diluted with medium and analyzed spectrophotometrically at 247 nm for flurbiprofen content. Viscosities [4,6] of the formulated organogels were determined using a Brookfield viscometer with spindle no. 7 (Model: RV DV-E 230) at 25° with a spindle speed of 10 rpm. The pH of the formulated organogels was determined using a pH meter; the electrode was immersed in the organogel and readings were recorded. All formulations showed drug content in the range of 96-99%, indicating uniform distribution of the drug throughout the base.
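As an illustration of the standard-curve calculation described above, the sketch below fits a line to calibration absorbances and back-calculates drug content through the stated dilution scheme (0.5 g in 50 ml, then 1 ml to 10 ml). The absorbance values are invented placeholders, not the study's readings.

```python
# Placeholder absorbances standing in for the study's 247 nm readings.
import numpy as np

conc = np.arange(2, 21, 2)                    # standards, ug/ml in ethanol
absorbance = 0.042 * conc + 0.003             # invented calibration readings
slope, intercept = np.polyfit(conc, absorbance, 1)

sample_abs = 0.401                            # invented sample reading
c_final = (sample_abs - intercept) / slope    # ug/ml in the final dilution
# 0.5 g gel -> 50 ml ethanol, then 1 ml -> 10 ml: overall 50 x 10 factor
drug_ug = c_final * 50 * 10                   # total drug in the 0.5 g sample
print(f"{drug_ug / 1000:.2f} mg flurbiprofen per 0.5 g gel")
```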
The viscosity of all formulations was found to be in the range 2910-3455 poise. The increase in viscosity with increasing lecithin concentration is likely due to the formation of a complex network. The results revealed that the maximum in vitro cumulative percent drug release of flurbiprofen in 8 h was observed for the FL2 formulation. Further increases in lecithin concentration decreased the cumulative percent drug release, which may be due to extensive formation of a network-like structure with very high viscosity. The in vitro diffusion studies also showed that permeation of flurbiprofen through dialysis membrane-70 (fig. 1) was greater than through rat skin (fig. 2). The pH of all formulations was around skin pH, in the range of 5.9 to 6.5. All formulations were smooth in feel and free from grittiness, which increases patient compliance. The data obtained are shown in Table 2. From the above studies it may be concluded that formulation FL2, containing 3% lecithin, is an effective formulation for topical delivery of flurbiprofen, as it showed the highest cumulative percent drug release and drug content.
Avian Use of Rice-Baited Trays Attached to Cages with Live Decoy Blackbirds in Central North Dakota

Abstract: The Compound DRC-1339 Concentrate – Staging-Areas label is approved in North Dakota for use in non-crop staging areas near blackbird roosts. Potential blackbird damage affects sunflower planting patterns and reduces profits. One option for managing damage is to reduce the local blackbird population using DRC-1339 bait. The challenge is to limit hazards to nontarget birds while attracting large numbers of blackbirds. During fall 2007, we assessed the nontarget bird risks of using rice baits on elevated bait trays attached to the tops of decoy traps. During random visits to bait sites, we recorded 968 individual birds of 12 avian species. Blackbirds accounted for 95% of all tray visits. Sparrow species were the most prevalent of the non-blackbirds. Strategic placement of the bait trays near large roosts will be necessary for this technique to be successful. Ultimately, Wildlife Services might use DRC-1339-treated rice baits on bait trays for managing local blackbird damage.

Keywords: avicide, blackbirds, bait trays, decoy traps, DRC-1339, North Dakota, sunflower

Proc. Vertebr. Pest Conf. (R. M. Timm and M. B. Madon, Eds.), Univ. Calif., Davis, 2008, pp. 118-121.

INTRODUCTION
Blackbird depredation of sunflower has been a continuous problem since the 1970s (Otis and Kilburn 1988, Blackwell et al. 2003, Peer et al. 2003). Sunflower growers consistently place blackbirds in the top tier of problems associated with growing sunflower in the northern Great Plains (Kleingartner 2003). Many non-lethal tactics have been employed in an attempt to protect ripening sunflower from foraging flocks of blackbirds (Linz and Hanzel 1997): thinning cattail-choked wetlands to reduce roosting habitat, using pyrotechnics to frighten feeding birds, planting lure plots to draw birds away from commercial plots, applying taste repellents, and adapting cultural methods such as block planting to synchronize ripening are just a few such tactics. The numbers of blackbirds migrating through the northern Great Plains can overwhelm non-lethal techniques, especially if an alternative food source is not available (Avery 2003). One avicide, compound DRC-1339 (3-chloro-p-toluidine hydrochloride), is registered for use in the U.S. and North Dakota (USDA 1993). The avicide is usually mixed with brown rice at a ratio of 1:25 (treated rice kernels to untreated rice kernels), and the rice mixture is normally broadcast on the ground in harvested or ripening crops. Resident and migratory birds are plentiful in ripening sunflower fields, however, posing a potential risk to nontarget species when DRC-1339 is used. One potential method of avoiding nontargets is to put live blackbirds (decoys) in cages in areas devoid of habitat to attract free-living blackbirds to bait trays. The intent is to reduce large concentrations of blackbirds that cannot otherwise be dispersed by non-lethal means. The objective of this study was to identify and quantify the avian species visiting the bait trays. Our goal is to develop an effective and environmentally safe method for managing locally abundant blackbird populations.

METHODS
We based our study site selection on historical knowledge of sunflower planting patterns, crop phenology, and blackbird damage to sunflower in North Dakota. Decoy traps fitted with bait trays were placed on private lands near gravel roads and observed for bird activity.
There were 51 total sites (Figure 1) during the course of the study, in the following counties: Barnes (5), Griggs (5), Nelson (9), Ramsey (8), Stutsman (17), and Walsh (7). We used modified Australian crow traps (decoy traps) made of 2.5×5-cm (1×2-in) woven wire, with 1.6×1.6×2-m (4×4×6-ft) sides and a 0.5-m (1.5-ft) drop box with a single 5-cm (2-in) slit for birds to enter the traps. We attached a 0.6×1.2-m (2×4-ft) plywood roof to the top of each decoy trap. A 5×5-cm (2×2-in) wood rim was placed around the edges of the roof, and a second rim was placed about 12 cm (4.5 in) from the edges to reduce loss of rice to wind dispersal. A small experimental group of traps was built at a height of 1.6 m (4 ft), and one as short as 0.5 m (1.5 ft). We randomly selected half of the plywood roofs and placed 5×10-cm (2×4-in) woven wire guards over the trays to test their efficacy for excluding doves and pheasants. The traps contained captive blackbirds initially captured with mist nets; an average of 5.8 decoy birds (red-winged blackbirds, common grackles, and yellow-headed blackbirds; see Table 1 for scientific names) were maintained in the decoy traps. Fresh food and water were provided as needed by study participants. We randomly selected 50% of the gravel roads located near our observation points and applied untreated brown rice along 1-m-wide strips. Rice was spread at a rate of 900 g (5 cups) per 50 m along the roadside in close proximity to the tray site, and additional rice was added every 5 days at the same rate. Study participants randomly visited the study sites (decoy traps/bait trays) for 1-h intervals throughout daylight hours to record the behavior (perching, feeding), numbers, species (closest determinable taxonomic group), and ages (when possible) of blackbirds and non-blackbirds on the gravel roads and bait trays. The observer parked the vehicle about 50 m from the decoy trap and immediately estimated the number of blackbirds in various habitats (e.g., sunflower, corn, gravel road, trees) within 0.4 km (0.25 mile). After a 10-min quiet period, 1-min counts were made alternating between the gravel road and the bait trays, with 2 min between observations. At the end of the 1-h observation period, the observer again estimated the number of blackbirds within 0.4 km. Binoculars and spotting scopes were used for observations. These data, along with date, time, and weather conditions, were recorded on data sheets printed on rain-resistant paper. We discovered during the first few weeks of the study that predators (raccoons, foxes, weasels, and hawks) could easily access the decoy birds. We tried to reduce predation by retrofitting the sides and bottoms of the cages with small-mesh wire to deter entry; this was somewhat successful but did not solve the problem. Ultimately, we used three strands of electrified smooth-wire fence around the base of each trap, powered by 6-V deep-cycle batteries and fencers, D-cell fencers, or solar-charged 6-V fencers. This measure of exclusion proved highly effective. Where cages were set side by side, one cage was used as a capture site and the other as a holding cage, but for the most part, traps became holding cages for decoy birds. We maintained about ½ cup (90 g) of rice on the trays.

(Figure 2. Comparison of peak blackbird and non-blackbird activity at rice-baited tray sites in central North Dakota between 15 August and 12 October 2007.)
When blackbird use was high, rice levels were increased to 1 cup (180 g) per tray. The rice quantity was checked at least every 3 days.

RESULTS
We observed the bait stations for 524 h between 15 August and 12 October, with 156 h of observation in Nelson, Ramsey, and Walsh counties and 368 h in Stutsman, Griggs, and Barnes counties. Of the original 51 sites, 22 were visited only by blackbirds, 4 only by non-blackbirds, and 18 by no birds at all. The two sites with the most abundant blackbirds, and no non-blackbirds present, averaged 9.8 and 5.6 birds per observation visit. The average daily use of tray sites by blackbirds increased until early October. This trend was not observed in non-blackbirds, whose peak average of 1 non-blackbird per hour of observation occurred on 25 August 2007; the core non-blackbird use of trays occurred between 21 August and 29 August 2007 (Figure 2). There were 968 recorded individual visits to trays by 12 different species, plus a few birds identified only to family (Table 1). Of these, 920 were individual blackbird visits: 851 red-winged blackbirds, 12 yellow-headed blackbirds, 10 European starlings, 30 brown-headed cowbirds, and 17 common grackles. Blackbirds and granivorous non-blackbirds accounted for 95% and 4% of tray visits, respectively. Sparrow species were the most prevalent visitors, accounting for 94% of the non-blackbirds. When blackbirds visited trays, 84% of them fed on the rice, whereas 54% of non-blackbirds ate rice (Figure 3).

DISCUSSION
Our first field season yielded invaluable experience that will be used to improve the efficacy and environmental safety of the bait tray-caged decoy bird concept. First, we plan to place bait sites only around large wetland roosts, preferably near trees. Blackbirds loafing around the wetlands appear more likely to visit the bait stations, and perch sites give the birds an opportunity to observe the trays and decoys. We speculate that sites near sunflower fields were less active than sites near cattail roosts because the birds prefer feeding in sunflower over visiting the bait trays. Second, we will use electric fencing to deter ground predators at all sites, reducing the labor required to replenish the cages with decoys. Third, we plan to clear vegetation within about a 20-m radius around each bait site; we reason that small granivorous birds like to feed on the ground in dense vegetation to avoid predators. Fourth, we plan to reduce the tray heights from 2 m to 1.6 m so that free-living blackbirds are less exposed to avian predators and high winds. Our limited observations suggest that lower tray heights result in birds landing on the ground around the tray with little use of the actual bait tray. Fifth, we plan to group cages to create the atmosphere of a large feeding flock. Additional data are needed before the usefulness of this bait concept can be assessed with reasonable confidence. We caution that the use of an avicide alone likely will not solve the sunflower depredation problem; rather, growers must be encouraged to develop an integrated pest management plan that includes roost management, bird harassment, and early harvest.
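The reported visit shares can be verified directly from the counts given in the Results:

```python
# Totals taken from the counts reported above (Table 1 tallies).
blackbirds = 851 + 12 + 30 + 17  # red-winged, yellow-headed, cowbird, grackle
starlings = 10                   # grouped with the blackbird tally in the text
total_visits = 968
share = (blackbirds + starlings) / total_visits
print(f"blackbird share of tray visits: {share:.0%}")  # -> 95%
```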
Research on Phase Combination and Signal Timing Based on Improved K-Medoids Algorithm for Intersection Signal Control

Aiming at the problem of intersection signal control, a method of traffic phase combination and signal timing optimization based on an improved K-medoids algorithm is proposed. Firstly, the improvement of the traditional K-medoids algorithm lies in two aspects, namely, the selection of the initial medoids and of the parameter k, which are applied to the cluster analysis of historical saturation data. The algorithm determines the initial medoids based on a set of probabilities calculated from the distances, and determines the number of clusters k based on an exponential function, weight adjustment, and the elbow idea. Secondly, a phase combination model is established based on the saturation and green split data, and the signal timing is optimized through a bilevel programming model. Finally, the algorithm is evaluated at an intersection in Hangzhou, and the results show that it can reduce the average vehicle delay and queue length and improve the traffic capacity of the intersection in the peak hour.

Introduction

With the rapid development of urban construction and socioeconomy, traffic congestion, one of China's urban diseases, not only brings tremendous pressure to urban traffic management but also seriously affects the harmonious development of cities. Many modern transportation facilities and applications can benefit from better-performing signal timing schemes [1][2][3][4]. For example, space-time road resources can be allocated more reasonably, the accuracy of traffic speed prediction can be improved [2], and optimized signal cycle times and green split schemes can support better-coordinated control [4]. In [5, 6], the authors studied the application of mobile crowdsourcing (MCS) in smart cities. In [7, 8], the authors integrate geographic and temporal influences into points-of-interest (POI) recommendations to help people find points of interest.

In recent years, several algorithms have been presented in the literature for traffic signal phase combination and timing optimization. In [9], the authors studied a dynamic prediction traffic signal control framework for a single intersection and optimized the signal timing according to the predicted arrival flow. In [10], a queuing and dissipation model of the intersection traffic flow was presented, which provided a theoretical basis for optimizing the intersection phases and timing. In [11], the authors considered an adaptive traffic signal control method based on fuzzy logic. This method optimized the phase duration and phase sequence; the results showed that the average queue length, the maximum queue length, and the parking rate were significantly reduced, but only lower queue lengths were considered. In [12], fog computing was used to process traffic data, and a phase combination method based on a genetic algorithm was presented. The authors in [13] studied dynamic programming algorithms to optimize signal timing and phases, thereby reducing average vehicle latency. In [14], the Artificial Bee Colony algorithm was adopted to optimize the signal cycle time and the green split, reducing the average vehicle delay and the average queue length, but the algorithm needed to obtain and process the vehicle speed online.
In [15], the authors considered a dynamic phase control method based on traffic flow, but it needed real-time detection and calculation of road conditions, resulting in poor practical applicability. In [16], a clustering algorithm was applied to process vehicle motion information as the basis for subsequent optimization, but it only optimized the signal timing, excluding phase combination. In [17], a traffic signal segmentation algorithm based on two-dimensional clustering was presented. It matched the best timing scheme to the current traffic conditions through cluster analysis; however, its intersection traffic flow model cannot distinguish between left-turn and straight vehicles. In [18], the authors studied an interval-data-based K-means clustering method whose results can accurately describe the trend of traffic state evolution at an urban intersection. In [19], the K-means clustering algorithm was used to group traffic flows and grade traffic conditions, providing a theoretical basis for matching the most suitable traffic signal control scheme in different situations. In [20], the author studies a dynamic traffic control method that predicts congestion through clustering. In [21], a traffic signal control method based on the K-means clustering algorithm was presented, with the number of clusters fixed at two. The authors in [22] studied an improved affinity propagation (AP) clustering algorithm, which provided efficient and accurate traffic state information for traffic signal control and effectively reduced the average waiting time. In [23], the authors studied a K-means clustering method to optimize the switching times of a time-of-day (TOD) control scheme, but the number of clusters needed to be specified in advance, which largely affected the effectiveness of the method. Similarly, the authors in [24] used Kohonen clustering and K-means clustering to optimize TOD breakpoints and showed that K-means performed better; however, it was still necessary to specify the number of clusters and the initial cluster centers in advance, which made it easy to fall into a local optimum.

The existing research mainly has the following shortcomings:
(1) the intersection traffic flow model is established without considering all of the flow directions;
(2) the practical value of solutions requiring online data acquisition and frequent signal switching is not high;
(3) the number of clusters depends heavily on prior or empirical knowledge.

To solve the problems above, this paper proposes a traffic phase combination and signal timing optimization method based on an improved K-medoids algorithm. Firstly, the improved K-medoids algorithm is used to cluster the historical saturation data, which selects the number of schemes k more quickly and accurately. Then, a phase combination model is established, since the k medoids correspond to k pairs of saturation and green split data; flow directions with similar traffic demand can be combined to improve the utilization of green time. Finally, a bilevel programming model is used to optimize the signal cycle time and the green split of each phase, so that the timing scheme is further optimized on top of the phase combination. After clustering, the medoids compose a scheme library, with each medoid corresponding to a traffic scheme. In experiments, we choose an appropriate traffic scheme according to the Euclidean distance between the actual traffic saturation and the medoids, as sketched below.
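A minimal sketch of this scheme-matching step (names are illustrative; the medoids and their associated timing schemes are assumed to come from the clustering and optimization stages described later):

```python
import numpy as np

def match_scheme(saturation, medoids, schemes):
    """Return the timing scheme attached to the medoid nearest (in
    Euclidean distance) to the observed saturation vector."""
    d = np.linalg.norm(medoids - saturation, axis=1)  # distance to each medoid
    return schemes[int(d.argmin())]

# medoids: (k, n) array from clustering; schemes: list of k timing schemes
```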
The paper is organized as follows: Section 2 introduces the traditional K-medoids clustering algorithm and its improvement. Section 3 designs the phase combination and signal timing optimization algorithm. Section 4 provides experimental results and comparisons with the traditional K-medoids algorithm. Section 5 provides conclusions and describes directions for future research.

Improved K-Medoids Algorithm

In this section, we first introduce the traditional K-medoids algorithm; then, to find better initial medoids and an appropriate parameter k, an improvement is introduced. Finally, we apply the improved K-medoids algorithm to partition the traffic saturation dataset into k clusters, where each cluster corresponds to one set of traffic scheme.

2.1. Traditional K-Medoids Algorithm. Clustering is an unsupervised learning algorithm that partitions the original data into several clusters, where the data in the same cluster are similar to each other but different from the data in other clusters. K-medoids is a partition-based clustering algorithm. Compared with K-means clustering, it is less sensitive to outliers. Among the many K-medoids algorithms, partitioning around medoids (PAM) is one of the most classical and powerful [25].

The K-medoids algorithm first randomly selects k representative data points as the initial medoids, with each medoid corresponding to one cluster. Secondly, the Euclidean distance is applied to calculate the distance between every data point and the chosen medoids, and each data point is assigned to the most similar medoid. Thirdly, a new medoid is found in each cluster that minimizes the within-cluster criterion function. The algorithm stops when all of the medoids are equal to the previous ones; otherwise, each data point is reassigned to the nearest medoid and k new clusters are generated.

The Euclidean distance d(x, y) is used to measure the similarity between the data points and the medoids, and can be calculated as follows:

$$d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2},$$

where x and y are both n-dimensional data objects. The within-cluster criterion function can be calculated as

$$E_i = \sum_{b_j \in B_i} d(b_j, c_i),$$

where B_i is the cluster after clustering, b_j is a data point in the cluster B_i, and c_i is the medoid of the cluster B_i. The overall criterion function is described as follows:

$$E = \sum_{i=1}^{k} E_i,$$

where k is the number of clusters.

2.2. The Improvement of K-Medoids Algorithm. For the K-medoids clustering algorithm, the number of clusters and the initialization have a great influence on the clustering process and results. In [26], a density peak clustering algorithm was proposed that can select medoids and confirm the correct number of clusters. In [27], the authors studied a K-medoids clustering algorithm based on a subset of candidate medoids and a gradually increasing number of clusters, thereby improving the clustering performance. In order to reduce the negative impact when the initial medoids have a low dispersion degree, this paper proposes an initial-point probability selection method based on the Euclidean distance. In addition, in order to reduce the dependence on manually selected initial medoids and avoid an excessive gap between clusters, this paper proposes an optimization for selecting the parameter k based on an exponential function, weight adjustment, and the elbow idea.
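As a concrete reference for these definitions, a minimal Python sketch of the assignment step and the PAM criterion E (array names are illustrative):

```python
import numpy as np

def assign_and_score(data, medoids):
    """Assign each point to its nearest medoid and return the cluster
    labels together with E, the sum of within-cluster distances."""
    # pairwise Euclidean distances d(b_j, c_i)
    dists = np.linalg.norm(data[:, None, :] - medoids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)                  # nearest medoid per point
    E = dists[np.arange(len(data)), labels].sum()  # criterion E
    return labels, E
```

A full PAM iteration would then try swapping each medoid with a non-medoid point and keep the swap whenever E decreases.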
Improved Method for Selecting Initial Medoids. After randomly selecting a point of the sample data as the first medoid c_1, the Euclidean distance d(b_h, c_i) is applied to calculate the distance between each point b_h and its nearest medoid c_i, and the probability p_h that point b_h will be selected as the next cluster medoid can be calculated as

$$p_h = \frac{d(b_h, c_i)}{\sum_{b_j \in B} d(b_j, c_i)},$$

where B is the dataset. The probability set P is then obtained as

$$P = \{p_1, p_2, \ldots, p_n\},$$

where n is the number of samples in the dataset. The roulette wheel method is used to select the cluster medoids c_i (i ≥ 2) (see Figure 1):

Step 1. Generate a random number r in [0, 1); if r belongs to the interval [p_1 + p_2 + ⋯ + p_{i−1}, p_1 + p_2 + ⋯ + p_{i−1} + p_i) of P, point b_i becomes the second cluster medoid c_2.

Step 2. Recalculate the probabilities with which each point in the dataset will be selected as the next medoid.

Step 3. Select the next medoid according to the probability set P and the roulette wheel method.

These steps are repeated until k medoids are selected. The purpose is to make the initial medoids more discrete, and thus closer to the real cluster centers. This reduces the number of iterations and mitigates the problem of becoming trapped in a local optimum.

Improved Method for Selecting the Number of Clusters. The traditional within-cluster criterion function is the sum over all data within the cluster, which can make a big difference among clusters and lead to uneven classification. To settle this problem, this paper uses the exponential function e^x in the criterion function. The within-cluster criterion function becomes

$$S_i = \sum_{b_j \in B_i} e^{d(b_j, c_i)}.$$

In order to avoid exponential explosion, a weight coefficient t is employed, and the criterion function S is calculated as follows:

$$S = \frac{1}{t} \sum_{i=1}^{k} S_i.$$

With this optimization, the criterion function S can be calculated for different k. As the parameter k increases, S decreases. According to the elbow idea, S drops dramatically at the beginning, then reaches an elbow, and finally the curve of S turns into a plateau. The value of k corresponding to the elbow is regarded as the optimal number of clusters.

Clustering with Saturation Data. Traffic saturation data is a collection of saturations at intersections; a single piece of data can be described as

$$x = (x_1, x_2, \ldots, x_n),$$

where n is the number of intersections. The improved K-medoids algorithm described in Section 2.2 is then applied to the traffic saturation data, dividing the data into k clusters, with the initial cluster medoids selected according to the distance probabilities p_h. Phase and timing optimization can then be performed according to the cluster medoids, and each cluster corresponds to one set of traffic scheme, which means there will be k sets of initial traffic schemes.
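A compact sketch of both improvements, following the formulas reconstructed above (the distance-proportional probabilities p_h and the t-weighted exponential criterion S; the elbow is then read off the S-versus-k curve):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_medoids(data, k):
    """Roulette-wheel initialization: points far from the already-chosen
    medoids are proportionally more likely to become the next medoid."""
    medoids = [data[rng.integers(len(data))]]      # first medoid: random
    while len(medoids) < k:
        d = np.min([np.linalg.norm(data - m, axis=1) for m in medoids], axis=0)
        p = d / d.sum()                            # p_h from the distances
        medoids.append(data[rng.choice(len(data), p=p)])
    return np.array(medoids)

def criterion_S(data, medoids, t):
    """Exponential within-cluster criterion, scaled by the weight t."""
    d = np.linalg.norm(data[:, None, :] - medoids[None, :, :], axis=2)
    return np.exp(d.min(axis=1)).sum() / t         # S = (1/t) * sum e^{d}

# elbow scan: compute S for k = 2, 3, ... and pick the k where the curve bends
```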
Phase Combination and Signal Timing

In order to improve the adaptability of the traffic schemes for matching different traffic conditions, we establish a phase combination model and optimize the signal timing using a bilevel programming model. Two traffic flows conflict if there is a collision point between the vehicle travel paths of the two directions. For example, the traffic flows in the east-west and south-north directions conflict, while the traffic flows in the east-west and west-east directions are compatible. The conflict matrix can be constructed as follows:

$$\Phi = [\varphi_{ij}]_{n \times n},$$

where φ_{ij} indicates whether flow directions i and j conflict: if not, the value is 0; otherwise, it is 1.

The distance matrix is used to represent the difference between traffic flows, and it is constructed from the saturation of the flow directions, the green split data, and the conflict matrix, where the element d_{ij} of the matrix is calculated from y_i, the traffic flow ratio of flow direction i, which reflects the traffic demand unaffected by the signal control scheme; x_i, the saturation of flow direction i; and λ_i, the initial green split of flow direction i. Since the distance between flow directions i and j is the same as the distance between flow directions j and i, the distance matrix is symmetric, that is, d_{ij} = d_{ji}.

To ensure the balance of traffic flows in each phase, we optimize the phase combination according to the distance matrix between flow directions to make the combination more rational. For a typical crossroad, four-phase schemes are usually used, each phase consists of two flow directions, and traffic in the same flow direction must be released exactly once in one cycle. Considering the symmetry of the distance matrix and the all-zero values on the main diagonal, only the lower triangle needs to be processed. Algorithm 1 shows the optimization of the phase combination.

If the distance between two flows is equal to or greater than 1, these two flows are physically conflicting. Hence, we select all the flow pairs with distances less than 1 to form the D_first vector. If one scheme in D_first contains all flow directions and each direction c_i appears only once, it is saved as D_each into D_all. Then, we calculate the sum of the distances in each D_each and insert it into S_all as S_each, and the index z of the minimum S_min in S_all is selected. Finally, we choose the optimal scheme D_final according to z in D_all.

For example, consider two schemes (see Figure 2). Scheme A takes the east left movement and east through movement as one phase, and the west left movement and west through movement as another phase. Scheme B takes the east left movement and west left movement as one phase, and the east through movement and west through movement as another phase. The distances of the above four combinations are 0.2, 0.1, 0.3, and 0.4, respectively. Scheme A is chosen because the sum of the first two values is smaller than that of the last two.

Algorithm 1: Optimization of the phase combination.
Input: the distance matrix D. Output: the final phase combination scheme D_final.
1. For i = 1 to n do
2.   For j = 1 to (i − 1) do
3.     If d_{ij} < 1, then add the pair (i, j) to D_first
4. For each scheme in D_first that contains all flow directions exactly once, save it as D_each into D_all
5. For each D_each in D_all, insert the sum of its distances S_each into S_all
6. z ← the index of the minimum S_min in S_all
7. D_final ← the scheme at index z in D_all
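Under the reading of Algorithm 1 given above, the optimal scheme can be found by a small exhaustive search over disjoint, non-conflicting pairs; a sketch (D is the symmetric n×n distance matrix, n even):

```python
import numpy as np

def phase_combination(D):
    """Choose the partition of the n flow directions into pairs (phases)
    that minimizes the total within-pair distance; pairs with d >= 1
    are physically conflicting and excluded."""
    n = D.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(i) if D[i, j] < 1]
    best, best_cost = None, float("inf")

    def search(chosen, used, cost, start):
        nonlocal best, best_cost
        if len(used) == n:                      # every direction used once
            if cost < best_cost:
                best, best_cost = list(chosen), cost
            return
        for idx in range(start, len(pairs)):
            i, j = pairs[idx]
            if i not in used and j not in used:
                search(chosen + [(i, j)], used | {i, j},
                       cost + D[i, j], idx + 1)

    search([], set(), 0.0, 0)
    return best, best_cost
```

For the two-scheme example above, the search returns scheme A, since 0.2 + 0.1 < 0.3 + 0.4.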
Traffic Signal Timing Bilevel Programming Model. The bilevel programming model is a system optimization model with a two-tier hierarchical structure. The upper and lower levels have their own objective functions and constraints [28, 29]. The objective function and constraints of the upper-level problem are not only related to the upper-level decision variables but also depend on the optimal solution of the lower-level problem, while the optimal solution of the lower-level problem is affected by the upper-level decision variables. We establish a traffic signal timing optimization algorithm based on the bilevel programming model; its framework is shown in Figure 3.

Figure 3: The framework of the traffic signal timing optimization algorithm (input the initial timing scheme; optimize the signal cycle time; optimize the green split of each signal phase; if the scheme is optimal or the iteration limit is reached, output the current timing scheme, otherwise take the current timing scheme as the initial scheme).

Establishment of the Bilevel Programming Model. The signal cycle time is the key control parameter that determines the quality of traffic signal control in signal timing, and the saturation can reflect the rationality of the signal cycle time to some extent. We establish the upper-level programming model with the saturation as the decision target; its objective drives the average phase saturation x̄ toward the target average saturation x̂.

Under the constraint of a fixed signal cycle time, the mean square error (MSE) of the saturation is used to evaluate the rationality of the green split distribution. With the MSE, the lower-level programming model can be established as

$$\sigma = \frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})^2,$$

where N is the number of signal phases. The saturation of each phase can be calculated as

$$x_i = \frac{f_i}{q_i \lambda_i},$$

where f_i is the arrival traffic flow for phase i, q_i is the average saturated flow over the flow directions in phase i, and λ_i is the green split of phase i.

Solution of the Bilevel Programming Model. A single-step action set for signal cycle time changes is designed to obtain the optimal signal cycle time of the upper-level programming model. The action set can be expressed as follows:

$$action1 = [a_1, -a_1, 0],$$

where a_1, in seconds, is the adjustment step size for the cycle time. The three elements of action1 represent three operations: addition, subtraction, and invariance, respectively. For example, if the initial signal cycle time is T and action1 is [a_1, −a_1, 0], the signal cycle time after each adjustment according to action1 will be [T + a_1, T − a_1, T].

Algorithm 2: Signal cycle time optimization.
Input: the initial signal cycle time T_0; the average arrival traffic flow of each phase f = [f_1, f_2, …, f_n]; the average saturated flow of each phase q = [q_1, q_2, …, q_n]; the initial green split of each phase λ = [λ_1, λ_2, …, λ_n]; and the action set action1.
Output: the optimal signal cycle time T_f.
1. w_0 ← the index of 0 in action1
2. w ← any index ≠ w_0
3. While w ≠ w_0 do
4.   For each a_i in action1, evaluate the candidate cycle time T + a_i (with the green split re-optimized by the lower level) and record its objective value J in J_all
5.   w ← the index of J_min in J_all; apply the corresponding action
6. End while; output the current cycle time as T_f

Algorithm 3 shows the process of green split optimization. Given the premise that the signal cycle time is fixed, the sum of all elements in the action matrix is zero. According to the initial green split scheme, the initial timing scheme is obtained by multiplying by the signal cycle time. Each action of Equation (16) is executed in turn, and the σ value of the corresponding action is saved into σ_all according to Equations (13) and (14). We then select the minimum σ_min in σ_all; if its corresponding action is not [0, 0, 0, 0], the action is taken, and the green timing scheme after execution is used as the initial scheme g_0 for the next iteration. The algorithm loops until the action corresponding to σ_min is [0, 0, 0, 0]; the green time of each signal phase is then converted into green splits, and the optimal green split scheme λ_f is output.

The green split optimization is completed in the lower-level programming model and fed back to the upper level. In the upper level, the signal cycle time is optimized heuristically and iteratively under the restriction of the green split, until the scheme is optimal or the number of iterations reaches the upper limit.
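A sketch of the lower-level loop, assuming the zero-sum actions are realized as moving one green-time step a_2 between two phases (one plausible instantiation of the paper's action matrix; all names are illustrative):

```python
import numpy as np

def optimize_green_split(T, f, q, g0, a2=1.0):
    """Minimize the MSE of the phase saturations x_i = f_i / (q_i * g_i / T)
    by greedy zero-sum green-time moves at a fixed cycle time T."""
    g = np.asarray(g0, dtype=float).copy()

    def sigma(green):
        x = f / (q * green / T)                 # phase saturations
        return np.mean((x - x.mean()) ** 2)     # MSE criterion

    while True:
        best_s, best_g = sigma(g), None
        for i in range(len(g)):                 # phase i gains a2 s of green
            for j in range(len(g)):             # phase j loses a2 s of green
                if i == j:
                    continue
                trial = g.copy()
                trial[i] += a2
                trial[j] -= a2
                if trial[j] > 0 and sigma(trial) < best_s:
                    best_s, best_g = sigma(trial), trial
        if best_g is None:                      # the zero action is optimal
            return g / T                        # green splits lambda_f
        g = best_g
```

The upper level would wrap this in a similar loop over action1, nudging T by ±a_1 and re-running the green-split optimization at each candidate cycle time.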
Simulation Experiment and Result Analysis

According to the traffic laws and regulations in our country, the right-turn movement can pass the intersection at any time without being controlled by the signal light; thus, only left-turn and straight vehicles are considered in the simulation. Figure 5 shows the simulation structure of the intersection.

The traffic flow data were provided by the traffic control department of Xiaoshan District, Hangzhou, from 7:00 a.m. to 9:00 a.m. on November 20th, 2018. The original data were the traffic flow per signal cycle and the timing scheme of the corresponding time, which were processed into a saturation dataset for clustering and subsequent timing optimization. The traffic flow data are listed in Table 1, converted into the hourly traffic flow at the inlet; the through flow of each flow direction is also recorded. In this table, "E," "S," "W," and "N" refer to eastbound, southbound, westbound, and northbound, respectively, while "L" and "S" denote left-turn and straight vehicles. For example, "LE" represents the traffic flow of the left turn in the eastbound movement.

Table 1: Traffic flow of each flow direction.

Time            | LE   | SE   | LS   | SS   | LW   | SW   | LN   | SN
7:00            | 128  | 242  | 168  | 476  | 92   | 186  | 266  | 368
7:30            | 202  | 364  | 150  | 980  | 96   | 198  | 320  | 798
8:00            | 124  | 240  | 184  | 758  | 88   | 164  | 238  | 662
8:30            | 138  | 275  | 143  | 752  | 98   | 150  | 282  | 760
9:00            | 118  | 220  | 186  | 576  | 102  | 148  | 224  | 448
Saturation flow | 1529 | 1641 | 1347 | 2360 | 1286 | 1606 | 1722 | —

The signal timing scheme generated by the improved K-medoids clustering algorithm is compared with the scheme generated by the traditional one to ensure the fairness of the experiment. In order to avoid exponential explosion and to keep the criterion functions E and S in the same order of magnitude, the weight coefficient t is set to 11000. Additionally, we set the target average saturation x̂ to 70 according to the actual intersection traffic demand. In order to avoid missing the optimal timing scheme due to an overlarge step size, the signal cycle time adjustment step a_1 and the green time adjustment step a_2 are both set to 1. In addition, the proposed algorithm is compared with the fixed phase scheme and with the traffic flow and vector angle-based optimization scheme [17].

Analysis of Results. The criterion functions for different k using the traditional and improved K-medoids algorithms, expressed by E and S respectively, are shown in Figure 6. As k increases, the criterion functions decrease and the rate of decline stabilizes. In both cases, the optimal k is 3.

Table 2 shows the different performances of the traditional and improved K-medoids algorithms. As for the number of clusters, across different runs the traditional K-medoids may reach the elbow anywhere between k = 3 and k = 6, which is ambiguous to identify, while the improved K-medoids always reaches the elbow at k = 3. In addition, the improved K-medoids runs faster than the traditional version, which may be because the optimized selection of the initial medoids reduces the number of iterations.

Average vehicle delay and average queue length are used to evaluate the performance of the proposed algorithm. Figures 7 and 8 show the curves of the optimized phase and timing schemes under different conditions, compared with fixed-phase schemes that optimize only the timing and with vector angle-based schemes. The outperformance of our proposed method can be seen in all time periods.
Table 3 shows the averaged values of the above two evaluation indexes. The proposed method outperforms the fixed phase method with improvements of 2.462 s (7.07%) in average vehicle delay and 1.542 m (11.38%) in queue length, and it also shows improvements of 3.924 s (10.81%) and 1.656 m (12.16%) over the traffic flow and vector angle-based optimization scheme.

Table 4 shows the delay comparison of the three optimization schemes in SS, SW, and LE. Our method considerably improves the average vehicle delay in each flow direction compared with the traffic flow and vector angle-based optimization scheme. In our method, the average vehicle delays of SW and LE differ from those of the fixed phase method because the phase of SW and LE has changed. Compared with the fixed phase method, the average vehicle delay of LE is reduced in our method, but that of SW is increased. The main reason is that our method improves the overall traffic capacity of the intersection rather than a single flow direction.

Conclusions

In this paper, we optimized the traditional K-medoids clustering algorithm in terms of the number of clusters and the selection of the initial medoids. In order to adaptively match the changes of traffic flow in different time periods, a phase combination optimization model is established to optimize the phases, and a bilevel programming model is used to optimize the signal timing, maximizing the utilization of green time. The proposed algorithm optimizes each flow direction; since flow saturations may differ even when the overall situation is similar, we will study the differences among flow saturations in future work to achieve the optimal control effect at the intersection.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.
Droplet-Dispensed Graphene Oxide as Capacitive Sensing Elements for Flexible Pressure-Pulse Sensing Array

We report a novel flexible capacitive pressure-pulse sensor array developed by integrating droplet-dispensed graphene oxide (GO) sensing elements and flexible electronics. The utilization of droplet-dispensing technology enables the rapid fabrication of multiple capacitive sensing elements while producing sensitive pressure sensors with excellent repeatability. The dispensed droplet volume (GO aqueous dispersion) ranged from around 33.5 to 65.4 pL, with diameters ranging from 40 to 50 μm. The size (i.e., footprint and dielectric material thickness) of a sensing element can be controlled by the total GO dispersed per droplet. The fabrication process and preliminary characterization of these GO capacitive sensors are discussed in this paper. Thus far, we have shown that these sensors have a sensitivity of ∼10−3 kPa−1, with the relative permittivity of the dispensed GO being ∼6 (measured at a frequency of 600 kHz). We have also demonstrated that the printed sensing elements can be used for human wrist pulse sensing. Hence, the technology described in this paper could potentially be used in wearable electronics for healthcare applications.

I. INTRODUCTION

Flexible electronics have become crucial because of their capability to monitor human health and physiological signals. However, the fabrication of highly reliable, sensitive, and repeatable sensors still relies on microelectromechanical system (MEMS) fabrication processes, which require sophisticated technology and MEMS knowledge in addition to expensive machines. Apart from the complicated fabrication processes, the materials used in MEMS fabrication mainly comprise silicon and metals, such as copper and titanium, which intrinsically increase the hardness and decrease the flexibility of the sensors. 3D printing technology, the counterpart of MEMS fabrication, allows quick prototyping and low-cost but effective sensor fabrication for flexible electronics. In the first part of the current literature review, MEMS fabrication and 3D printing technology are compared from different aspects to justify our use of 3D printers as our major tool. Pressure sensors involve many sensing mechanisms, including piezoresistive, piezoelectric, capacitive, and field-effect transistor (FET). These sensors are compared below to identify the most suitable sensing mechanism for 3D printing of pressure sensors with improved sensitivity.

II. COMPARISON BETWEEN DIFFERENT SENSING MECHANISMS OF PRESSURE SENSORS

A pressure sensor is a device that perceives a pressure signal and converts it into an electric signal that can be read by humans through certain mechanisms. Based on their working mechanisms, pressure sensors can be categorized as piezoresistive, piezoelectric [1], optical fiber, and capacitive. These pressure sensors are commonly utilized in industry and research. In particular, the capacitive pressure sensor has many advantages over other pressure sensors. It has low power consumption because no DC current flows through the sensor element; it uses current only when a pressure signal passes through the circuit, thereby measuring the capacitance. Passive sensors have an external reader, which provides a signal to the circuit; hence, a power supply is not needed. This feature is good for applications that require low power, such as e-skin sensors or remote monitoring.
A capacitive sensor is a mechanically simple device with a stable output, and it is compatible with complicated environments. Moreover, it has a high tolerance for temporary over-pressure situations. The main merits of capacitive sensors are that they are independent of temperature and have good repeatability. However, capacitive sensors do not produce a linear output, and they may exhibit high hysteresis and increased sensitivity to vibration. They nevertheless perform well in touch-mode devices, in which the diaphragm comes into contact with the insulating layer located on the lower electrode; one of the risks is that this contact may degrade sensitivity and increase hysteresis. Parasitic capacitance reduces the sensitivity and accuracy of capacitive sensors, so a good circuit design, with interface electronics of high output impedance, is necessary to minimize its influence. Placing the electronics close to the sensor improves the situation; this is one of the advantages of the MEMS technique. For human-carried or e-skin devices, optical sensors perform poorly because they are uncomfortable and large. Meanwhile, piezoresistive sensors exhibit hysteresis and have high power consumption, so they are unsuitable for e-skin device design. Although piezoelectric and capacitive sensors are both highly sensitive and accurate, the former provides only dynamic sensing. Overall, capacitive sensors are a good choice for robotic e-skin device design.

After comparing the advantages, disadvantages, and different physical aspects (from the miniaturization technique to the number of sensing elements) of the different pressure sensors, we conclude that capacitive pressure sensors are better than the others. They are highly sensitive and independent of temperature, as shown in Table 1, and they allow many sensing elements in a single unit area. Our conclusion is that capacitive pressure sensors are more suitable for human e-skin device development than the other sensors.

A. COMPARISON BETWEEN DIFFERENT FABRICATION METHODS

In terms of fabrication, MEMS pressure sensors can exhibit performance similar to 3D-printed pressure sensors even though they are used in different scenarios. However, the reliability of MEMS devices and fabrication poses a major challenge. MEMS is developed on the basis of semiconductor manufacturing technology, which integrates lithography, corrosion, thin film, LIGA, silicon micromachining, non-silicon micromachining, and precision machining. Pressure sensors cannot avoid the problems inherited from semiconductor manufacturing technology. Such problems can be divided into six categories: mechanical fracture, stiction, wear, creep and fatigue, electric circuit failure, and contamination. For example, in the fabrication of graphene-resistive and optical-fiber pressure sensors, the holes, including the array of SiNx holes and the glass ferrule hole for the optical fiber, inevitably encounter the shock problem, which is related to mechanical interference disorder, excessive loading, and drops. As indicated by these examples, 3D printing has several advantages over traditional MEMS fabrication in terms of fabrication processes, design difficulty, and required manpower. Notably, 3D-printed pressure sensors focus on the principles and materials. In addition, MEMS fabrication needs a laboratory to implement the manufacturing process, whereas 3D printing has a compact layout.
However, the resolution of 3D printing is an issue that should be solved as soon as possible. Photolithography can reach a resolution of 1 micron, and laser direct writing can reach 100 nm or even better, whereas currently usable 3D printing is limited to 500 nm, which leads to large-scale pressure sensors. Furthermore, 3D printing has high requirements for the material's form (e.g., liquid-like), viscosity, particle size, temperature, and pressure. Considering that 3D printing is a multi-structure sensor manufacturing process, it cannot directly laser-sinter metal powder for printing, or cool and mold polymer materials after melting. As shown in the example above, the PVDF polymer and barium titanate are mixed and injected as dipoles under the substrate. At the same time, the entire process needs to be electrified, and electrostatic force is used to ensure a stable combination of the materials and the substrate. As a result, the materials and the resolution interact with and constrain each other.

On the other hand, tactile sensing is often considered one of the most important possible technological extensions in robotic and automation systems, since it provides another realm of information (i.e., touch) from physical interactions with the surrounding environment. Researchers have long been looking for more sensitive and flexible materials to integrate into their robotic systems in order for robots to "feel" the physical world. Nowadays, graphene oxide (GO) has been drawing much attention because of its unique mechanical, optical, electrical, and chemical properties [2], [3], [4], including a high surface-to-volume ratio, easy and low-cost manufacture, and being ultra-thin and transparent. It has become a very attractive material for applications in flexible electronics and sensors, including applications to sense pressure [4], strain [5], temperature [6] and humidity [7]. Furthermore, GO can be chemically reduced to produce reduced GO (r-GO), which can serve as a conductive electrode material. GO has also been reported to have a relatively high electric permittivity [8], [9], [10], [11], [12] and has been used as a dielectric material in pressure or tactile sensors [13], [14], with reported sensitivities ranging from 10−3 to 1 kPa−1 [15], [16]. For the fabrication of GO-based sensors, GO is often dispersed in water [16], [17] or mixed with other elastomers (e.g., PDMS) [18], [19]. Typically, the GO suspension is applied through spin coating or dropping. However, these techniques are complicated for fabricating multiple sensor arrays, especially if the patterning of multiple GO layers is required.

In this paper, we present a simple, low-cost, and direct method of fabricating capacitive sensing elements using micro-dispensing of a GO suspension. The GO layer is directly printed on top of the sensing electrodes, and its thickness (i.e., capacitance) can easily be controlled by the droplet volume and the number of printed layers. Preliminary results on human pulse sensing are also reported. Based on our current results, we believe that the developed sensing elements can potentially be used in wearable sensors/electronics and healthcare applications.

III. FABRICATION OF GO BASED SENSING ELEMENTS

A. FABRICATION PROCESS

A drop dispensing system (from Microdrop Technologies, Germany) was used to dispense the GO suspension, as shown in Fig. 1(a).
The GO suspension in water (with a concentration of 2 mg/ml) (from Tanfeng, China [20]) was sonicated in a water bath for 15 minutes to homogenize it. The printing voltage applied in our experiments varied from 100 V to 150 V, with a pulse width of ∼20 to 30 ms, as shown in Table 2. The ejected droplet sizes and volumes ranged from 40 to 50 μm and from 33.5 to 65.4 pL, respectively (Fig. 1(b)). The GO droplets were printed layer by layer on copper electrodes to form GO-films on a flexible Printed Circuit Board (PCB) purchased from a commercial vendor. Before printing the next layer, the printed layer was dried at 60 °C, both to avoid the GO being re-dissolved and to prevent adjacent GO droplets from merging into an accumulated droplet, thereby improving control of the GO-film. Once the desired thickness of the GO-films was printed, another flexible PCB (0.2 mm thick) was placed on top of the GO-printed PCB, resulting in a sandwiched structure with a GO dielectric layer, as shown in Fig. 1(c) and (d).

B. FABRICATED GO-FILMS

Scanning Electron Microscopy (SEM) pictures and the Raman spectrum of a printed GO-film are shown in Fig. 2. Fig. 2(a) and (b) show the surface and cross-section, respectively, of the printed GO on the copper electrode, and Fig. 2(c) shows a GO-film peeled off the electrode. It can clearly be seen that the GO-films contain numerous layers. The Raman spectrum also confirms that the printed film retains the characteristics of GO, as shown in Fig. 2(d). The D and G bands were measured around 1344 cm−1 and 1588 cm−1. The D band represents the presence of defect sites in the framework of GO due to the disorder induced by sp3 hybridization, and the D band of GO is strong and broad because the graphene layers have a high level of disorder, while the G band represents the in-plane bond-stretching motion of sp2 carbon atoms. The ID/IG ratio is 1.2.

A. RELATIVE PERMITTIVITY OF THE PRINTED GO-FILMS

As shown in Fig. 1, the dimensions of each sensing element in our current work are 2.2 mm × 1.2 mm. The thicknesses of the printed GO-films were measured by a surface profiler (DektakXT from Bruker). Applying the equation below, the dielectric constant of the printed GO-film can be calculated:

$$C = \frac{\varepsilon_r \varepsilon_0 A}{d},$$

where C is the capacitance of the sensing element; ε_r is the relative permittivity (i.e., dielectric constant; the relative permittivity of air is 1) of the printed GO-film; ε_0 is the permittivity of free space (= 8.854 × 10−12 F m−1); A is the area of the sensing element; and d is the distance between the top and bottom electrodes. In this experiment, GO-films were fabricated according to the procedure described in the last section.
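To make the arithmetic concrete, a minimal sketch of extracting ε_r from a measured capacitance, together with the sensitivity metric S introduced in the next subsection (all numeric values are illustrative placeholders, not measurements from the paper):

```python
EPS0 = 8.854e-12  # permittivity of free space, F/m

def relative_permittivity(C, A, d):
    # epsilon_r = C * d / (eps0 * A), rearranged from C = eps_r * eps0 * A / d
    return C * d / (EPS0 * A)

def sensitivity(C0, dC, dP):
    # S = (dC / C0) / dP, in kPa^-1 if dP is in kPa
    return (dC / C0) / dP

A = 2.2e-3 * 1.2e-3   # element footprint, m^2 (2.2 mm x 1.2 mm)
d = 5e-6              # assumed GO-film thickness, m
C = 2.8e-11           # assumed measured capacitance, F
print(relative_permittivity(C, A, d))              # ~6.0
print(sensitivity(C0=2.8e-11, dC=1.5e-12, dP=20))  # ~2.7e-3 kPa^-1
```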
Five examples of the printed GO sensors and their measured parameters are shown in Table I. The calculated relative permittivity ε_r of the printed GO-films is ∼0.5 to 7.5 (measured at 600 kHz).

B. SENSITIVITY

In order to fabricate a useful pressure sensor, high responsivity and sensitivity are of prime importance. Since the sensing elements are of the capacitive type, the relationship between capacitance change and applied pressure is determined first. The sensors listed in Table 3 were tested under applied pressures ranging from 0 to 20 kPa, a range often used in human pulse detection [21]. The sensitivity S is calculated using the following equation:

$$S = \frac{\Delta C / C_0}{\Delta P},$$

where C_0 is the initial capacitance, ΔC is the change in capacitance, and ΔP is the change in pressure. The sensitivity clearly changes with the thickness of the printed GO layer. As shown in Fig. 3, the sensitivity decreases linearly from 0.0027 kPa−1 to 0.0002 kPa−1 as the GO layer thickness increases. Moreover, as shown in Fig. 4, if the GO concentration is further increased to 5 mg/ml (i.e., sensor E), the corresponding sensitivity increases to 0.0036 kPa−1, which is more than 10 times that of the 2 mg/ml suspension at a similar thickness. The main reason may be the increase in GO per printed volume: when the GO suspension dries, more GO is closely packed on top of the electrode. In other words, the GO/air ratio increases, so ε_r increases, consequently improving the sensitivity.

C. INSTANTANEOUS RESPONSE

The instantaneous response is an important factor in designing a sensing element. In this experiment, an instantaneous pressure of 20 kPa was applied to sensor E for a short duration (2 seconds), and the result is shown in Fig. 5. The response time is ∼100 to 150 ms, which is fast enough to capture the human pulse, which is usually below 2 Hz [21].

D. HUMAN PULSE SENSING

Arterial pulses, which are generated by the heart pumping blood, reflect the holistic health status of a human. In Traditional Chinese Medicine (TCM), there are 28 basic combinations of well-known pulse patterns [22]. The detection of human pulses is one of the important applications in wearable electronics and healthcare. In this study, we preliminarily tested the detection of human pulses using the fabricated GO sensing elements. The detection circuit used with sensor E, with a sampling rate of 200 MHz, is shown in Fig. 6(a). It was placed on a subject's wrist for pulse detection, as shown in Fig. 6(b). The recorded and normalized signal is shown in Fig. 6(c). The collected data show a clear main peak and several sub-peaks. The recorded pulse rate was ∼72 beats/min. Currently, the sensors reported here are not as responsive to pressure input as the commercial capacitive sensors reported in [21] (0.00031 kPa−1). One of the possible reasons is that the flexible PCB used is too thick (i.e., 0.2 mm) and stiff, which reduces the mechanical responsivity of the sensor and suppresses the weak reflected-pulse signals (i.e., the higher-frequency small peaks in the pulse response signals in Fig. 6(c)). Another way to improve the response is to further increase the GO concentration in the suspension.

E. MULTIPLE ELEMENT SENSING

In the last section, the pulse recording was done with a single sensor. Indeed, much more information (i.e., the spatial pressure distribution) can be captured if a sensing array is built.
For demonstration purposes, a sensing array containing 9 elements, C1 to C9, was fabricated using GO-films as the dielectric, as shown in Fig. 7(b). The structure of the sensing array is shown in Fig. 7(a). A constant, static pressure (i.e., 20 kPa) was applied near C8, C9, C5 and C6. The recorded capacitance values and a 3D map of the normalized change in capacitance are shown in Fig. 7(c). The map is plotted using a cubic interpolation function; the values printed on the map are the capacitances of the sensing elements when the pressure is applied. With this technique, the spatial pressure distribution of a human pulse can easily be resolved.

In summary, these preliminary results concern the fabrication of a novel flexible pressure sensor array by micro-dispensing of a GO suspension. The printed sensors were characterized and demonstrated to be capable of acquiring human pulse pressures. More detailed studies are underway (i.e., on the effect of different printing parameters on the electrical and mechanical properties of the printed sensing elements) to better control the performance of each sensing element. Combined with the development of machine learning algorithms for human wrist pulse patterns, a wearable electronic device can be realized to collect and monitor personalized health data for healthcare applications in the future.

V. GO SENSOR FABRICATION USING SPUTTERING

A. SENSOR FABRICATION - CONDUCTIVE LAYER

The preliminary results above on the fabrication of a flexible capacitive sensing array using GO as the sensing material describe a fabrication process that produces the dielectric and conductive layers by a direct material deposition method, which enables final sensing devices built as vertically assembled structures. Thus far, the best sensitivity of our fabricated sensor was 0.00031 kPa−1, and the transient response time is ∼40 ms. The ability of the sensors to show a static spatial pressure distribution and to capture the dynamic temporal human pulse was demonstrated, which shows the potential application of this novel flexible sensing array in wearable electronics.

The potential of using GO as the dielectric layer of capacitive pressure sensors was demonstrated above, mainly using a micro-dispensing method to achieve a precise, low-cost, and effective sensor fabrication process. However, to increase the flexibility of the pressure sensing array, the electrodes on the PET (polyethylene terephthalate) substrate should be replaced by soft materials such as PDMS (polydimethylsiloxane). Based on the previous work on fabricating GO-films, deposition of designated materials on substrates is feasible, especially in multi-layer structures. Hence, a similar mechanism was adopted in our current work to fabricate the electrodes and GO-based sensing elements.

B. PROPOSED FABRICATION PROCESS AND SENSOR CHARACTERISTICS

Reduced graphene oxide (rGO) and carbon nanotubes (CNT) were first tested with screen printing methods to form the conductive layer of capacitive tactile sensors, with electrode patterns transferred from a designed stencil mask. However, these materials were found difficult to attach to the surface of PDMS, leading to disconnected or partially formed electrodes with tremendously high resistance, i.e., up to 100 kOhm. Instead, sputtering a gold target onto the surface of PDMS (with a thickness of 0.5 mm) was applied to obtain electrodes with conductivities of 166.67-714.29 mS/m (the fabrication process is shown in Fig. 8).
In this section, the GO-films were deposited on top of the electrodes with a syringe, instead of the micro-dispensing system reported in the previous section. A sandwiched structure was then formed by stacking another PDMS sheet (with a thickness of 0.5 mm) with the same pattern of electrodes. The final device was mounted and connected to an external circuit, as shown in Fig. 9, for sensor characterization. Each sensing element was subjected to various static pressures to measure its capacitance changes, using the setup of Table 4. The differences in sensitivity among the elements were due to the GO-film deposition process. The capacitance changes under pressure for C1-C5 are shown in Fig. 11. In comparison with the previous research on GO-film fabrication, changing the conductive layer from electrodes on PET to PDMS showed a negligible effect on sensor sensitivity. After characterizing each element of the sensing array, the response time was measured by applying an instant force (1 and 5 N), as shown in Fig. 12; the corresponding response was ∼40-50 ms. Given that the ability of the sensors to capture human pulse pressure is of prime importance to our application, the device was placed on the wrist of a person with a heart rate of 74 bpm to record the pulse; in Fig. 13, the measured rate of this subject was 72 bpm.

The advantage of direct deposition of multiple materials using shadow masks is that it can easily achieve an efficient nanofabrication process, because the thickness of the overall device can be controlled by the deposited layers and the PDMS substrate (e.g., for fabricating ultra-thin tactile or pulse-pressure sensors for the application of wearable electronics). The idea of fabricating sensing electrodes by sputtering through stencil masks was proven to produce the main component of a flexible sensor. However, the design was too rough for our ultimate application. Accordingly, two different stencil masks were made by commercial vendors using laser drilling to fabricate high-precision masks, and the aforementioned fabrication process was repeated with the new masks. In Fig. 14, the micro-dispensing process was performed on a non-plasma-treated (Fig. 14(a)) and a plasma-treated surface (Fig. 14(b)). A hydrophobic surface would lead to partial electrode coverage once the top layer is placed on the GO-films; hence, a hydrophilic surface would be ideal to avoid short circuits.

VI. CONCLUSION

In this paper, we presented our preliminary results on fabricating a novel flexible pressure sensor array by micro-dispensing of a GO suspension. The printed sensors were characterized and demonstrated to be capable of acquiring human pulse pressures. More detailed and intensive studies are underway on the effect of different printing parameters on the electrical and mechanical properties of the printed sensing elements, in order to better control the performance of each sensing element. Together with the development of this technology and existing classification algorithms for human wrist pulses, a wearable electronic device can be realized to collect and monitor personalized health data for healthcare applications in the future.
Comparison of extensive and intensive pig production systems in Uruguay in terms of ethologic, physiologic and meat quality parameters

The objective of this work is to characterize two contrasting systems of fattening pigs in Uruguay. A total of 96 pigs (average 41.7 kg) were divided into eight groups of 12 animals, representing two production systems: (IN) pigs confined in pens of 12 m², or (OUT) pigs kept in plots with field shelters and access to pasture. Behavioural observations were performed by scan sampling at 5-minute intervals, three times a day, during weeks 6, 8, 10 and 12 of the experiment. Aggressions were also observed at the end of the experimental period. Blood samples were taken for cortisol and other physiological parameters during the growth period and at slaughter, and meat quality characteristics were assessed after slaughter. Differences were found in carcass characteristics, with IN presenting higher dorsal fat. These animals presented an overall lower activity and spent less time resting, with a stable pattern throughout the day. In OUT, pigs usually rested at midday and were more active in the morning and afternoon. The number of total reciprocal aggressions in the observation period was 4.2±3.7 for IN and 2.3±2.2 for OUT. Cortisol levels and the biochemical profile did not show evidence of important problems in the animals. Welfare is not compromised in either of the systems, although the higher levels of cortisol and aggressions could indicate some stress problems in the confinement system. Meat characteristics in OUT were considered better than in IN from a nutritional point of view.

Introduction

More than 60% of pigs are reared outdoors in Uruguay, with variable participation of pastures in feed (DIEA, 2007). Outdoor production systems with pastures are becoming attractive for consumers, mainly due to environmental sustainability and social benefits of the sector, as well as the low initial cost of the production system (Leite et al., 2001). In addition, outdoor rotation systems have been reported to be of interest in South America because they have been adopted by small and medium farmers (Leite et al., 2006). Other characteristics of these production systems, which affect animal welfare and product quality, have growing societal and scientific importance (Smulders et al., 2006). Pigs produced under outdoor conditions present different meat characteristics, mainly due to exercise (Daza et al., 2009) or to pasture intake (Moisá et al., 2007), and these may affect pH (Bee et al., 2004), fat deposition (Gentry et al., 2002), fatty acid profile (Daza et al., 2009) or meat colour (Echenique et al., 2009).
Several aspects of behaviour have been used to evaluate confinement effects or to compare confinement with outdoor systems. Some examples are negative (Barnett et al., 1993; Deen, 2010) and positive (Temple et al., 2011) social behaviour, exploratory behaviour (Beattie et al., 2000; Docking et al., 2008), development of abnormal behaviour (Lawrence & Terlouw, 1993; Moinard et al., 2003) or resting time (Scott et al., 2006), among others. On the other hand, changes in behavioural patterns often represent the first level of response of an animal to an aversive or stressful environment (Temple et al., 2011). Furthermore, there are other behavioural and physiological responses commonly used to measure animal welfare (Barnett, 2007), such as animal health (Broom, 2006) or the serum biochemical profile (Chorfi et al., 2007; Adams et al., 2008). Cortisol is the most widely used physiological parameter because of its association with stress and acute stressors (Barnett et al., 1996; Rushen et al., 1995) or with permanent social stress in animals reared in poor environments (de Jonge et al., 1996). However, assessment of stress must be based on a wide range of variables describing the process (Jensen et al., 2004).

The main objective of the present work is to compare two contrasting systems of fattening pigs: one with animals confined in pens and an alternative outdoor system with rotational pasture access. This study focuses on productivity, meat quality characteristics, animal behaviour and physiological stress indicators.

Material and Methods

Two trials were carried out at Las Brujas Experimental Centre of the National Agricultural Research Institute (INIA) of Uruguay. The experimental period lasted 12 weeks, from October 23rd 2007 to January 16th 2008. Ninety-six Landrace × Large White pigs of 12 weeks of age and 41.7±5.81 kg average live weight (48 females and 48 castrated males) were used. Animals were individually tagged and randomly divided into eight groups of six females and six males each. Four groups were assigned to a conventional indoor confined fattening system (IN), and the rest to an outdoor system with free access to grassland plots (OUT).

In IN, pigs were housed in a naturally ventilated building, in 4 × 3 m pens. The floor surface was 25% plastic slats and 75% solid concrete. In OUT, each group was housed in a 20 × 10 m yard (permanent plot) with a 12 m² wooden hut. Ten different 170 m² grassland plots were built, and every week each group had free access to one of these grazing plots. After this week, a new plot was opened and the old one was closed, in order to provide the animals with fresh pasture and avoid over-grazing. This process followed the order shown in Figure 1.

In all groups, animals were fed ad libitum with the same commercial concentrate, with nutrient contents of 88.9% dry matter (DM) and, on a dry matter basis, 14.9% crude protein (CP), 13.2% acid detergent fiber (ADF), 34% neutral detergent fiber (NDF), 3.98% ether extract (EE) and 4.7% of ash. Pasture in OUT was a seeded prairie with a mix of white clover, red clover and ryegrass.
Individual live weight and group feed intake were measured weekly. Pasture intake was calculated as the difference between initial availability and remaining pasture at the end of the seven-day grazing period, following a standard method (Moliterno, 1997). Pasture samples were taken at the moment of opening the new plots and after closing them. Average feed conversion rate per group was estimated by dividing weekly feed intake by the sum of weekly individual live weight gains.

Animals were slaughtered at 169 days of age, after 2 hours of transport and 6 hours of lairage in a single paddock. Twenty-four animals per treatment (six animals per group) were selected for carcass and meat studies as described by Gispert et al. (2007). Carcass length, pH 45 min post-mortem, dorsal fat thickness (mm) at the gluteus medium, dorsal fat thickness (mm) at the last rib and pH 24 hours post-mortem (pH24) were recorded at the slaughterhouse. Muscle colour was measured in the loin eye at the first steak level, with a Minolta C10 colorimeter, determining the parameters L* (lightness), a* (redness/greenness) and b* (yellowness/blueness). Chroma (C) and hue angle (H°) values were obtained by the standard CIELAB relations: C = sqrt(a*^2 + b*^2) and H° = arctan(b*/a*).

Fat samples for fatty acid analysis were collected from the same animals as dorsal fat at the last rib. This lipid profile was analysed by liquid chromatography at the Nutrition Laboratory of the Chemistry Faculty of the Republic University (Montevideo, Uruguay), determining individual fatty acid contents, total saturated fatty acids (SFA), total monounsaturated fatty acids (MUFA) and total polyunsaturated fatty acids (PUFA).

Behaviour of outdoor-system pigs (OUT) was directly observed by two observers in three daily periods of two hours (morning: 7h00 to 9h00; midday: 13h00 to 15h00; and afternoon: 18h00 to 20h00), on three alternate days a week, in weeks 6, 8, 10 and 12 of the experiment. Animals in IN were continuously video recorded, and behavioural observations were carried out in the same periods and weeks. Because of problems in the video recordings of weeks 6 and 8, only weeks 10 and 12 could be used for the analysis. In both treatments, observations were carried out by scan sampling every five minutes, recording the number of pigs performing each of the activities (Table 1). This ethogram was partially adapted from Morgan et al. (1998) and Bolhuis et al. (2005).

In addition, active and passive behaviour rates were created by integrating active behaviours (eating, drinking, grazing, walking, exploring, interacting and others) and passive behaviours (staying in the hut, resting and resting in the mud). Considering that the resting behaviour of outdoor pigs mainly took place inside the huts, a variable was created in order to compare the resting behaviour of IN in relation to OUT. This variable (TR) resulted from integrating R and H for OUT, while for IN it was considered that TR = R.
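The colour indices and the group-level conversion rate described above are simple enough to state in code. The following Python sketch is illustrative only, with hypothetical input values rather than the authors' data or software; atan2 is used so that the hue angle falls in the correct quadrant.

import math

def chroma_hue(a_star, b_star):
    # Standard CIELAB chroma and hue angle (degrees) from a* and b*.
    chroma = math.sqrt(a_star ** 2 + b_star ** 2)
    hue = math.degrees(math.atan2(b_star, a_star)) % 360.0
    return chroma, hue

def feed_conversion_rate(weekly_feed_intake_kg, weekly_gains_kg):
    # Group feed conversion: weekly feed intake divided by the sum of
    # individual weekly live weight gains, as described in the text.
    return weekly_feed_intake_kg / sum(weekly_gains_kg)

# Hypothetical loin reading and hypothetical group of 12 weekly gains:
print(chroma_hue(6.5, 3.1))  # about (7.2, 25.5)
print(feed_conversion_rate(260.0, [7.1, 6.8, 7.4, 6.9, 7.2, 7.0,
                                   6.6, 7.3, 7.5, 6.7, 7.0, 6.9]))  # about 3.08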
Agonistic behaviour (aggressions) was recorded separately in IN and OUT during weeks 11 and 12, every two days. Two observers recorded aggressions between animals by continuous observation in two 30-minute periods: one in the morning (randomly for each group between 9h00 and 10h00) and another in the afternoon (randomly between 18h00 and 19h00). Three levels of aggression were established: aggression of one animal on another without response (unidirectional aggression), aggression of one animal on another with response (reciprocal aggression), and fight, which was defined as a reciprocal aggression lasting at least five seconds. In all observations, the activity performed by the pigs at the moment of the aggression was also recorded.

Blood samples of six randomly selected pigs from each group (a total of 24 animals per treatment) were taken on day 84 of the experiment, during the weighing routine, for the two treatments, and at the slaughterhouse at the moment the animals were stuck after electric stunning. Samples were collected in 7 mL vacuum tubes without anticoagulant and immediately refrigerated and taken to the laboratory, where they were centrifuged at 3000 rpm for 15 min at 4 °C, as described by Titto et al. (2010). Serum was then removed and transferred to Eppendorf tubes (1.5 mL) for storage at -40 °C until the analyses were performed. Serum samples were assayed in the Laboratory of Nuclear Techniques, Veterinary Faculty (Montevideo, Uruguay). Cortisol concentrations were determined by a direct solid-phase radioimmunoassay (RIA) using DPC kits (Diagnostic Product Co., Los Angeles, CA, USA). The RIA had a sensitivity of 0.52 µg/dL. All samples were determined in the same assay. The intra-assay coefficients of variation for the low (1.28 µg/dL), medium (5.91 µg/dL) and high (17.05 µg/dL) concentration ranges were 10.89, 7.13 and 2.58%, respectively.

All data were analyzed with SAS (Statistical Analysis System, version 9.2). Live weight and average daily gain were analyzed using the MIXED procedure with a repeated-measures design. A General Linear Model procedure was performed for meat characteristics, cortisol concentration and the biochemical blood profile. Logarithmic transformations of cortisol and the biochemical profile were used for the analysis.

For general behaviour and aggressions, the statistical unit was the group. Behavioural data were transformed prior to the analysis into relative numbers (proportion of pigs within a group performing each activity, expressed as a mean over the observation period). Treatment effects on each behaviour were evaluated using mixed linear models (PROC MIXED) with a repeated-measures design and a compound symmetry covariance structure. The model included treatment (IN, OUT), period of observation (morning, midday and afternoon) and the interaction between treatment and period as fixed effects. Tukey-Kramer adjustments were made for post-hoc comparisons. Regarding aggressions, data were analyzed by PROC MIXED with a repeated-measures design, using treatment and moment of the day as fixed effects. Effects were corrected for multiple testing with the Tukey-Kramer test, with P≤0.05 as the significance level. Logarithmic transformations of behaviour and aggression data were used as well.
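The repeated-measures model specification described above could be mirrored outside SAS. The Python sketch below uses statsmodels and assumes a hypothetical long-format table (columns group, treatment, period, prop_active); a random intercept per group is used as an approximation of the compound-symmetry covariance structure, so this is an illustration of the model, not a reproduction of the authors' SAS analysis.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per group x scan-sampling period,
# with 'prop_active' = proportion of pigs in the group performing active behaviour.
df = pd.read_csv("behaviour_long.csv")

# Fixed effects: treatment, period and their interaction;
# repeated measures handled via a random intercept for each group.
model = smf.mixedlm("prop_active ~ treatment * period", data=df, groups=df["group"])
result = model.fit()
print(result.summary())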
Results

No significant differences between the studied systems were found in final live weight, although average weekly weight gain showed significant differences (P = 0.0126): 5.5±1.7 and 5.1±1.9 kg for IN and OUT, respectively. Moreover, when comparing treatments for each week separately (Table 2), significant differences (P<0.001) in weight gain were found only for the third week of the experiment.

As regards average feed intake (Table 3), animals in the OUT system ingested significantly more dry matter than those in IN when pasture and concentrate intake were added together. This, along with the lower weekly gain and consequently lower live weights, resulted in a significantly higher conversion rate.

Regarding carcass characteristics (Table 4), dorsal fat thickness was higher in IN than in OUT, both at the gluteus medium and at the last rib, whereas no differences were found for the rest of the carcass measures. Meat colour did not present significant differences between treatments, except for L*, which was higher for IN pigs.

Regarding the fatty acid profile (Table 5), monounsaturated fatty acids (MUFA) and trans acids were significantly higher in IN, while polyunsaturated fatty acids (PUFA) were higher in OUT. Nevertheless, individual unsaturated fatty acids showed different patterns according to the type of acid in both treatments. As regards individual monounsaturated fatty acids, only 18:1 showed significant differences, with higher values in IN, while 16:1 and 20:1 did not differ between treatments.

Regarding individual polyunsaturated fatty acids, 18:2 cis and 20:2 showed significant differences between treatments, with higher concentrations in OUT pigs. Although 18:2 trans was significantly higher in IN, values were trace concentrations, tending to zero. Considering individual saturated fatty acids, 17:0 was higher in IN, but no significant differences were found for the rest. In addition, although no differences were found for the MUFA:SFA ratio, OUT presented a higher PUFA:SFA ratio and higher n-3 and n-6 fatty acid levels, but a lower n-6:n-3 ratio.

Active behaviour was significantly higher in OUT than in IN and did not differ between the periods of the day in IN (Table 6). However, activity in OUT was higher in the morning and afternoon than at midday, when it fell to its minimum. Consequently, the opposite tendency was found for passive behaviour.

In general, statistically significant differences were present in all the studied behaviours. Eating showed similar results between treatments during the morning and the afternoon, but this behaviour was drastically reduced in OUT during the midday period. On the contrary, walking was always higher in the outdoor treatment except during midday. At the same time, grazing did not appear in the midday period, and the use of the huts significantly increased in the same period. However, mud resting increased in the afternoon. On average, most of the animals in IN were resting during the three periods, and TR was higher in IN than in OUT except for the midday period.

Regarding agonistic behaviour (Table 5), even though no significant difference was found for most of the aggression measures studied, a marked tendency toward more aggressive behaviour was observed for confined animals as compared with pigs in the outdoor system. In line with this, reciprocal aggressions and fights were significantly higher in IN.
On the other hand, aggressions were higher during the afternoon than during the morning, although fighting was more stable. This occurred mostly in association with an increase in agonistic behaviour during resting and exploration (Table 7). When the total number of aggressions during active or passive activities is analyzed, it can be observed that in the afternoon unidirectional aggressions are higher during passive behaviours, while reciprocal aggressions and fights are mainly produced during active behaviours, always higher in IN. In addition, aggressions while exploring (E in Table 7) showed significant differences between treatments regardless of the period, again always higher in IN.

Finally, significant differences were detected in serum cortisol concentrations between IN and OUT (P<0.001) (Table 8), and levels were higher for IN both on day 84 and at the slaughterhouse.

As regards the biochemical profile, most of the parameters analysed were within the reference ranges (Idexx, 2006), except for alanine aminotransferase, cholesterol, gamma-glutamyl transferase and total protein. Overall, there was no clear alteration pattern, although OUT resulted in a higher activity of gamma-glutamyl transferase (63.7±21.3 vs. 48.9±19; P = 0.0327) and IN in a higher glucose concentration (150.6±12 vs. 123.1±27.2; P<0.0001).

Discussion

Productive results showed differences between systems not only in growth rates, but also in characteristics linked to product quality. The reduction in growth rate in outdoor pigs is probably caused by exercise, which has an extra energy cost (Edwards, 2005), as hypothesized by Hansen et al. (2006) and Bee et al. (2004), although they remarked that it is not possible to separate the effect of increased activity from the effect of the environment in outdoor systems. In this regard, general live weight evolution was strongly marked by a dramatic reduction of weight gain in OUT during the third week. In this week, several days of cold and hard wind affected the outdoor animals, causing a reduced growth response. Long exposure to cold temperatures causes pigs to adapt by reducing energy losses and adjusting intake (Demo et al., 1995; Macari et al., 1986), but when low temperatures occur suddenly or cyclically, animals are more severely affected (Nienaber et al., 1989; Geers et al., 1987). Moreover, the higher consumption levels and consequently higher conversion rate reached in OUT could compromise the profitability of this system and should be studied carefully.

Regarding carcass and meat quality, differences between treatments were concentrated in two characteristics: dorsal fat thickness and lightness of the meat. The reduction of back fat deposition in outdoor systems can be a consequence of exercise, as Enfält et al. (1997) and Gnanaraj et al. (2002) reported, although other authors did not find any difference in fat deposition between systems with outdoor access (Daza et al., 2009; Hale et al., 1986; Morrison et al., 2007). Low lightness of the meat is a desirable characteristic for reducing paleness, and OUT presented lower values than IN, which is in accordance with Pugliese et al. (2005). These authors attributed this characteristic to a higher level of intramuscular fat, which was not measured in the present experiment. Nevertheless, these findings and the lack of differences among the rest of the studied parameters make it possible to infer that meat is similar in both systems, with slightly better colour in OUT pigs.
The fatty acid composition of meat receives much attention in research because of its implications for human health, and pig fat composition is associated with the fatty acid concentration of the diet (Raes et al., 2004). For this reason, the differences in lipid profile between the two production systems have to be considered relevant results.

The higher PUFA contents of the fat in OUT animals could be a consequence of including pasture in the diet, given that pasture is an important source of unsaturated fatty acids, especially linolenic acid (Woods & Fearon, 2009). In addition, pigs from OUT had higher concentrations of linoleic (18:2), linolenic (18:3) and eicosadienoic (20:2) acids, which are the most relevant from the nutritional point of view (Nilzén et al., 2001; Lebret & Guillard, 2005; Pugliese et al., 2005). On the contrary, the higher content of MUFA in IN is explained by the 18:1 fatty acid concentration.

Because of the influence of the diet on pig fat (Raes et al., 2004), MUFA was expected to be similar or higher in OUT due to pasture intake, although similar results were found by other authors such as Hansen et al. (2006). This result can be explained by the fact that pigs in OUT, which had less total fat than IN according to dorsal fat thickness, had an increased proportion of unsaturated acids through decreased deposition of saturated fatty acids, as concluded by Hansen et al. (2006).

On the other hand, a lower n-6:n-3 fatty acid ratio was observed for OUT. Thus, it is reasonable to think that the pasture intake was enough to reduce the n-6:n-3 ratio by increasing the n-3 fatty acid content in OUT animals. Therefore, judging by the fatty acids, the influence of pasture intake on the lipid profile seems clear.

General activity, assessed in the present study as the average number of animals performing an active behaviour, is commonly reported to increase in pigs reared outdoors, whereas in intensive farming systems animals remain physically inactive most of the time (Daza et al., 2009). Space availability is one reason given to explain this behaviour, but other factors are probably contributing. Activity in OUT was very low at midday and increased in the morning and the afternoon, whereas pigs in IN presented the same proportion of activity in the three periods assessed. In this regard, Leite et al. (2006) and Villagrá et al. (2007) found that pigs in multi-activity pens and rotational systems were active mainly during daytime, and that the activities were clearly bimodal, with peaks in the morning and in the afternoon. Thus, pigs seem more likely to show this circadian behaviour in outdoor systems. Nevertheless, in the present work the activity pattern was also strongly influenced by sun exposure. At midday hours all animals looked for shade inside or near the hut, and they exposed themselves to the sun only for isolated activities like drinking, eating concentrate or taking mud baths. This could be a sign that animals protect themselves from the sun, adapting their behaviour to the different moments of the day or covering their skin with mud.
Regarding individual active behaviours, grazing and exploring were the most frequent. Exploring behaviour did not differ between IN and OUT in the morning or in the afternoon, which agrees with other authors (Temple et al., 2011), but this can be related to grazing. Grazing was only possible in the OUT system, and it was very frequent. The time used for it was similar to the time spent on total active behaviour in the IN system during the morning and afternoon periods, so although exploring did not display differences between systems, grazing did. Thus, the frequencies of exploring and grazing behaviours are related, and their separate observation seems interesting from the behavioural viewpoint.

Considering agonistic behaviour, IN showed a general tendency toward increased reciprocal aggressive interactions and fights along the day, although the unidirectional ones were not significant. Intensive rearing conditions tend to foster competition for resources between pigs, increasing the occurrence and duration of negative social interactions (Temple et al., 2011). Space allowance is one of the possible causes, but OUT also had a much more enriched environment, which contributed to the reduction of agonistic behaviour, as reported by several authors (Beattie et al., 2000; van de Weerd et al., 2006; Morrison et al., 2007; van de Weerd & Day, 2009). In addition, the higher level of aggressions in IN while exploring suggests that the competition derived from the lack of space affects all behaviours: when an individual detected a place of interest, it competed to explore it. In our work there was also an increase in aggressions during the afternoon (the hotter period), which suggests an influence of discomfort on agonistic behaviour. Thus, not only barren or poor environments but also climatic conditions can increase aggressions.

Serum cortisol concentration in IN was higher than in OUT, although the concentrations found on farm on day 84 were low in both treatments. In contrast, cortisol levels in blood samples taken at slaughter, after transport and lairage, increased in both IN and OUT, so pigs experienced stress related to transport and loading regardless of the system.

Similarly, most of the biochemical parameters analyzed were inside the reference values, except for alanine aminotransferase, cholesterol, gamma-glutamyl transferase (related to disturbance of hepatic function; Idexx, 2006) and total protein. However, the animals did not show any symptoms and had a normal growth rate. Gamma-glutamyl transferase has previously been related to maize contaminated with deoxynivalenol (Döll et al., 2005), but no evidence of this was present in this study.

Glucose concentration in blood serum was higher in IN, and according to Fernandez et al. (1994) this can be explained by the increased aggressive behaviour of the pigs. Puppe et al. (1997) also found that glucose levels increased in piglets with increasing stress, and Barnett et al. (1983) found elevated levels of plasma glucose as a response to stress in gilts. These authors assume that an increase in glucose levels is a response to increased gluconeogenesis, which is caused by increased corticosteroid levels. This can also be related to the higher cortisol levels in IN, so this could be evidence of distress in pigs under this treatment. Furthermore, animals in OUT presented reduced glucose levels, which can be associated with exercised pigs and diets with a low energy level (Hale et al., 1986).
Conclusions

Both indoor and outdoor production systems reached an acceptable productive performance and meat quality, with no relevant health problems detected, although conversion rates in OUT animals should be studied further. Meat characteristics are better in animals reared outdoors than indoors. Animals in outdoor systems are in general more active and present a daily pattern of behaviour with two peaks of activity, whereas confined pigs are more sedentary and show a more stable behavioural pattern along the day. The higher cortisol levels in animals reared indoors after loading and transport, as well as the higher level of reciprocal aggressions and fights and the higher level of glucose, could be evidence of some welfare problems in pigs reared in conventional indoor systems in comparison with open systems.

Figure 1 - Housing diagram and distribution of grazing plots in treatment OUT.
Table 1 - Description of recorded behaviour.
Table 2 - Average weekly live weight gain per animal (kg) in indoor traditional system (IN) and outdoor system (OUT).
Table 3 - Total average feed intake, pasture intake and conversion rate per animal in traditional indoor system (IN) and outdoor system (OUT). SD - standard deviation; NS - not significant.
Table 4 - Carcass and meat characteristics of pigs reared in traditional indoor system (IN) and outdoor (OUT) system. DFTGM - dorsal fat thickness (in mm) at the gluteus medium; DFTLR - dorsal fat thickness (in mm) at the last rib; CL - carcass length; NS - not significant.
Table 5 - Fatty acid composition (% of total fat) of back fat in pigs reared in traditional indoor (IN) or outdoor (OUT) systems. NS - not significant.
Table 7 - Average number of aggressions in each activity during 30-minute observation periods in the morning (AM) and in the afternoon (PM), for pigs reared indoors (IN) and outdoors (OUT).
Table 8 - Serum cortisol concentration (ng/mL) for pigs reared in two different production systems (IN and OUT) sampled on the 84th day of the experiment and at slaughter.
2018-12-06T00:02:56.668Z
2013-07-01T00:00:00.000
{ "year": 2013, "sha1": "a2255b06f3f49b461ee5217b8e89294a3226bfca", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/rbz/a/NC94QQqSXFkPVz9qhbx3pfS/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a2255b06f3f49b461ee5217b8e89294a3226bfca", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
265087051
pes2o/s2orc
v3-fos-license
Resource-saving technologies for the basic cultivation of chernozem typical in the northeastern region of the Central Chernozem region

Surface and non-moldboard tillage systems led to a decrease in the productivity of a hectare of arable land in the crop rotation by 0.09 tons of grain units without protection and by 0.11-0.14 tons in combination with plant protection. The combined (moldboard-surface) system of tillage ensured the productivity of arable land at the level of the traditional moldboard system at different depths. It was shown that the highest profitability of production in the crop rotation (250.3%) was achieved using the combined (moldboard-surface) tillage system in combination with the use of complex protection products (seed dressing + pesticides during crop vegetation). Resource-saving processing systems (surface and non-moldboard) worsened economic performance: income from the use of these tillage systems decreased by 4.8-5.5%, and profitability decreased by 1.6-2.7%, compared to the traditional mid-depth moldboard tillage system in the crop rotation.

Introduction

In recent years, the problem of reducing costs and increasing the profitability of crop production has become more and more urgent. The solution to this problem largely depends on the methods and systems of tillage, which are the basic technological operations of all agriculture and which, in the technologies of growing field crops, as a rule make up the bulk of the costs [2,14]. Crop productivity, energy consumption and the profitability of production depend on tillage [1,11,15]. Studies have established that chernozem soils have a stable composition; their density changes little over time and is optimal for the development of grain crops, which is the basis for minimizing tillage [7].

In modern agriculture, the basic tillage in agrotechnological complexes of crop cultivation is carried out mainly with the use of plowing, surface and non-moldboard methods, which, in a certain combination, form a system of tillage in the crop rotation [12]. Minimization of tillage in crop rotations leads to conflicting results on crop productivity [9,13].

Basic tillage affects soil fertility in a certain way, as well as the phytosanitary state of agrocenoses, which significantly affects their productivity. At the same time, it is important that tillage be economically justified.

In connection with the transition to resource-saving technologies, there is a need to study various methods and systems of soil cultivation and to determine the most effective ones, which make maximum use of the natural and climatic potential while reducing weed infestation, ensuring high yields, increasing the profitability of production and maintaining soil fertility.

The purpose of the research was to optimize the basic tillage system so as to reduce energy costs and obtain high crop productivity and profitability of production in a grain-fallow crop rotation (black fallow - winter wheat - soybean - barley) on typical heavy loamy chernozem in the conditions of the northeast of the Central Chernozem Region. The scheme of the experiment included the following systems of basic cultivation: traditional moldboard at different depths (control), surface and non-moldboard at different depths (resource-saving), and two combined systems: moldboard-non-moldboard (25% moldboard + 75% non-moldboard) and moldboard-surface (25% moldboard + 75% surface).
Within the framework of the traditional moldboard system (control), the basic tillage was carried out by plowing with a mounted PLN-5-40 plow. The basic tillage in the surface system was carried out by disking with a BDM disk harrow (3/4). The non-moldboard mid-depth system provided for basic tillage with a PLN-5-40 plow without moldboards. The basic tillage in the combined systems was carried out using the PLN-5-40 plow for moldboard and non-moldboard tillage and the BDM harrow (3/4) for surface treatment.

The studies were carried out on a fertilized background; under the basic tillage, mineral fertilizers were applied at a dose of N60P30K30 under winter wheat, including ammonium nitrate (N30) as a spring top dressing at the resumption of vegetation, and N30P30K30 under barley; soybeans were cultivated without the use of fertilizers. Thus, the level of fertilization per hectare of arable land in the crop rotation was N20P10K10. The crop protection system of the crop rotation consisted of two levels:
- Seed dressing (background).
- Background + pesticides (fungicides, insecticides, herbicides) applied during crop vegetation against diseases, pests and weeds; highly effective chemical protection agents registered in Russia were used.

The agrotechnics for growing the crops of the crop rotation were those generally accepted for the study region, with the exception of the studied tillage systems.

The experiment was replicated three times. The placement of plots in the experiment was systematic (consecutive). Plots of the first order (tillage) were 52 by 7.20 m, with an area of 374 m2. Plots of the second order (plant protection) were 25 by 7.20 m, with an area of 187 m2. The accounting area of each plot was 75 m2 (15 m by 5 m).

In the experiment, crop varieties released for the region were used for sowing: winter wheat - variety Scepter; barley - Chakinsky 221; and soybean - variety Avanta.

Observations, analyses and records were carried out according to the current methods adopted in field and laboratory research on agriculture [3,4].

Soil moisture was determined before sowing the crop rotation crops in a one-meter layer at 10 cm intervals, using the thermostatic-weight method (GOST 282687-89).

Soil samples for analysis were taken in the spring on fixed plots in the tillage variants, in the 0-30 cm soil layer at 10 cm intervals. Weed infestation of the crop rotation crops was determined during the growing season using the quantitative-weight method according to the methods of VNIIZ and ZPE (Kursk).

The crop yield was recorded by continuous harvesting of the accounting area of the plots with a SAMPO-500 combine. Yield data were adjusted to 14% moisture and 100% purity.

The years of the study varied in weather conditions. The growing seasons (May-August) of 2014 and 2018-2020 were characterized by insufficient precipitation, at 23.9-77.0% of the average long-term norm (204 mm). In these years an increased air temperature regime was noted, and the HTC value varied from 0.2 to 0.7. In 2013 and 2015-2017, precipitation during the growing season was much higher than the norm (1.2-2.1 times), the average daily air temperature exceeded the long-term average by 0.8 °C, and the HTC (according to Selyaninov) was 1.1, 1.6, 2.0 and 2.4, respectively, by year, against a long-term average of 1.0.
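For readers unfamiliar with the two standardizations used above, the Python sketch below (illustrative values only) computes the Selyaninov hydrothermal coefficient, HTC = 10 x (sum of precipitation) / (sum of mean daily temperatures) over days warmer than 10 °C, and converts a raw plot yield to the 14% standard moisture basis; both formulas are standard conventions rather than details taken from this paper.

def selyaninov_htc(daily_precip_mm, daily_temp_c):
    # HTC for the period with mean daily temperatures above 10 C:
    # 10 * sum(precipitation, mm) / sum(temperatures, C).
    warm = [(p, t) for p, t in zip(daily_precip_mm, daily_temp_c) if t > 10.0]
    total_p = sum(p for p, _ in warm)
    total_t = sum(t for _, t in warm)
    return 10.0 * total_p / total_t

def yield_at_14_percent_moisture(raw_yield_t_ha, actual_moisture_pct):
    # Standard dry-matter correction to the 14% moisture basis.
    return raw_yield_t_ha * (100.0 - actual_moisture_pct) / (100.0 - 14.0)

# Hypothetical example: 3.10 t/ha harvested at 16.5% grain moisture.
print(round(yield_at_14_percent_moisture(3.10, 16.5), 2))  # -> 3.01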
Results

In the soil and climatic conditions of the region, soil moisture is in most cases the limiting factor for the formation of crop yields. The main moisture recharge of the soil occurs in the autumn-winter-spring period and reaches a maximum by the beginning of spring field work.

The experimental material obtained on the accumulation of available moisture, depending on the basic tillage, showed that the greatest accumulation of autumn-spring precipitation, on average for the crop rotation crops, in both the arable and the one-meter soil layers (60.9-61.8 mm and 191.3-198.4 mm, respectively) was noted in options 3, 4 and 5, with the non-moldboard mid-depth and combined systems of basic tillage, against 54.3 and 184.1 mm in the control (option 1) (Figure 1). Surface treatment (option 2) worsened the filtration capacity of the typical chernozem. The stock of spring productive moisture under surface treatment was the lowest in the experiment: 51.9 mm in the arable layer and 174.8 mm in the one-meter layer, which is less than the control by 2.4 and 9.3 mm, respectively.

Studies of the nutritional regime of the typical chernozem in the crop rotation showed that the highest concentration of nitrate nitrogen in the soil was achieved with the moldboard-based and combined tillage systems (Table 2). In these options, its content in the arable layer (0-30 cm) of the soil before sowing crops averaged 25.1-25.3 mg/kg of soil. The lowest content of nitrate nitrogen was noted under surface treatment (option 2): 21.7 mg/kg of soil, which is less than the control (option 1) by 3.4 mg/kg of soil, or by 13.5%.

Determination of the availability of mobile phosphorus and exchangeable potassium showed a decrease in these elements in options 2 and 3, with the surface and non-moldboard mid-depth processing systems. Compared with the control (option 1), the content of mobile phosphorus was lower by 11.0 and 16.0 mg/kg of soil, and that of exchangeable potassium by 15.0 and 9.0 mg/kg of soil.

The use of combined tillage systems in the crop rotation (options 4 and 5) ensured the maximum content of phosphorus and potassium in the arable soil layer before sowing crops. A basic tillage system in the crop rotation without turning and mixing of the arable layer leads to a greater concentration of nutrients in the upper (0-10 cm) soil layer and a noticeable decrease in the lower (20-30 cm) layer; that is, it increases the differentiation of the arable layer in terms of fertility.

In options 1, 4 and 5, with the traditional mid-depth moldboard and combined tillage systems, a more homogeneous arable soil layer is created in terms of nutrient content, which has a positive effect on the formation of crop yields, especially in years with insufficient moisture supply.

During the research, the agrocenoses contained weeds belonging to three ecological and biological groups. The type of infestation can be characterized as annual/root-sprouting. Up to 80% of the total weed component was accounted for by annual grasses and dicotyledonous species; among that group, dicotyledonous species predominated (white goosefoot, catchweed bedstraw). Of the perennial weeds, field bindweed dominated. The results of the research showed that, by crop harvest, the lowest weed infestation was noted against the background of the traditional moldboard mid-depth cultivation, option 1 (Figure 2).
Surface tillage (option 2) led to a significant increase in weed infestation of the crop rotation crops: in weed numbers by 2.0 times and in air-dry weight by 1.4 times without chemical weeding, and by 2.2 and 1.5 times against the herbicide background, compared with the control (option 1). The combined moldboard-surface tillage system with 75% saturation with surface treatment (option 5) increased the number of weeds by 1.9 times without herbicides and by 1.5 times against the background of herbicide treatment. At the same time, the mass of the weed component was at the level of the control.

In the options with the non-moldboard mid-depth and combined moldboard-non-moldboard systems, an increase in the number of weeds was also noted, but to a lesser extent than with surface cultivation. The air-dry mass of weeds in these options was at the level of the control with the traditional mid-depth tillage system.

Fig. 2. Infestation of crops of the grain-fallow crop rotation depending on the systems of basic tillage and their combination with herbicides (2013-2020).

Chemical weeding of crops ensured a decrease in the number of weeds by an average of 55.8% across the tillage options; their air-dry mass decreased by 2.4 times.

Analyzing the productivity of a hectare of arable land in the grain-fallow crop rotation in terms of the yield of grain units (Figure 3), it can be noted that the traditional mid-depth moldboard system with 100% saturation with plowing and the combined moldboard-surface system with 75% saturation with surface tillage (options 1 and 5) ensured the equal and highest yield of grain units, which amounted to 2.29 t/ha without means of protection and 2.57-2.58 t/ha in combination with crop protection during vegetation.

A lower yield of grain units was noted in options 2, 3 and 4, with the surface, non-moldboard mid-depth and combined moldboard-non-moldboard processing systems: 2.20-2.23 t/ha and 2.44-2.47 t/ha without and with protection of crops during the growing season, respectively. Compared with the control, the traditional mid-depth moldboard tillage system, the decrease in the productivity of a hectare of arable land was 0.06-0.09 t/ha without protection and 0.07-0.14 t/ha against the background of the use of complex protection products (seed dressing + pesticides during crop vegetation). The use of the second level of crop protection in the technological complexes of crop cultivation in the crop rotation (seed dressing + pesticides during crop vegetation) ensured an increase in yield per hectare of arable land. The increase, on average across the tillage-system options, was 0.27 t/ha compared with the first level of protection (seed dressing).

The task of modern technologies for the cultivation of agricultural crops is not only to ensure high productivity, but also to obtain the maximum possible profit and profitability at a minimum cost per unit of production.

The use of one or another system of basic tillage is largely determined by economic indicators, which include an increase in profit and an increase in the profitability of production.
The systems of basic tillage and plant protection agents studied in the grain-fallow crop rotation had a certain impact on profit and profitability (Figure 4). They were highest in option 5, the combined moldboard-surface tillage system with 75% saturation with surface treatment, in combination with a complex of crop protection products (seed treatment + pesticides during crop vegetation against diseases, pests and weeds). Profit per hectare of arable land amounted to 29.59 thousand rubles and the level of profitability to 250.3%, against 29.28 thousand rubles and 241.8% in the control (option 1).

The lowest values of these indicators of economic efficiency were recorded for the resource-saving processing systems, surface and non-moldboard mid-depth (options 2 and 3). Compared to the control (option 1), profit decreased by 1620 and 1420 rubles/ha, and profitability decreased by 1.7 and 2.7%.

In the absence of plant protection products in the crop cultivation technologies during the growing season, not only did crop productivity and output per hectare of arable land decrease, but the economic indicators of production in the crop rotation also worsened. This regularity was typical for all studied technologies.

Discussion

The highest productivity of a hectare of arable land in the grain-fallow crop rotation was obtained in the options with traditional moldboard plowing at 100% saturation and the combined moldboard-surface (25% plowing + 75% surface) systems of basic tillage.

When using resource-saving processing systems, crop rotation productivity decreased. A higher profit and level of profitability of grain-unit production was ensured by the combined moldboard-surface treatment system in combination with a complex of plant protection products (seed dressing + fungicides, insecticides and herbicides during crop vegetation). The least economically profitable were the technologies of cultivation of field crops with a resource-saving system of basic tillage.

Conclusion

In the soil and climatic conditions of the northeast of the Central Chernozem region, tillage in field crop rotations should be differentiated and built taking into account the agroecological requirements of crops. In the grain-fallow crop rotation (black fallow - winter wheat - soybean - barley), the most agro-economically profitable is the combined moldboard-surface system of basic tillage, in which plowing to 25-27 cm is carried out for the leguminous crop (soybean) and disk tillage to a depth of 10-12 cm for the winter wheat and barley, in combination with means of protecting crops from harmful objects.

Fig. 1. The content of productive moisture in the soil under various systems of basic cultivation of typical chernozem in grain-fallow crop rotation (2013-2020).
Fig. 3. The productivity of a hectare of arable land depending on the systems of basic tillage and their combination with plant protection products in the grain-fallow crop rotation (2013-2020).
Fig. 4. Economic indicators of the production of a grain unit with various systems of basic tillage and their combination with crop protection agents in grain-fallow crop rotation (2013-2020).
Table 1. Technology of cultivation of agricultural crops.
Table 2.
The content of mineral nutrition elements in the soil before sowing crops, with various systems of basic processing, on average for the crop rotation of 2013-2020 (mg/kg of absolutely dry soil).
2023-11-10T16:21:29.625Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "ffd0814412ebb953561c0ad84cd03b13b3ea2f4a", "oa_license": "CCBY", "oa_url": "https://www.bio-conferences.org/articles/bioconf/pdf/2023/16/bioconf_cibta2023_01089.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "dfaebcfbad8a5fe995dc0ed8b15bf10eca88e3cb", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science" ], "extfieldsofstudy": [] }
231733808
pes2o/s2orc
v3-fos-license
A Central Clearing Clinic to Provide Mental Health Services for Refugees in Germany

Objective: To determine migration-related distress patterns in refugees and the feasibility of a de novo established, central low-threshold outpatient clinic serving more than 80,000 newly arrived refugees in the metropolis of Berlin. Methods: In an observational cohort study, the relative prevalence of major psychiatric disorders by age, place of living within Berlin, language and region of origin was assessed in a refugee cohort from 63 nationalities speaking 36 languages. Findings: Within 18 months, a total of 3,096 cases with a mean age of 29.7 years (SD 11.7) were referred from all 12 districts and 165 of 182 subdistricts of Berlin to the CCC. 33.7% of the patients were female. The three most frequent diagnoses were unipolar depression (40.4%), posttraumatic stress disorder (24.3%) and adjustment disorder (19.6%). Conclusion: The present data give insight into the distribution of mental disorders in a large sample of refugees and provide evidence that a CCC is an effective service to quickly and broadly provide psychiatric consultations and thus to overcome the classical barriers refugees usually experience in host communities. In Berlin, Germany and Europe, treatment resources for this population should focus on stress- and trauma-related disorders.

INTRODUCTION

As a consequence of armed conflicts in embattled countries, more than sixty million people worldwide have been forced to leave their home countries according to recent UNHCR assessments (1). Although most forced migrants were seeking shelter in the countries directly neighboring the war regions, Central Europe has increasingly become one of the main destinations of larger transnational migration streams (2). As a consequence, since summer 2015 more than one million refugees have found their way to and stayed in Germany (3). A large proportion of those refugees have witnessed the cruelties of civil war and experienced exceedingly stressful journeys to Central Europe. In addition, the process of resettlement in a novel environment with unfamiliar habits, norms and expectations, restrictive policies in regard to residence status, limited access to mainstream health services, the lack of other basic infrastructure and the living conditions in provisional shelters all increase insecurity and uncertainty among the newcomers, often leading to ethnic discrimination and social exclusion (4). In sum, this leads to an accumulation of emotional distress, presumably resulting in increased prevalence rates of mental disorders within the group of refugees as compared to the general population (5)(6)(7)(8). While the exact response patterns to stress and trauma experiences in this specific population after more than 2 years still remain unclear, it is beyond question that the overall demand for mental health services exceeded the capacities available within the existing German health care structures (9,10). Given the large number of potentially affected individuals, mental health stakeholders across Germany fiercely debate how the newcomers could best be provided access to a health care system in which mental health care services are often impeded by societal, individual and structural barriers. Such barriers comprise a lack of knowledge about symptoms of mental disorders and treatment possibilities within the health care system, and the still existing and widespread stigmatization of mental disorders.
In addition to those general barriers, refugees experience further difficulties in the form of language barriers and often culturally engrained, different disease and treatment models (10)(11)(12)(13). Taken together, Germany faced a situation in which a largely unprepared mental health care system needed to provide culture- and trauma-sensitive diagnostic and therapeutic procedures for a large number of refugees. As the health system was unable to meet this challenge without additional capacities, new mental health care models were needed to provide fast and low-threshold access to mental health care in order to diagnose, prioritize and treat refugees with mental disorders. To address this need, we established in Berlin Germany's first central clearing clinic (CCC), an institution in which refugees, regardless of their legal and insurance status, are seen at short notice. We here report results of the first six quarters of the CCC, in which the largest cohort of refugees so far in Germany (n = 3,096) underwent mental health screening.

Study Population and Outreach Activities

In the time frame between February 10th 2016 and July 28th 2017, there was a total of 4,635 contacts, which were aggregated to a total of 3,549 cases. Of these, 453 cases were excluded from further analysis because the patients did not show up to complete the diagnostic procedure or the diagnosis remained unclear after two contacts. All assessments were performed in the central clearing clinic, which is centrally located and situated in an area well known to refugees, since it hosted the registration authority for all newly arrived refugees in Berlin until May 2016. All refugee housing facilities were informed about the availability of psychiatric services at the CCC on the day before opening (February 9th 2016). Additional outreach activities for refugee registration authorities, social workers, teachers, psychologists and volunteers working in the housing facilities were offered every 4 to 8 weeks, providing information about the CCC and basic knowledge in culture-sensitive diagnosis and treatment of trauma- and stress-related symptoms.

This observational study was performed in accordance with International Conference on Harmonization Good Clinical Practice guidelines and the principles of the Declaration of Helsinki. The Charité ethics committee approved the protocol of this retrospective study.

Appointment Procedure and Psychiatric Assessment

Appointments were made via telephone and email by social workers, volunteers, physicians, or by refugees themselves. A nurse with experience in mental health care made an initial estimate of case severity in order to adjust waiting time and give preference to the more severe cases, optimizing resource allocation. There were also slots for immediate emergency contacts, for example for suicidal patients, after acute deterioration of symptoms, or when the continuity of medication was crucial.

All psychiatric assessments were conducted by physicians experienced in transcultural psychiatry. The team of physicians consisted of two full-time psychiatrists for adults and one child and adolescent psychiatrist, who worked part time at the CCC during 2016 and full time from 2017. Daily presence of at least one psychiatrist who spoke the most frequent language (Arabic) natively was ensured. Average assessment time was 1 hour for adults and 90 minutes for children and adolescents.
Some patients had only a single appointment. These cases either did not need further attendance or were directly transferred to other institutions, such as the outpatient clinics of psychiatric hospitals in Berlin. In other cases, follow-up appointments at the CCC were scheduled to complete diagnostics or to offer further psychiatric support when a referral could not be organized in a timely manner or for other reasons, lack of translators being the most common one. At the CCC, a range of the most commonly used psychiatric drugs was available on site. When physicians saw an indication to initiate pharmacological treatment, medication could be provided directly to the patients. Additionally, psychotherapeutic short-term interventions were offered as a group program to Farsi- and Arabic-speaking women.

Translation Techniques

Language barriers were addressed in three ways: (i) a native Arabic-speaking physician provided care for the larger part of Arabic-speaking patients; (ii) interpreters for the main languages (Arabic and Farsi/Dari) were present in the CCC at all times; and (iii) for other languages, on-demand interpreters and/or a video-based interpreter service were used. These video-based consultations were conducted in front of a screen which connected a permanently available professional interpreter to psychiatrist and patient via audio and video. Although this medium certainly influenced the interaction, the system was overall well accepted by psychiatrists and patients.

Analysis

We described the diagnoses of refugees who were referred to the CCC in the mentioned time frame. We reported the distribution of the clinical syndromes and disorders involved, by age and sex. Data were analyzed with SPSS 21.

Sociodemographic Data

Over 72 weeks (from February 10th 2016 to July 28th 2017), the CCC had a total of 4,635 contacts with 3,096 refugees (see Supplementary Figure 1) from all twelve districts and 165 out of 182 sub-districts of Berlin (see Figure 1). The investigated population consisted of 2,052 male and 1,044 female refugees with a mean age of 29.7 years (SD 11.7). The age pattern differed significantly from that of the general German population, with a clear peak in the age range between twenty and forty years (see Figure 2). Seven hundred sixty-two contacts (16.4%) and 542 patients (17.5%) referred to the CCC were younger than 18 years (see Figure 2). Further characteristics of the investigated population are shown in Table 1.

Clinical Data

The most frequent disorders were unipolar depression (40.4%), posttraumatic stress disorder (24.3%) and adjustment disorder (19.6%). Notably, 7.4% of all patients referred to the CCC did not show any clinical syndrome that could be classified according to DSM-5 or ICD-10. Unipolar depression was more frequent in female refugees, whereas addiction and psychotic syndromes were diagnosed more often in male refugees (further details in Table 2).

Discussion

The main results of the present investigation were (i) that a central clearing clinic is a feasible and probably superior institutional strategy to provide mental health care, and (ii) that stressful and traumatic life and flight experiences are associated with complex psychopathological reaction patterns, with affective disorders, posttraumatic stress disorder and adjustment disorders being the most prominent.
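The descriptive analysis above (diagnosis distribution by age and sex) was run in SPSS; purely for illustration, the same tabulation could be produced in Python with pandas, assuming a hypothetical case-level file with columns diagnosis, sex and age.

import pandas as pd

# Hypothetical case-level table: one row per patient.
cases = pd.read_csv("ccc_cases.csv")  # columns: diagnosis, sex, age

# Relative frequency (%) of each diagnosis overall, and split by sex,
# mirroring the descriptive reporting in the text.
overall = cases["diagnosis"].value_counts(normalize=True).mul(100).round(1)
by_sex = pd.crosstab(cases["diagnosis"], cases["sex"], normalize="columns").mul(100).round(1)
print(overall)
print(by_sex)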
As a consequence of the steep rise in transnational migration, Germany is becoming progressively ethno-culturally diverse, posing challenges for the country's population and economy as well as for the refugees. This includes issues pertaining to the social and cultural inclusion of people into a receiving society, to social equality, education, the labor market, democratic participation, social cohesion and the health care system. At the same time, the influx of people from ethnically and culturally heterogeneous backgrounds might spark progress toward an inclusive society benefiting from the variety of languages, cultural and ethnic diversity and the values and norms connected to them. More specifically, the challenges within health care comprise the availability of transculturally trained experts, techniques to overcome the language barrier and, as a consequence, the question of whether mental health care services should be provided in a centralized or a decentralized fashion.

Characteristics of the Central Clearing Point

Typical Western European mental health care institutions are usually not experienced in working with refugees from heterogeneous countries and cultures, who often present unfamiliar and diverse histories of mental disorders and traumata. The diagnostic evaluation of psychiatric disorders in particular is associated with specific difficulties: different concepts of illness and mental health (13), varying expressions of psychological distress, and a lack of acceptance of and trust in an unknown health care system (13). Potential consequences are misdiagnoses, which might delay adequate treatment, with significant emotional distress for patients and their relatives as well as an additional financial burden on the health care system. Additionally, the absence of professional interpreters is often an essential obstacle. Recent studies reveal that patients who face language barriers receive less favorable medical care (14,15). In order to prevent such negative consequences, the use of professional interpreters is highly needed (16)(17)(18)(19). However, the financial funding of interpreters is often unclear, and health care institutions have to face the resulting high financial burden (20). The availability of professional interpreters and bilingual physicians was a major advantage of the CCC. The financial as well as the organizational burden could not have been carried by the regular mental health care system. We saw patients speaking 36 different languages (Supplementary Table 2), with Arabic and Farsi being by far the most common. By conceptualizing the CCC as a central access point located next to the central registration authority for refugees, we eased access to mental health care. This centralized set-up of the CCC contrasts with the pre-existing organization of mental health care facilities in Berlin, which traditionally aim to provide services in every district of the city. Those community-based mental health services have the advantages of (i) being close to the homes of the patients and thus being more accessible, (ii) facilitating the collaboration between psychiatrists/psychologists and the social workers, institutions etc. of a respective district, (iii) setting clear responsibilities regarding mental health care in Berlin, and (iv) enabling inclusion in the regular health care system instead of developing parallel structures, which may enhance barriers and exclusion in the long run.
A central clearing point contrasts with this model, but it was broadly utilized by the refugee population (as indicated in Figure 1). A reason might be that the available general mental health services did not have the resources to provide sufficient care within an appropriate time span: the group of refugees was, even after arrival in Berlin, a highly mobile group. Refugees often had to move several times within Berlin, from provisional refugee camps to permanent housing. For many refugees, the CCC became a stable contact they could return to whilst having to make an odyssey through different accommodations and institutions during their first months in Germany.

Prioritization Strategies in Mental Health

Especially in those countries that have become the primary destination of migration from civil war regions, the comparatively high trauma and stress load calls for novel solutions in the field of mental health, addressing the need for a quick and substantial response while at the same time acknowledging the composition, threshold and extent of the available resources within the traditional system. Such solutions can be inspired by concepts of humanitarian aid (9) or emergency medicine, which usually address situations in which a quick response is required in an environment with limited resources. In such settings, in which treatment resources are insufficient to treat all patients immediately, a prioritization system ("triage") is an effective approach to allot therapies efficiently. The concept, first described by Dominique Jean Larrey (19) during the Napoleonic Wars, is nowadays a standard framework for many emergency medical services and a tool often used in mass-casualty incidents, e.g., in disaster medicine. Triage in this context refers to distinguishing between different levels of patients' needs and referring them to adequate treatment options; it does not exclude any patient from required treatment but rather helps to provide targeted interventions. Psychosocial and disaster behavioral health issues in situations affecting large numbers of patients are addressed in broader concepts such as continuous integrated care. However, triage concepts that exclusively address mental health services, entrusted with decisions regarding the allocation of resource-intense therapies or inpatient treatments, have rarely been implemented. The main considerations hindering such concepts are, first, the general approach that every patient should be provided with the needed treatment as soon as possible, regardless of the severity of the disorder. Second, unlike in emergency medicine, treatment decisions in most mental disorders rest on long-term considerations and often lack an immediate life-threatening consequence, making the implementation of a triage system more complex. In the CCC we were able to show that a prioritization system can be applied in mental health care. Advantages of the CCC were that, with relatively little means, it was possible to provide fast mental health care services to a larger population that was otherwise hard to access. This was indicated by the fact that patients were referred from all districts of Berlin. Even though patients received an evaluation within a relatively short time span, the waiting time for appointments increased over the months, which underlines the high need for a mental health care contact point.
While those patients who needed psychiatric care urgently could be identified faster and transferred to appropriate institutions for further treatment, less severe cases could also be identified and partly supported at the CCC, thus unburdening the more specialized institutions and making more efficient use of the available resources, such as specialized trauma therapies. In this way, it was avoided that limited capacities were used ineffectively and immoderately. The need for costly interventions was evaluated by psychiatrists with experience in intercultural work, ensuring that the indication was checked professionally. A shortcoming of the CCC was that, for those not in need of specialized treatments, low-threshold interventions such as stress relief groups or support by social workers were not (yet) broadly available. To partly overcome this issue, we offered short-term interventions in a group setting, which, however, were only available to a small group of our patients.

Patterns of Trauma and Stress Response in Refugees

So far, there is little evidence on the prevalence of mental disorders among refugees who arrived in Germany in the last 3 years (20). The data from the CCC correspond to some extent with existing international data showing an increased risk for depression and post-traumatic stress disorders in refugee populations compared to the general population (21,22). In general, the rates of almost all psychiatric disorders diagnosed at the CCC are increased compared to the general German population (DEGS Study) (23). When comparing our findings to prevalence data of the general German population, one must take into account that our data rely on a preselected subpopulation that is younger than the general population (see Figure 2). In consequence, disorders more prevalent in the population below 40 years are likely to be overrepresented. Importantly, as only cases that presented noticeable psychiatric problems were referred to the CCC, one must assume lower prevalence rates in the general refugee population. On the other hand, the data from the 3,096 included cases allow useful insights into the complexity of psychiatric disorders, with a broad variety of clinical syndromes, and suggest the need for further differentiated studies on the impact of refugee status, refugee living conditions, flight circumstances and discrimination experiences on the mental health status of the refugee population in Germany. Importantly, the finding that affective disorders, PTSD and adjustment disorders are the leading diagnoses supports the current and future need for an increase in psychosocial and psychotherapeutic services for refugees.

Limitations

However, we need to point out some difficulties and potential blurs of the preliminary data presented here: (i) diagnostic evaluation in the CCC was mainly based on clinical interviews. In order to respond to the huge demand and to keep the screening procedure feasible, we refrained from the systematic use of psychological questionnaires, which were not available in all languages. (ii) The reliability of our diagnostic evaluation therefore depended crucially on the quality of interpretation and may differ between, e.g., consultations by Arabic native-speaking psychiatrists and consultations conducted with interpreters. (iii) In the procedure of applying for asylum, medical attestations become a certain form of informal currency that may influence the chances of success.
Many patients consulting the CCC asked for medical attestations; in some cases this demand seemed to be the main reason for consultation. It was sometimes difficult to estimate how and to what extent this demand influenced the description of symptoms relevant for the diagnostic assessment. (iv) Moreover, the structure of the CCC led to mainly the severe cases being referred, and thus the data are not representative of the refugee population, despite the large sample size. In conclusion, we were able to demonstrate that the concept of a central institution prioritizing the mental health needs of individuals with a high stress and trauma load is feasible and, importantly, well accepted. The CCC concept might be scalable and serve as a model for other settings where populations with a high stress load are arriving in receiving countries with limited resources within the mental health care system. It may not only improve the mental health of refugees but may also serve as an intervention against the frequently reported perception and experience of discrimination, which may further hamper the adaptation process for newcomers during resettlement (24,25). To avoid exclusive health care structures, the current challenge is to integrate emergency services, including translators, into general mental health care and its local organization, including service sectors and local networks of hospitals and outpatient services.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Charité ethics committee. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

MB, PP, IS, JA, and JS analyzed the data and drafted the report. PP, CA, MBB, AHa, AHo, SS, and DR assisted in clinical case confirmation. AHe, S-MW, IH, MB, JS, EH, UK, and MA contributed to the methodology and study design. JS, IH, and AHe edited the drafted report. All authors contributed to the article and approved the submitted version.

FUNDING

This study was funded by the Berlin Office for Refugees, State of Berlin, Germany.
Estimating the Magnitude of Illicit Cigarette Trade in Bangladesh: Protocol for a Mixed-Methods Study

The illicit tobacco trade undermines the effectiveness of tobacco tax policies; increases the availability of cheap cigarettes, which, in turn, increases tobacco use and tobacco-related deaths; and causes huge revenue losses to governments. There is limited evidence on the extent of the illicit tobacco trade, particularly cigarettes, in Bangladesh. This paper presents the protocol for a mixed-methods study to estimate the extent of illicit cigarette trade in Bangladesh. The study will address three research questions: (a) What proportion of cigarettes sold as retail are illicit? (b) What are the common types of tax avoidance and tax evasion? (c) Can pack examination from the trash recycle market be considered a new method to assess illicit trade in comparison to that from retailers and streets? Following an observational research method, data will be collected utilizing empty cigarette packs from three sources: (a) retailers; (b) streets; and (c) the trash recycle market. In addition, a structured questionnaire will be used to collect information from retailers selling cigarettes. We will select post codes as the Primary Sampling Unit (PSU) using a multi-stage random sampling technique. We will randomly select eight districts from eight divisions, stratified into those with and without a land border; within each district, we will randomly select ten postcodes, stratified into rural (five) and urban (five) PSUs to ensure maximum geographical variation, leading to a total of eighty post codes from eight districts. The analysis will report the proportions of packs that do not comply with the study definition of illicit. Independent estimates of illicit tobacco are rare in low- and middle-income countries such as Bangladesh. Findings will inform efforts by revenue authorities and others to address the effects of illicit trade and counter tobacco industry claims.

Introduction

The World Health Organization's (WHO) Framework Convention on Tobacco Control (FCTC) suggests a number of measures to reduce the demand for and supply of tobacco. Elimination of all forms of illicit tobacco trade (Article 15) is an essential tobacco control measure [1,2]. Illicit trade may include large- and small-scale smuggling, illicit manufacturing, and counterfeiting of existing brands [1]. Cigarettes are a particularly attractive product to smugglers [2]. As cigarettes in Low- and Middle-Income Countries (LMICs) carry relatively higher taxes compared to other tobacco products such as birris and smokeless tobacco [3], evading tax by diverting cigarettes into the illicit market (where sales are largely tax-free) can generate a considerable profit margin for smugglers [2,4]. Illicit trade undermines the effectiveness of tobacco tax policies, makes cheap cigarettes available to smokers, and thus increases tobacco use and tobacco-related deaths [2,3]. Eliminating or curbing illicit trade will therefore increase prices and government tax revenue and reduce tobacco consumption and tobacco-related premature deaths [3,5]. The share of illicit cigarettes in total cigarette consumption is estimated to be 11.6% globally and 16.8% in LMICs [2]. Tobacco companies claim that the illicit trade in cigarettes has been growing rapidly since the 1990s [4], and often use inflated estimates of illicit cigarettes to argue against cigarette tax increases [5][6][7][8].
In the absence of national-level independent data on the extent of illicit cigarette trade, and with only limited data on cigarette confiscation provided by the enforcement authorities [3,9], tobacco companies manipulate evidence to their advantage and use these tactics to continually lobby governments to reduce cigarette taxes [3,6,7]. In Bangladesh, 35.3% of adults (38 million) use tobacco regularly, 14% (15 million) use cigarettes, and 5% (5.3 million) use "birri" [10]. However, there has been no independent published evidence measuring the extent of illicit tobacco trade in Bangladesh. Nevertheless, following industry estimates, it has been claimed that the illicit cigarette trade would be about 2% of the total cigarette market, leading to a revenue loss of 8 billion Taka, 4% of the total tobacco revenue [11]. This protocol presents a mixed-methods study to estimate the extent of illicit cigarette trade in Bangladesh. Three research questions will be addressed: (a) What proportion of cigarettes sold as retail in Bangladesh are illicit? (b) What are the common types of tax avoidance and tax evasion in Bangladesh? (c) Can pack examination from the trash recycle market be considered a new method to assess illicit trade in comparison to that from retailers and streets? Findings from this study will help policy makers develop more effective tobacco control measures and refute industry claims in Bangladesh.

Materials and Methods

Measuring illicit tobacco trade is a complex task due to the illegal nature of the activities involved [2]. Various methods have been used to assess the extent of illicit trade, such as measuring the difference between consumption and tax-paid sales, interviewing smokers, studying features of cigarette packs, and econometric modeling, each having its own strengths and weaknesses [3,4,12,13]. We will use observational methods and collect data from empty cigarette packs from three sources: (a) from retailers, resulting from their loose cigarette sales [3,14]; (b) littered packs from the streets surrounding the retailers [3,15,16]; and (c) littered packs from the trash recycle market. The last source can be considered a methodological innovation, as it has not been applied before, and could be particularly useful in LMICs where trash recycle markets are established. Regardless of the pack source, such observational studies provide a direct way to assess illicit trade [7,17]. Once collected, we will examine certain characteristics of cigarette packs, such as "no mention of retail price", "mention of brand element", "no graphic and textual health warnings", "no declaration", "a duty-free sign", and "absence of correct and authentic tax stamps", as indicators of being illicit [12]. Such information could also be collected either directly from tobacco users and/or by inspecting tobacco users' packs. However, as "loose selling" of cigarettes is common and permitted by law in Bangladesh, consumer surveys and pack inspections are of limited use. On the other hand, loose selling makes retailer pack collections more relevant. We will therefore adopt the approach developed in [3] to measure the extent of illicit cigarettes sold as retail in Bangladesh. In addition, a structured questionnaire will be used to collect information from retailers selling cigarettes.

Study Sites and Sampling Design

Bangladesh is divided into eight administrative divisions (the highest administrative unit) and 64 districts.
Bangladesh also has long international land borders with neighboring countries; 31 districts in six divisions border India and Myanmar (Table 1). Figure A1 in Appendix A presents the map of Bangladesh, where the green colored places represent the two divisions with no land border area and the rest share land borders with neighboring countries. We will use post codes as the primary sampling unit (PSU) and select PSUs using a multi-stage random sampling technique [14]. In a division having districts with borders, we will only consider those districts having borders for inclusion in the study, and randomly select one such district from the division. For instance, in Rajshahi division, among eight districts, four (namely, Naogaon, Rajshahi, Nawabganj, and Joypurhat) have areas with a land border. Therefore, the sampling frame consists of those four districts only, and one will be randomly selected. In the two divisions without an international land border, Dhaka and Barisal, we will randomly select one district from each. For example, in Barishal division, there are six districts (namely, Bhola, Barishal, Barguna, Pirojpur, Patuakhali, and Jhalokathi) and none of them has an international border; hence all of them are included in the sampling frame for random selection. In total, eight districts will be selected, one from each division. Within a district, we will select 10 post codes randomly: 60 post codes from border districts and 20 from non-border districts, for a total of 80 post codes from eight districts. The PSUs in each district will also be stratified into rural (five post codes) and urban (five post codes) to ensure maximum geographical variation. Post codes (PSUs) under a city corporation or paurasava (municipality) will be defined as "urban", while postcodes outside those places will be considered rural. Currently, there are 1216 post codes, each with an average area of 121 square kilometers. The list of post codes is available by district and sub-district [18]. Table 2 illustrates the sampling design. We will collect the list of districts for each division and organize them with respect to their border status. A two-digit serial number will be assigned to each district, and following randomization the respective district will be selected; a minimal sketch of this selection procedure is given at the end of this passage. Figure A2 in Appendix A presents the selected districts in each division, covering the urban-rural and border-non-border dimensions, from which cigarette packs will be collected.

Data Collection and Management

From each PSU, three collection methods will be used: packs from tobacco retailers; packs discarded in the street; and packs discarded in trash recycle markets. An interview of the same cigarette retailers supplying packs for the study will also be carried out.

Collecting Packs from Tobacco Retailers

The data collection team will determine a central point (such as a bus station, a government building, or a market place) in each PSU. All types of shops, such as tea stalls, kiosks, supermarkets, department stores, and cigarette shops that sell cigarettes or birri, or both, are eligible to take part in the study. Enumerators will walk half a kilometer in both directions on the main street around the public gathering place and approach all the retailers meeting the eligibility criteria. All eligible retailers will be provided with both verbal and written information about the study. If they agree to take part, they will be required to sign the consent form.
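To make the multi-stage selection concrete, the following is a minimal Python sketch of the district and post code sampling described above. The sampling frame shown here is a placeholder: only the Rajshahi and Barishal district lists reflect names given in the text, and the remaining divisions would need to be filled in from official lists.

```python
import random

# Hypothetical sampling frame: division -> list of (district, has_land_border).
# Only the Rajshahi and Barishal entries reflect districts named in the text.
FRAME = {
    "Rajshahi": [("Naogaon", True), ("Rajshahi", True),
                 ("Nawabganj", True), ("Joypurhat", True),
                 ("Pabna", False), ("Natore", False),
                 ("Sirajganj", False), ("Bogura", False)],
    "Barishal": [("Bhola", False), ("Barishal", False),
                 ("Barguna", False), ("Pirojpur", False),
                 ("Patuakhali", False), ("Jhalokathi", False)],
    # ... remaining six divisions omitted here
}

def select_districts(frame, seed=2020):
    """Pick one district per division; restrict the frame to border
    districts whenever the division has any."""
    rng = random.Random(seed)
    chosen = {}
    for division, districts in frame.items():
        border = [d for d, has_border in districts if has_border]
        pool = border if border else [d for d, _ in districts]
        chosen[division] = rng.choice(pool)
    return chosen

def select_psus(postcodes_by_stratum, rng, n_per_stratum=5):
    """Within a district, draw five rural and five urban post codes."""
    return {stratum: rng.sample(codes, n_per_stratum)
            for stratum, codes in postcodes_by_stratum.items()}

districts = select_districts(FRAME)
print(districts)  # e.g. {'Rajshahi': 'Naogaon', 'Barishal': 'Bhola'}
```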
Following consent, the field worker will proceed to pack collection. The retailers will be supplied with a "collection bag" on the previous day and asked to keep in that bag all the empty packs resulting from loose sales on the following day, from the time the shop opens. All "collection bags" will be retrieved by the field workers at the end of the business day. We will provide a small monetary reward of BDT 3 (GBP 0.03) per empty pack deposited in the collection bag. In case empty packs of the cheapest and most popular brands are not available in the bag from a particular retailer, the team will take a picture and code all the relevant attributes of such packs observed in the shop itself [3].

Collecting Discarded Packs from Streets

In addition to collecting empty packs from the tobacco retailers, discarded packs will be gathered from the streets where those retailers are established and also from some other locally important streets around public gathering places [3]. The distance covered in each street will be one kilometer (half a kilometer in each direction). The field workers will use a separate "collection bag" with a unique identification number for the discarded packs in each area. The pack collection from streets will take place on the same day that the "collection bags" are distributed to the retailers. The street selection will also consider the street's suitability for walking and the likelihood of finding littered packs. Necessary safety equipment such as masks and gloves will be provided to the field staff while collecting littered packs.

Collecting Discarded Packs from Trash Recycle Markets

A common practice in Bangladesh is to recycle waste paper, including cigarette packs, which is then used as raw material in paper mills. Large municipal cities in urban areas, locally known as "City Corporations", have a trash collection system. They collect trash from confined areas and from street corners where trash bins are placed. A trash recycle market exists in some places to collect used paper and cigarette packs, but this is limited in rural areas. If a trash recycle market exists in the sample areas, and we are able to obtain the empty packs, we will include them in the study.

Interview of Point of Sale Retailers

During bag collection, basic information such as the name and type of the store, the price and estimated quantity of the cheapest brand sold in the shop, and the name, price, and estimated quantity sold of the most popular brand will be collected. In addition, retailers will be asked about the profit share of tobacco products in their total daily profit, sources of supply, incentives from the suppliers' end, market supervision, and their knowledge, understanding, and perception regarding illicit tobacco products. A structured questionnaire previously used in studies in the UK, Bangladesh, Nepal, and Pakistan will be adapted to collect these data from the retailers [19]. Following the Tobacco Advertisement and Promotion Survey (TAPS) [20], a standard pack analysis tool has been developed.

Field Design and Implementation

Ten data collectors will be recruited and attend a one-day training session on: (a) getting packs from retailers; (b) getting packs from the streets; and (c) collecting packs from the trash recycle market. The in-house training will focus on ways of finding the public gathering places, selecting the retailers, instructing the retailers about preserving packs for the day under certain conditions, and collecting the packs.
They will also receive training on obtaining informed and written consent, how to generate the "Unique ID" for the "Collection Bags", how the packs should be preserved, and how to collect discarded packs from the streets. The data collection will be carried out between February 2020 and September 2020, and the analyses will be performed by March 2021. The Research Fellow and one field supervisor will visit the study sites during the data collection period to check that the survey is done properly. They will also visit a subset of the chosen retailers. Data will be monitored for quality and completeness by the core research team. Due to the sensitive nature of the research, we will make our best efforts to provide security measures for our data collectors. We will group the data collectors together whenever possible. We will prescribe specific hours for data collection, outside of which data collectors will not be allowed to collect data. Every data collector will have a cell phone with pre-paid credit, and a dedicated team member will be on call during the prescribed data collection time to provide assistance with questions or any issue in the field. All the collection bags will be brought back to the research office, where data entry operators will inspect the packs and register their characteristics into the database for further analysis. The training for data entry operators will take place with the direct support of National Board of Revenue (NBR) officials in Bangladesh. The relevant officials will be invited to share the techniques and expertise they employ when examining the authenticity of tax stamps on cigarettes.

Ethical Considerations

The primary principles of good ethical practice, such as autonomy, justice, beneficence, and non-maleficence, will be maintained during the study. We are conscious that retailers could have concerns that the research may negatively impact their business, and interviewers could feel uncomfortable during interactions with retailers. We will adopt multiple measures to mitigate these concerns. First, all potential retailers will be provided with information regarding the project, and written consent will be taken prior to data collection. Those who do not agree to participate after reviewing the study information will not be enrolled in the study. The reasons for declining to participate will be recorded. Second, retailers will be reassured that all information collected during the course of the study will be kept strictly confidential. Third, researchers will be trained in techniques for handling conflict, threats, abuse, or compromising situations. We will specifically train researchers to ask questions in a non-judgmental manner and not to put any pressure on the respondents if they show signs of reluctance in answering one or more questions. Our data collection team will be respectful to the retailers and will avoid interfering with the normal flow of business. Interviewers will only continue the interview when the retailer is not dealing with or in the vicinity of customers. Every interviewer will have the right to decline collecting packs from a retailer in a location that makes him or her feel unsafe.
The study has been granted ethics approval by the National [committee name incomplete in the source].

Analysis Plan

We will classify a cigarette pack as illicit if it has at least one of the following attributes:

1. No mention of "retail price": It is mandatory to print the retail price of the goods "on the body of the goods or on every package, sachets or cells distinctly, conspicuously and indelibly". This applies to cigarettes, and packs not containing the retail price can be considered illegal [21].

2. Mention of brand element: Packets should not contain any brand elements, for instance Light, Mild, Low Tar, Extra, and Ultra. Packs mentioning a brand element will be considered illegal [22].

3. No graphic health warnings: Health warnings shall be printed on both sides of the packet, covering at least 50% of the total area of each main display area. In addition, the warnings should be in colored pictures. The notified rules further require that the ratio of pictorial to written warnings be 6:1 and that the written warnings be printed in a black background with white font color. Packs not fully complying with the above rules will be considered illegal.

4. No textual health warning (the full wording of this criterion is incomplete in the source): packets are presumably required to carry prescribed textual warnings such as "Second-hand smoking causes death."

5. No declaration: Selling tobacco products without the statement "Sales allowed only in Bangladesh" printed on the packets is illegal [22].

6. A duty-free sign: Cigarette packs collected from retailers with a "duty-free" sign will be classified as illegal, as these are to be sold in duty-free shops only and should not be available in the market.

7. Absence of a correct and authentic tax stamp/banderole: The government has made it obligatory to use a tax stamp or banderole on cigarette packets. Any sale of cigarettes without this tax stamp or banderole is legally prohibited. The banderole/stamp for cigarette packs of different sizes and designs has been defined [21]. We will check whether the packs have used the banderole/stamp correctly as per the SRO and VAT Booklet, and whether the banderole/stamp is genuine.

In general, all pack characteristics considered for detecting legitimacy in the retailer sample might also be accepted for those in the littered or trash samples.
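The seven criteria above amount to a simple disjunction over observable pack attributes. As an illustration, the following is a minimal Python sketch of that classification logic; the field names are hypothetical and would need to match the actual pack analysis tool.

```python
# Each pack is coded as a dict of observed attributes (hypothetical field names).
ILLICIT_CHECKS = {
    "no_retail_price":  lambda p: not p["retail_price_printed"],
    "brand_element":    lambda p: p["has_banned_brand_element"],  # Light, Mild, ...
    "bad_ghw":          lambda p: not p["graphic_warning_compliant"],
    "no_textual_warn":  lambda p: not p["textual_warning_present"],
    "no_declaration":   lambda p: not p["sales_only_bd_printed"],
    "duty_free_sign":   lambda p: p["duty_free_sign"],
    "bad_tax_stamp":    lambda p: not p["tax_stamp_authentic"],
}

def classify_pack(pack):
    """Return (is_illicit, list of failed criteria) for one coded pack."""
    failed = [name for name, check in ILLICIT_CHECKS.items() if check(pack)]
    return bool(failed), failed

def illicit_proportion(packs):
    """Share of packs failing at least one criterion."""
    flags = [classify_pack(p)[0] for p in packs]
    return sum(flags) / len(flags) if flags else float("nan")

example = {"retail_price_printed": True, "has_banned_brand_element": False,
           "graphic_warning_compliant": True, "textual_warning_present": True,
           "sales_only_bd_printed": False, "duty_free_sign": False,
           "tax_stamp_authentic": True}
print(classify_pack(example))  # (True, ['no_declaration'])
```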
Nevertheless, one limitation of the above general approach is that it might fail to distinguish between tax evasion and tax avoidance. More specifically, a street sample may contain some legally purchased duty-free packs, as well as packs bought in neighboring countries, alongside packs that were illegally purchased and illegally sold in the retail shops on that street. This possibility might result in under- or overestimation of the illicit cigarette trade. Hence, an adjustment factor is warranted when measuring the magnitude of illicit cigarettes in the samples collected from the street and the trash recycle market. Table 3 gives a thematic presentation of the legitimacy criteria for a cigarette pack and the adjustment requirement in the estimation. Assuming that the littered packs in the streets by and large reflect the average characteristics of those sold by the retailers on the same street, the following adjustment method would be applied. Suppose DF_L is the proportion of duty-free packs (both legal and illegal) in the littered sample and DF_R is the proportion of duty-free packs (all illegal) in the retailer sample. Then the estimated proportion of illegal duty-free packs in the littered sample would follow from these two quantities (the equation itself does not survive in this text; under the stated assumption, a consistent estimate is min(DF_L, DF_R), with any excess DF_L - DF_R attributed to legally purchased duty-free packs). A similar adjustment process would be followed when estimating the extent of illicit cigarettes using packs from the trash recycle sample. It should be noted here that self-made cigarettes using tax-paid tobacco powder or crushed tobacco are outside the scope of the current work. However, packs of a local category of cheap cigarette known as "birri" will be collected and examined for compliance with the study definition.

Statistical Analysis

STATA software version 15 will be used. The main statistical analysis will report the frequencies and proportions of cigarette packs that do not comply with the tobacco control laws and fall within the study definition of illicit. The packs collected from different sources will be kept in separate bags and analyzed separately. In addition, the features of the respective packs will be recorded in three different databases to produce separate estimates of illicit cigarettes. The estimates from the first two inter-related methods will be triangulated, complementing each other. We will then compare and validate these findings with the third method. Given the availability of the trash market in the study sites and an adequate volume of packs collected from it, the findings will be compared with the first two methods. We will assess the extent to which the trash market measure corresponds to the first two measures of illicit trade to gauge its validity. Findings from the pack analysis will be further triangulated with the retailer interview findings for a better understanding of the pattern of illicit practices in the cigarette market.

Discussion

Over the last two and a half decades, several significant advances have been made in studying illicit tobacco trade. This body of evidence has identified the following three key themes of particular relevance for research on illicit tobacco trade: (1) different ways to measure illicit tobacco trade; (2) distortion of the data by the tobacco industry; and (3) effectiveness of interventions to address the problem of illicit trade [23]. The empirical estimation of the extent of illicit trade in the tobacco market often remains an intricate and challenging task.
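A small sketch of this adjustment, under the same hedged reading of the missing equation (illegal duty-free litter proportion estimated as min(DF_L, DF_R)); the numbers are invented for illustration.

```python
def adjusted_illicit_share(littered_illicit, df_littered, df_retail):
    """Adjust the littered-sample illicit share for legally bought duty-free packs.

    littered_illicit: raw share of littered packs failing any criterion
    df_littered:      share of duty-free packs in the littered sample (DF_L)
    df_retail:        share of duty-free packs in the retailer sample (DF_R)

    Assumption (hedged reconstruction of the missing equation): litter mirrors
    retail sales, so only min(DF_L, DF_R) of the duty-free litter is illegal;
    the excess is treated as legal and removed from the raw illicit share.
    """
    legal_duty_free = max(df_littered - df_retail, 0.0)
    return max(littered_illicit - legal_duty_free, 0.0)

# Invented illustration: 20% of littered packs flagged illicit; 6% of litter
# is duty-free, but only 2% of retail packs are duty-free.
print(adjusted_illicit_share(0.20, 0.06, 0.02))  # 0.16
```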
Tobacco control researchers around the world often triangulate several methods to improve the precision of their estimates of the magnitude of illicit trade [24]. The volume of scientific literature is skewed towards high- and upper middle-income countries [1,[4][5][6][7]17,[25][26][27][28][29][30][31][32][33][34] relative to low- and lower middle-income countries [11,24,[35][36][37][38]. This is perhaps due to issues of data availability and research capacity. Compared to LMICs, the greater potential loss in government revenue due to illicit trade in high-income countries, owing to higher tax rates, results in higher public investment in related research. On the other hand, the dimensions of illicit trade in LMICs are different [1]. Although the financial incentive for illicit tobacco traders is lower in those countries, the risks associated with such activities are also lower due to weaker governance, which might encourage illicit traders to extend their activities there. Therefore, the importance of research on illicit trade in Bangladesh cannot be ignored. In addition, existing work prescribes following a country-specific approach to illicit trade, as a uniform ranking of research priorities in this arena appears to be impossible [1,34]. Taking the country context into account, this study will be a considerable contribution to the field of illicit tobacco trade research in Bangladesh. This study is being conducted as part of a research consortium of fifteen partner organizations across eight countries (Bangladesh, Ethiopia, Ghana, India, South Africa, the Gambia, Uganda, and the UK), funded by the Global Challenges Research Fund in the UK and named the "Tobacco Control Capacity Programme" [39]. We have established a national stakeholders' group in Bangladesh and arranged several stakeholder engagement meetings to get their feedback on the study protocol. The successful implementation of the protocol depends on authorized inspection of the collected packs and on recording data about the authenticity of the tax stamps. According to the local tobacco control laws, a tax-paid pack must carry the tax stamp, and we need to examine the security features of the tax stamp for study purposes. In practice, these tax stamps are in most cases affixed in the upper left- or right-hand corner of the outer part of a cigarette pack. Thus, the required tax stamp may be removed, lost, or torn when retailers open the pack or when smokers litter it in the streets and trash bins. Considering this potential threat, the gold standard would have been to collect unopened packs for tax and other compliance inspection [12]. This is not feasible in the current context due to funding constraints, a compromise that had to be made with respect to the sampling design. Hence, training on the eligibility criteria for an empty cigarette pack to be included in the inspection basket is of utmost importance. All packs whose tax stamps are not in a proper state to be checked for authenticity will be excluded from the study. Another important potential limitation of the proposed methodology is its ability to correctly estimate the size of the illicit market. Since the unit of observation from which packs will be collected is retail cigarette shops, selection bias might be present in the estimation. The estimate would be biased upward if pack collection took place in an area where illegal tobacco products are known to be traded.
In contrast, the estimate would be biased downward if retailers purposefully placed only licit cigarette packs in the collection bag. Moreover, given that illicit cigarettes are generally sold in full packs rather than as single sticks, collecting empty packs from retailers might underestimate the magnitude. Considering all these potential threats, we expect the study to be free from selection bias, as the retailer selection will be completely random, covering the urban-rural and border-non-border dimensions around the country. The validity and representativeness of the estimate measured using the packs collected from retailers will be assessed by comparing it with the estimates retrieved from littered packs from the streets and from packs collected from the trash recycle market in the same neighborhood. Estimates from the two latter sources will, to a great extent, address the limitations arising if retailers deposit only licit packs or if illegal cigarettes happen to be sold in full packs. The comparison of results and further analysis of illicit packs from these three sources will help us examine the existence of any systematic difference among them; the absence of a systematic difference would support the unbiasedness of the estimates [12]. Finally, in the absence of any prevalence data at the division or district level, it is not possible to propose a sample size weighted by local smoking prevalence or smoking intensity. Bangladesh has completed two rounds of the Global Adult Tobacco Survey (GATS). However, the first round does not have district-level or divisional-level information on smoking prevalence, while the data from the second round have not yet been released. Given the context of Bangladesh, the data challenges, together with the data quality required by other methods, increase the suitability of observational analysis. In addition, it will help researchers, and in turn policy makers, to learn about the pattern of illicit activities related to packaging compliance and to initiate appropriate policy measures. The estimates and analysis will also enhance our knowledge of how an important stakeholder in the supply chain (retailers) is legally used to reach smokers.

Conclusions

In this paper, we describe a protocol for a mixed-methods research study to estimate the extent of illicit cigarettes sold as retail in Bangladesh. This research aims to better understand the proportion of illicit cigarettes traded in the retail market, as well as the common types of tax avoidance and tax evasion in the country. It will also attempt to validate a new method for assessing illicit cigarette trade against the established methods. The study will potentially be of interest to a wide audience, including but not limited to academia, government and non-government organizations, and tobacco control and policy implementation researchers, particularly because independent estimates of illicit tobacco are rare in LMICs such as Bangladesh. Moreover, the findings will inform efforts to address the effects of illicit trade and counter tobacco industry claims. The strength of this research rests on two aspects: a methodological design that has been successfully applied in a country with similar characteristics, and a robust sampling design covering urban-rural and border-non-border geographical areas across the country.
While estimating the magnitude of illicit trade, there is established evidence for using packs collected directly from smokers. Considering the context and the required study design, we believe that this may be a future research option in the field of illicit trade research in Bangladesh. Moreover, the effectiveness of the different intervention tools put forward for controlling illicit trade, and consequently the country's revenue loss, would be another important avenue for future research.

Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Predicting the Severity of Acute Appendicitis in Children Using Neutrophil-to-Lymphocyte Ratio (NLR) and Platelet-to-Lymphocyte Ratio (PLR)

Introduction: The ability to predict risk of perforation in acute appendicitis (AA) could direct timely management and reduce morbidity. Platelet-to-lymphocyte ratio (PLR) and neutrophil-to-lymphocyte ratio (NLR) are surrogate severity markers in infections. This study investigates the use of PLR and NLR as markers for distinguishing uncomplicated (UA) and complicated appendicitis (CA) in children. Materials and methods: This retrospective single-center study collected data between January 1, 2014, and December 31, 2020. Children between five and 17 years of age with histologically confirmed appendicitis were included. Cut-off values for NLR and PLR were determined by employing the receiver operating characteristic (ROC) curve with sensitivity and specificity, in addition to regression analysis. Results: A total of 701 patients were included, with a median age of 13 years; 52% of the cohort was female. The difference in the NLR and PLR ratios between UA and CA was significant (p=0.05, Kruskal-Wallis). For UA, the area under the ROC curve (AUC) and cut-off for NLR and PLR were 0.741, 3.80 with 95% CI of 0.701-0.781 and 0.660, 149.25 with 95% CI of 0.618-0.703, respectively. For CA, using NLR and PLR, the AUC and cut-off were 0.776, 8.86 with 95% CI of 0.730-0.822 and 0.694, 193.67 with 95% CI of 0.634-0.755, respectively. All were significant with p<0.001. Conclusions: NLR and PLR are reliable, synergistic markers predicting complicated appendicitis, which can guide non-operative management in children.

Introduction

Acute appendicitis is among the most frequently encountered acute surgical presentations, with a lifetime risk of 7%, and is the most common surgical emergency in children [1][2][3][4][5]. Diagnosis of pediatric acute appendicitis is based upon clinical features, although these are frequently atypical, with radiological investigations reserved for selected cases [5]. Despite the established classical symptoms and signs of acute appendicitis, prompt diagnosis of complicated appendicitis is challenging [6]. The World Society of Emergency Surgery (WSES) 2020 guidelines incorporate gangrene, perforation, and abscess in their definition of complicated appendicitis [6]. The rates of perforated appendicitis vary between 16% and 40% overall and are higher in younger age groups, ranging between 40% and 57%, and are associated with diagnostic delay [7]. Thus, an inability to diagnose acute appendicitis immediately on presentation may result in increased morbidity and mortality. Numerous scoring methods have been developed to help in the quick clinical diagnosis and categorization of acute appendicitis, such as the Alvarado score, the pediatric appendicitis score (PAS), the appendicitis inflammatory response (AIR), and the Shera score [6,[8][9][10]. Although each of these scores differs in its clinical parameters, the overdiagnosis of appendicitis is 32% by PAS and 35% by the Alvarado score [11]. Furthermore, these scoring tools lack sensitivity and specificity in predicting the severity of acute appendicitis, although low scores can help categorize patients as low risk [6]. Thus, scoring tools are not routinely used or recommended by the WSES [6]. As a result, other parameters have been investigated to indicate the severity of appendicitis.
White cell counts (WCCs), absolute neutrophil counts (ANCs), and C-reactive protein (CRP) are raised in patients diagnosed with acute appendicitis [12]. Although an increase in WCC has no significant predictive value in distinguishing between uncomplicated and complicated appendicitis, the WSES advises using the absolute neutrophil count and CRP to predict the severity of inflammation [6,13,14]. Furthermore, raised blood bilirubin has been demonstrated to be a possible marker for a perforated appendix but lacks sufficient sensitivity and specificity, with elevated CRP superior to bilirubin in predicting perforated appendicitis [15,16]. Identifying a marker that predicts complicated appendicitis with good sensitivity and specificity can guide clinical decision-making. The neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) are simple, inexpensive markers of inflammation that are easily obtained [3]. Neutrophilia and lymphocytopenia are cellular response elements in systemic inflammation. The increase in the difference between neutrophil and lymphocyte counts reflects the severity of the inflammatory response. Therefore, NLR has been used for many years as a marker in many pathological conditions such as malignancies, chronic inflammatory diseases, and post-operative complications [17]. NLR and PLR provide information on immune and inflammatory pathways and have been studied as potential markers predicting the severity of appendicitis [17]. A recent meta-analysis demonstrated that NLR predicts both the diagnosis and the severity of appendicitis in adults [3]. This may have implications for prioritizing cases for surgery, monitoring conservatively treated patients, and for patients who do not routinely undergo CT scans, such as children. This study aimed to assess the ability of NLR and PLR to differentiate between simple and complicated appendicitis in children. We assess the sensitivity, specificity, and predictive value of NLR as a marker of severity in this age group.

Materials and Methods

Approval for the study was obtained from the Clinical Research and Audit Department of Russells Hall Hospital. A retrospective cohort study was conducted in the Department of Surgery at Russells Hall Hospital, Dudley, UK, with approval number GENSUR/CA/2020-21/22. All patients aged five to 17 years who underwent emergency appendicectomy for suspected acute appendicitis between 2014 and 2020 were identified from the hospital admission statistics (HAS) database. Included patients had to have available post-operative histology, either a radiological or intra-operative diagnosis of acute appendicitis, and to have received either conservative or surgical management for acute appendicitis. Where patients received surgical management of acute appendicitis, we included both open and laparoscopic appendicectomy. Patients diagnosed with a neoplasm of the appendix were not included in the study.

Outcomes

Our main outcome was the predictive capability of PLR/NLR in differentiating uncomplicated from complicated acute appendicitis. Secondary outcomes included evaluating the negative appendicectomy rate, presence of Enterobius vermicularis infestation, complications, and length of stay.

Data Collection and Analysis

From our records, two categories of patients were isolated: uncomplicated appendicitis and complicated appendicitis. Complicated appendicitis is defined as perforation of the appendix anywhere from the tip to the base, abscess formation, or a gangrenous appendix [6].
The data-gathering proforma captured patients' demographic data, white cell count (WCC), neutrophil count, lymphocyte count, platelet count, diagnostic modality, imaging findings (including ultrasound and computed tomography) if performed, post-operative complications using the Clavien-Dindo classification, duration of hospitalization, intra-operative details where operative management was performed, and histology. Data were extracted separately by two members of the research team. The team members resolved any inconsistencies by engaging in dialogue; in the event of an irreconcilable difference, a third independent member was approached. Statistical Package for the Social Sciences (SPSS) for Windows, version 25.0 (IBM Corp., Armonk, NY) was used to carry out the statistical analysis. The accuracy of NLR and PLR was characterized and compared using a receiver operating characteristic (ROC) curve. The area under the ROC curve (AUC) denoted accuracy in differentiating between complicated and uncomplicated appendicitis. Cut-off values were assessed for each biomarker, including the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) with 95% confidence intervals. A p-value of <0.05 was considered statistically significant.

Power Calculation

The available literature has shown an NLR of 4.7 to be a cut-off value for uncomplicated appendicitis and 8.8 for complicated appendicitis, with sensitivity and specificity approaching 90% [3]. Furthermore, PLR cut-offs of >140.45 and >163.27 for uncomplicated and complicated appendicitis, respectively, have been documented [17]. Therefore, to ensure a study power of 80% with a confidence interval (CI) of 95%, a total of 392 patients (196 patients in each group) was the calculated minimum study size.

Results

A total of 701 patients aged between five and 18 years had appendicectomies performed for suspected appendicitis between 2014 and 2020. The demographics of these patients are detailed in Table 1 (columns: Overall; Appendix non-inflamed; Uncomplicated appendicitis; Complicated appendicitis; the table body does not survive in this text). The overall median age was 13±3.4 years. The gender distribution was similar, with 339 males (48.4%) and 362 females (51.6%). The overall median length of stay following appendicectomy was three days (range: 1-22 days). Furthermore, the majority of these patients (78.9%) had laparoscopic appendicectomies performed, with an overall complication rate of 1.9%; 0.3% of these patients required further surgical intervention. Histology showed a 30% negative appendicectomy rate (i.e., those who had a normal appendix on histology), along with a 6.1% incidence of Enterobius vermicularis. Interestingly, no patients with Enterobius vermicularis had histological evidence of appendicitis. These findings are summarized in Table 2 (frequencies and percentages; the table body does not survive in this text). Biochemical markers supporting clinical decision-making, including white cell count (WCC), CRP, NLR, and PLR, were determined across categories of complexity (Table 3: mean and standard deviation (SD) of WCC, NLR, PLR, and CRP in patients with a histologically normal appendix, uncomplicated appendicitis, and complicated appendicitis). In addition, we determined the utility of the neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) in predicting uncomplicated versus complicated appendicitis.
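The ROC-based derivation of cut-offs used here can be sketched as follows. The paper does not state the criterion used to choose its cut-offs, so the commonly used Youden index is assumed, and the data are simulated, not the study cohort.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# Simulated data: 1 = complicated appendicitis, 0 = uncomplicated.
y = np.concatenate([np.ones(100), np.zeros(300)])
nlr = np.concatenate([rng.lognormal(2.3, 0.5, 100),   # higher NLR in complicated
                      rng.lognormal(1.6, 0.5, 300)])

fpr, tpr, thresholds = roc_curve(y, nlr)
best = np.argmax(tpr - fpr)          # Youden's J = sensitivity + specificity - 1
cutoff = thresholds[best]

print(f"AUC = {roc_auc_score(y, nlr):.3f}")
print(f"cut-off = {cutoff:.2f}, sensitivity = {tpr[best]:.2f}, "
      f"specificity = {1 - fpr[best]:.2f}")
```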
A curve estimation showed that NLR and PLR exhibit an exponential relationship (R²=0.478, p<0.001) (Figure 1: NLR and PLR exponential relationship). A Kruskal-Wallis test showed a statistically significant difference between complicated and uncomplicated appendicitis for both NLR and PLR (p<0.001). The ROC curves for NLR and PLR both had a statistically significant area under the ROC curve (AUC) for differentiating complicated and uncomplicated appendicitis (p<0.001) (Figure 2: NLR and PLR ROC curve for uncomplicated appendicitis; Figure 3: NLR and PLR ROC curve for complicated appendicitis). The cut-offs for uncomplicated appendicitis (NLR=3.80, PLR=149.25) and complicated appendicitis (NLR=8.86, PLR=193.67) were calculated from the ROC curves and used to measure inter-rater agreement between NLR and PLR. Cohen's and Fleiss's kappa were used to calculate agreement between NLR and PLR for uncomplicated and complicated appendicitis, respectively (Table 4). This showed weak-to-moderate agreement for uncomplicated appendicitis (0.574, p<0.001) and moderate agreement for complicated appendicitis (0.530, p<0.001). Furthermore, cross-tabulations of NLR, PLR, and CRP for sensitivity, specificity, positive predictive value, and negative predictive value, using the above cut-off values in determining the severity of appendicitis, are shown in Table 5.

Discussion

Our results show concordance between NLR and PLR in discriminating between uncomplicated appendicitis and complicated appendicitis in children. Based on our analysis, NLR has a sensitivity of 70.3% and a specificity of 70%, with a positive predictive value (PPV) of 84.6% and a negative predictive value (NPV) of 50.2% (Table 4). Moreover, PLR has a sensitivity of 64% and a specificity of 61%, with a PPV of 79.3% and an NPV of 42%. This was similar to the more established marker CRP, which has a sensitivity of 70%, specificity of 71.8%, PPV of 85.2%, and NPV of 50.9%. According to McGowan et al., CRP had a sensitivity of 46-94% and a specificity of 32-84% depending on the cut-off used, which ranged from >5 to >100 [18]. The PPV of CRP ranges from 16% to 25%, but it has an NPV of 91-97%. The PPV of all three parameters was most discriminating when predicting the severity of appendicitis in the pediatric population, i.e., patients with a CRP >111 mg/L, an NLR >14, and a PLR >280 were more likely to have complicated appendicitis. In addition, PLR and NLR showed concordance in their tendency to predict the severity of appendicitis in children, as shown statistically by the inter-rater agreement analysis. Our study shows a significant correlation between PLR and NLR, allowing for a moderate degree of agreement between simple and complicated appendicitis in pediatric patients. This demonstrates that when NLR and PLR are considered together, the reliability of diagnosing both complicated and uncomplicated appendicitis increases significantly. Thus, NLR and PLR can be used synergistically to identify more severe cases of acute appendicitis and serve as valuable tools in clinical decision-making for pediatric appendicitis.
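The inter-rater agreement step above can be reproduced in the same way. A minimal sketch, assuming the complicated-appendicitis cut-offs reported in the text and using simulated marker values rather than the study data:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)

# Simulated paired marker values for 400 hypothetical patients;
# PLR is constructed to be loosely correlated with NLR.
nlr = rng.lognormal(1.9, 0.6, 400)
plr = nlr ** 0.4 * rng.lognormal(4.6, 0.3, 400)

# Dichotomize each marker at the complicated-appendicitis cut-offs from the text.
nlr_flag = nlr > 8.86
plr_flag = plr > 193.67

kappa = cohen_kappa_score(nlr_flag, plr_flag)
print(f"Cohen's kappa between NLR and PLR classifications: {kappa:.2f}")
```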
These markers (NLR and PLR) can both be easily calculated from the complete blood count (CBC) differential and are thus inexpensive. This has practical implications for ease of use and avoids reliance on more complex scoring systems, which usually require reference to online calculators or text-based scoring systems [6,8,9]. However, sensitivity and specificity can differ significantly depending on when the test is conducted: tests carried out within hours of the onset of symptoms may be normal, only to become abnormal in subsequent hours [6,19]. Considering the variability in the timing of presentation to secondary care with respect to the onset of symptoms, it is paramount to interpret these tests in the context of the clinical presentation. While we recognize this limitation of our study, we believe that a pragmatic approach to the interpretation of results is a vital part of medical practice, and further stratifying patients by their onset of symptoms, which in itself can be extremely unreliable, would limit the practical applicability of NLR and PLR in children with acute appendicitis. Our study provides further validation of previously published data evaluating the utility of NLR and PLR in predicting appendicitis severity. In the meta-analysis by Hajibandeh et al., NLR >4.7 and >8.8 were independent predictors of uncomplicated and complicated appendicitis, respectively [3]. This corresponds with our NLR cut-off value of 8.86 for complicated appendicitis. In addition, Pehlivanlı and Aydin reported that a PLR >140.45 has a sensitivity of 71.4% and a specificity of 88.9% for distinguishing between appendicitis and a normal appendix, whereas a PLR >163.27 has a sensitivity of 64.3% and a specificity of 67.5% for differentiating complicated from uncomplicated appendicitis [17]. However, our data suggest a much higher cut-off value of >193.67, with a sensitivity of 64% and a specificity of 61%. The level and unit of measurement of PLR in the literature vary due to heterogeneity between study populations, in addition to geographical and ethnic differences. In a recent meta-analysis, Liu et al. used the standardized mean difference to account for this and demonstrated a significant increase in PLR level in simple appendicitis compared to controls (standardized mean difference (SMD): 1.23, 95% CI: 0.88-1.59), but were unable to demonstrate this effect when differentiating between simple and complicated appendicitis (SMD: 2.28, 95% CI: -1.72-6.28) [20]. Thus, although a raised PLR may indicate the severity of appendicitis, this parameter should be interpreted with caution in isolation. These results also argue for the development of more nuanced clinical scoring systems with greater sensitivity and specificity than established scores that have not gained widespread use. In the setting of a busy emergency surgical on-call, NLR and PLR can help in the prioritization of patients for surgery. Moreover, there has been an increasing trend toward non-operative management of patients [21][22][23]. The risk of recurrent appendicitis requiring operative intervention in adult patients managed non-operatively is 30% within 90 days [23], whereas up to 25% of children require appendicectomy at one year, with a higher risk of complicated appendicitis [22,24]. Nonetheless, non-operative management can be considered in selected cases where complicated appendicitis is not expected.
Conversely, in cases of operative management, cases suited to training can be identified on the basis of pre-operative risk stratification to determine complexity. Finally, negative appendicectomies remain a major problem, particularly in practice in the United Kingdom (UK). In our study, the negative appendicectomy rate in children was relatively high at 30%, which is higher than the 15.9% reported in the UK national RIFT audit [9]. Similar to the findings of the right iliac fossa pain treatment (RIFT) audit, young females with concomitant gynecological pathologies accounted for the majority of negative appendicectomies. In our practice, if no other significant explanatory pathology is found during a diagnostic laparoscopy, we advise removal of a macroscopically normal appendix. This practice may be further justified by the high incidence of Enterobius vermicularis found in children, which may explain their presentation. Increasingly, CT scans are being used to investigate acute right iliac fossa pain and to diagnose acute appendicitis in adults [9,25,26]. While this remains routine practice in many countries, its use has been limited in the UK, with greater reliance on clinical diagnosis. Furthermore, the ionizing radiation risk of CT scans is difficult to justify for children and pregnant women. Despite the low morbidity associated with negative appendicectomy, NLR and PLR could be used in the future to identify children with acute appendicitis prior to surgery where a CT scan cannot be performed. Our study, being a single-center retrospective cohort study, cannot entirely eliminate the possibility of confounding factors, including the time of testing and machine calibration. Despite the nuances of UK practice, our findings in pediatric practice may have widespread application. A further multicentre prospective study would increase the applicability of, and validate, these scoring measures.

Conclusions

NLR and PLR are promising markers that can predict both the diagnosis and the severity (uncomplicated vs complicated) of appendicitis in children, and when interpreted together they could have acceptable sensitivity and specificity. Our data support the use of NLR and PLR to risk-stratify children with clinically confirmed appendicitis in resource-constrained environments with limited theatre facilities or where repeat imaging is not readily available. They can also be used to monitor pediatric patients with appendicitis who are being treated non-operatively, or to support the diagnosis in patients where CT imaging is not justified, such as children. Further studies are required to assess whether combining NLR and PLR with other markers such as CRP would result in better predictive value.

Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
The Chemical Composition of the 30 Doradus Nebula Derived from VLT Echelle Spectrophotometry

Echelle spectrophotometry of the 30 Doradus nebula in the LMC is presented. The data consist of VLT UVES observations in the 3100 to 10350 Å range. The intensities of 366 emission lines have been measured, including 269 identified permitted lines of H0, He0, C0, C+, N+, N++, O0, O+, Ne0, Ne+, S+, S++, Si0, Si+, Si++, Ar+, and Mg+. Electron temperatures and densities have been determined using different line intensity ratios. The He+, C++, O+, and O++ ionic abundances have been derived from recombination lines; these abundances are almost independent of the temperature structure of the nebula. Alternatively, abundances from collisionally excited lines have been derived for a large number of ions of different elements; these abundances depend strongly on the temperature structure. Accurate t^2 values have been derived from the Balmer continuum and by comparing the C++, O+, and O++ ionic abundances obtained from collisionally excited and recombination lines. The chemical composition of 30 Doradus is compared with those of Galactic and extragalactic H II regions. The values of ΔY/ΔO, ΔY/ΔZ, and Yp are also discussed.

Introduction

The determination of the chemical composition of H II regions has been paramount for the study of the chemical evolution of galaxies and for the determination of the primordial helium abundance, Yp. In recent times, the determination of atomic data of higher accuracy and the detection of fainter emission lines with the use of echelle spectrophotometry have made it possible to derive more accurate physical conditions for Galactic H II regions (e.g., Esteban et al. 1998, 1999a). For these reasons it was decided to carry out echelle spectrophotometry of 30 Doradus. Due to its proximity, its large angular size, and its high surface brightness, 30 Doradus, NGC 2070, is the most spectacular extragalactic H II region and thus has been the subject of many spectrophotometric studies (e.g., Peimbert & Torres-Peimbert 1974; Aller et al. 1974; Dufour 1975; Pagel et al. 1978; Boeshaar et al. 1980; Dufour, Shields, & Talbot 1982; Shaver et al. 1983; Mathis, Chu, & Peterson 1985; Rosa & Mathis 1987; Vermeij & van der Hulst 2002; Tsamis et al. 2002). The main aim of this paper is to make a new determination of the chemical abundances of 30 Doradus, including the following improvements over previous determinations: the consideration of the temperature structure that affects the helium and heavy-element abundance determinations; the derivation of the O and C abundances from recombination line intensities of very high accuracy; the consideration of the collisional excitation of the triplet He I lines from the 2^3S level by determining the electron density from many line intensity ratios; and the study of the 2^3S level optical depth effects on the intensity of the triplet lines by observing a large number of singlet and triplet He I lines. In sections 2 and 3 the observations and the reduction procedure are described. In section 4, temperatures and densities are derived from eight and six different methods, respectively; also in this section, four independent values of the mean square temperature fluctuation, t^2, are determined by combining the electron temperatures.
In section 5 ionic abundances are determined based on recombination lines, which are almost independent of the temperature structure, and on ratios of collisionally excited lines to recombination lines, which do depend on the temperature structure of the nebula. In section 6 the total abundances are determined and compared with those of NGC 346 (the most luminous H II region in the SMC), the Orion nebula, M17, and the Sun. In section 7 ΔY/ΔO and ΔY/ΔZ are determined; these ratios are important restrictions for the study of the chemical evolution of galaxies and for the determination of the primordial helium abundance, Y_p. Also in section 7, Y_p is determined based on the abundances of 30 Dor and a value of ΔY/ΔO derived from the observations of other objects and from chemical evolution models of irregular galaxies.

Observations

The observations were obtained with the Ultraviolet Visual Echelle Spectrograph, UVES (D'Odorico et al. 2000), at the VLT Kueyen Telescope in Chile. We observed simultaneously with the red and blue arms in two settings, covering the region from 3100 Å to 10360 Å (see Table 1). The wavelength regions 5783-5830 Å and 8540-8650 Å were not observed due to the separation between the two CCDs used in the red arm. There were also two small gaps that were not observed, 10084-10088 Å and 10252-10259 Å, because the two redmost orders did not fit within the CCD. In addition to the long exposure spectra we took 60 second exposures in the four observed wavelength ranges to check for possible saturation effects.

The slit was oriented east-west, and the atmospheric dispersion corrector (ADC) was used to keep the same observed region within the slit regardless of the air mass value. The slit width was set to 3.0" and the slit length was set to 10" in the blue arm and to 12" in the red arm; the slit width was chosen to maximize the S/N ratio of the emission lines while maintaining the resolution required to separate most of the weak lines needed for this project. The FWHM resolution for the 30 Dor lines at a given wavelength is given by Δλ ∼ λ/8800. The reductions were made for an area of 3" × 10" located 64" N and 60" E of HD 38268; these coordinates are similar to those of 30 Dor II, the position observed by Peimbert & Torres-Peimbert (1974), but the area observed by us is considerably smaller. The slit was placed in a region of high emission measure that does not include luminous stars. The spectra were reduced using the IRAF echelle reduction package, following the standard procedure of bias subtraction, aperture extraction, flatfielding, wavelength calibration, and flux calibration. For flux calibration the standard star EG 247 was observed.

Line Intensities and Reddening Correction

Line intensities were measured by integrating all the flux in the line between two given limits and over a local continuum estimated by eye. In the few cases of line blending, the line flux of each individual line was derived from a multiple Voigt profile fit procedure. All these measurements were carried out with the splot task of the IRAF package. An initial reddening coefficient, C(Hβ), was determined by fitting the observed I(Hβ)/I(H Balmer lines) ratios (with the exception of Balmer α) to the theoretical ones computed by Storey & Hummer (1995) for T_e = 9,000 K and N_e = 300 cm^-3 (see below), assuming the extinction law of Seaton (1979). With this initial fit the observed I(Hα)/I(Hβ) and I(Paschen lines)/I(Hβ) ratios become larger than predicted.
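To make the two steps of this correction concrete, the following minimal Python sketch illustrates (1) how C(Hβ) follows from a single observed-to-theoretical Balmer ratio and (2) how a foreground dust screen combines with dust mixed into the gas, the decomposition adopted in the next paragraph. The theoretical Hα/Hβ ratio, the Seaton-law value f(Hα) ≈ -0.32, and the observed ratio used here are illustrative assumptions, not values taken from this paper.

```python
import math

def c_hbeta(obs_ratio, theo_ratio, f_lambda):
    """Reddening coefficient from one Balmer ratio, using
    F_obs(lam)/F_obs(Hbeta) = [I(lam)/I(Hbeta)] * 10**(-C * f(lam)),
    with f(Hbeta) = 0 by construction."""
    return math.log10(obs_ratio / theo_ratio) / (-f_lambda)

# Illustrative numbers: theoretical Halpha/Hbeta ~ 2.86 for typical
# nebular conditions; f(Halpha) ~ -0.32 for a Seaton-like law.
print(c_hbeta(4.00, 2.86, -0.32))   # ~0.46 dex

def effective_c(c_front, c_mixed):
    """Effective C(Hbeta) when part of the dust is a foreground screen
    (transmission 10**-c_front) and part is uniformly mixed with the
    emitting gas (mean transmission (1 - 10**-c) / (c * ln 10))."""
    t_front = 10.0 ** (-c_front)
    t_mixed = (1.0 - 10.0 ** (-c_mixed)) / (c_mixed * math.log(10.0))
    return -math.log10(t_front * t_mixed)

# With the front (0.31 dex) and mixed (1.70 dex) components adopted
# in the text, this reproduces an effective C(Hbeta) of ~0.91 dex,
# consistent with the quoted 0.92 dex.
print(effective_c(0.31, 1.70))
```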
A similar situation prevails in the Orion Nebula (e.g., Costero & Peimbert 1970; Cardelli & Clayton 1988; Greve, Castles, & McKeith 1994). A simultaneous fit to all the H lines was obtained by adopting the following reddening law: it was assumed that the dust distribution is well represented by Seaton's reddening law, but that part of the dust is in front of the nebula and part is well mixed with the gas. A good fit to all the Paschen and Balmer lines is obtained by adopting a C(Hβ) in front of the nebula of 0.31 dex and a C(Hβ) from the front to the back of the H II region of 1.70 dex. The effective C(Hβ) amounts to 0.92 dex and the effective R value to 4.2 (where R = A_V/E(B − V)), while Seaton's law provides an R value of 3.2 for material in front of the source. There are other assumptions that can produce the same or a similar extinction law. For example, if all the extinction occurs in front of the nebula but is not uniform across the observing slit, it is also possible to obtain effective R values higher than 3.2 without changing the grain properties. It is also possible to have all the extinction between us and the H II region caused by grains with a size distribution different from that implied by Seaton's law. Consequently the adopted extinction law is empirical, and it is not the purpose of this paper to estimate the relative proportion of the different causes that produce an R value higher than 3.2.

Since we avoided the brightest stars, the stellar continuum is very weak and the equivalent width of Hβ in emission amounts to 734 Å; consequently the correction due to underlying absorption of the H and He lines is negligible and was not taken into account.

Table 2 presents the emission line intensities of 30 Dor. The first and second columns include the observed wavelength in the framework of the solar standard of rest, λ, and the adopted laboratory wavelength, λ_0. The third and fourth columns include the ion and the multiplet number, or the Balmer, B, or Paschen, P, transitions for each line. The fifth and sixth columns include the observed flux relative to Hβ, F(λ), and the flux corrected for reddening relative to Hβ, I(λ). The seventh column includes the fractional error (1σ) in the line intensities.

Two of the lines presented in Table 2 correspond to a nebular component in emission that originates in our Galaxy and that is not physically associated with 30 Dor. This component has an emission measure in Hα four orders of magnitude smaller than that of 30 Dor in the observed line of sight. The Hβ line of this component was also detected. Both Balmer lines are blueshifted relative to 30 Dor by 252 km sec^-1. This nebular Galactic component is not present at other wavelengths and does not affect the line intensities presented in Table 2, with the possible exception of 4098.24 O II, which might be blended with the Galactic Hδ line.

Temperatures and Densities

The temperatures and densities presented in Table 3 were derived from the line intensities presented in Table 2. Most of the determinations were carried out based on the IRAF subroutines. The contribution to the intensities of the λλ 7319, 7320, 7331, and 7332 [O II] lines due to recombination was taken into account based on the following equation:

I_R(7320 + 7330)/I(Hβ) = 9.36 (T/10^4 K)^0.44 × O++/H+

(see Liu et al. 2000). The [Cl III] temperature was obtained from the I(8500)/I(5518 + 5538) ratio and the computations by Keenan et al. (2000). The Balmer continuum temperature was determined from the following equation (see Liu et al. 2001):
T_e(Bac) = 368 × (1 + 0.259 y+ + 3.409 y++) (Bac/H11)^(-3/2) K,

where Bac/H11 is in Å^-1, y+ = He+/H+, and y++ = He++/H+. Figure 1 presents the region near the Balmer limit, where the Balmer continuum can be easily appreciated. The [Fe III] density was derived from the I(4986)/I(4658) ratio and the computations by Keenan et al. (2001). In Table 4 I compare the observed I(λ_nm)/I(4658) ratios for a set of [Fe III] lines with the ratios predicted by Keenan et al. (2001) for N_e = 316 cm^-3 and T_e = 10,000 K. From Table 4 it can be seen that the observed and predicted line ratios are in excellent agreement, since the differences between them are of the order of their estimated errors.

Temperature variations

To derive the ionic abundance ratios the average temperature, T_0, and the mean square temperature fluctuation, t^2, were used. These quantities are given by

T_0 = ∫ T_e N_e N_i dV / ∫ N_e N_i dV

and

t^2 = ∫ (T_e − T_0)^2 N_e N_i dV / (T_0^2 ∫ N_e N_i dV),

respectively, where N_e and N_i are the electron and the ion densities of the observed emission line and V is the observed volume (Peimbert 1967). The H I and He I lines originate in both the high and the low degree of ionization zones and will be represented by T_0 and t^2. I will divide the other lines into two groups: those that originate mainly in the high degree of ionization zones (C++, O++, Ne++, S++, Cl++, Cl+3, Ar++, and Ar+3), and those that originate mainly in the low degree of ionization zones (N+, O+, S+, Cl+, and Fe++). These two groups will be represented by T_0,H, T_0,L, t^2_H, and t^2_L. I will also assume that t^2_H = t^2_L ≈ t^2.

To determine T_0, T_0,H, T_0,L, and t^2 we need at least three independent T_e determinations that weigh differently the high and low temperature zones, together with the fraction of the emissivity originating in the high and low ionization zones, which for these observations corresponds to 85% and 15%, respectively (see below). For example, it is possible to combine the temperature derived from the ratio of the [O III] λλ 4363, 5007 lines, T(4363/5007), the temperature derived from the ratio of the [N II] λλ 5755, 6584 lines, T(5755/6584), and the temperature derived from the ratio of the Balmer continuum to I(Hβ), T(Bac/Hβ). I estimate that 85% of the Hβ emissivity comes from the regions of high degree of ionization and 15% from the regions of low degree of ionization. By combining the forbidden line temperatures, T(FL), with the T(Bac/Hβ) temperature I obtain the t^2 value presented in Table 5.

Under the assumption that T is constant along the line of sight it is possible to derive the abundance of a given ion from a collisionally excited line of an element p times ionized or from a recombination line of the same element p − 1 times ionized. In many objects the abundances derived from the recombination lines are higher than those derived from the collisionally excited lines, possibly indicating the presence of temperature variations along the line of sight. Requiring that the abundances derived from the recombination lines and the collisionally excited lines be the same, it is possible to derive from the ratio of a collisionally excited line to a recombination line a function of T_0,H and t^2_H, or of T_0,L and t^2_L. By combining this relation, for lines that originate mainly in regions of high degree of ionization, with a temperature determined from the ratio of two collisionally excited lines, like T(4363/5007), it is also possible to derive T_0,H and t^2_H.
These relations were combined with the temperatures derived from the ratios of forbidden lines for the low degree of ionization zones in the first case, and with the temperatures derived from the ratios of forbidden lines for the high degree of ionization zones in the second and third cases; the resulting values are presented in Table 5.

Helium ionic abundances

To obtain He+/H+ values we need a set of effective recombination coefficients for the He and H lines, the contribution due to collisional excitation to the helium line intensities, and an estimate of the optical depth effects for the helium lines. The recombination coefficients used were those by Storey & Hummer (1995) for H, and those by Smits (1996) and Benjamin, Skillman, & Smits (1999) for He. The collisional contribution was estimated from Sawey & Berrington (1993) and Kingdon & Ferland (1995). The optical depth effects in the triplet lines were estimated from the computations by Benjamin, Skillman, & Smits (2002).

Table 6 presents the He+/H+ values obtained from the eleven best observed helium lines for t^2 = 0.033. Considering the observational errors, for no value of τ_3889 is it possible to find agreement among the He+/H+ values derived from the six triplet helium lines. The four lines most sensitive to τ_3889 are λλ 3188, 3889, 4713, and 7065; two of them, λλ 4713 and 7065, increase in intensity with increasing τ_3889, and two of them, λλ 3188 and 3889, decrease. Two acceptable solutions for τ_3889 have been obtained. By excluding λλ 3188 and 3889, τ_3889 = 4.4 is found; alternatively, by excluding λλ 4713 and 7065, τ_3889 = 10.5 is found. This result indicates that the computations for spherical geometry by Benjamin et al. (2002) do not apply to the observed region of 30 Dor. This region is part of a bright shell, probably longer along the line of sight than in the plane of the sky, and is obviously a non-spherical object. A fraction of the np − 2s photons is converted into λλ 4713, 7065, 4471, and 5876 photons; due to geometrical effects, a smaller fraction than that expected in the spherically symmetric case is sent in our direction. I consider it a good approximation to assume that the increase in the λλ 4713, 7065, 4471, and 5876 line intensities corresponds to that predicted by the same τ_3889 value and that the decrease in the λλ 3188 and 3889 lines corresponds to another τ_3889 value. Consequently, by averaging the values of the nine helium lines for τ_3889 = 4.4 (excluding λλ 3188 and 3889), a value of He+/H+ = 0.08470 ± 0.00068 is obtained. The five singlet lines, which are not affected by the τ_3889 effect, yield He+/H+ = 0.08448 ± 0.00099, in excellent agreement with the value derived from the nine helium lines and the discussion presented above.

From the observed spectra it is found that the I(4686)/I(Hβ) value is smaller than 3.5 × 10^-5, which together with the recombination coefficients by Brocklehurst (1971) implies that N(He++)/N(H+) is smaller than 2.9 × 10^-6.

C and O ionic abundances from recombination lines

The C++ abundance was derived from the λ4267 Å line of C II and the effective recombination coefficients computed by Davey, Storey, & Kisielius (2000) for Case A and T = 10,000 K. The O+ abundance was derived from the λλ 7771.96 and 7775.40 lines of O I and the effective recombination coefficient for the multiplet computed by Péquignot, Petitjean, & Boisson (1991).
The third line of the multiplet, λ7774.18, was partially blended with a telluric line in emission; consequently it was not possible to measure its intensity, and it was assumed that the three lines of the multiplet are in LS coupling, so that I(7774.18) = I(7771.96 + 7775.40)/2. The O++ abundance was derived from the eight lines of multiplet 1 of O II (see Figure 2) together with the effective recombination coefficient for the multiplet computed by Storey (1994) under the assumption of Case B for T_e = 10,000 K and N_e = 300 cm^-3. The result is almost independent of the case assumed; the difference in the O++/H+ value between Case A and Case B is smaller than 4%. It was found that the O II lines of multiplet 1 are in Case B based on the observed intensities of multiplets 19, 2, and 28 of O II, which are strongly case sensitive; Peimbert, Storey, & Torres-Peimbert (1993) also found that the O II lines in the Orion nebula are in Case B. The line intensity ratios within multiplet 1 do not follow the LS coupling predictions. Figure 2 provides an excellent visual reference to estimate the quality of the data; it includes two lines four orders of magnitude fainter than Hβ and also shows that lines separated by 2 Å are completely resolved.

Ionic abundances from collisionally excited lines

With the exception of C++/H+ and Fe++/H+, all the other values presented in Table 8 for t^2 = 0.00 were derived with the IRAF task abund, using only the low- and medium-ionization zones. The low and medium ionization zones of IRAF correspond to the low and high ionization zones of this paper. The C++/H+ value for t^2 = 0.00 was derived from the collisionally excited lines of C III λλ 1906 and 1909 by Dufour et al. (1982) and Garnett et al. (1995). I consider this procedure valid because the O degree of ionization derived here is in excellent agreement with theirs (O++/O equal to 85% and 83%, respectively). The Fe++/H+ value for t^2 = 0.00 was derived from the atomic data by Nahar & Pradhan (1996) and Zhang (1996). I did not determine the Fe+/H+ abundance because the observed [Fe II] lines are produced by collisions and by non-negligible radiative processes that are difficult to estimate (Rodríguez 1999). To derive the abundances for t^2_H = t^2_L ≈ t^2 = 0.033 I used the abundances for t^2 = 0.00 and the formulation for t^2 > 0.00 presented by Peimbert & Costero (1969). To derive abundances for other t^2 values it is possible to interpolate or to extrapolate the values presented in Table 8.

Total Abundances

Table 9 presents the total abundances of 30 Doradus for t^2_L = t^2_H ≈ t^2 = 0.033. To derive the total gaseous abundances the set of equations presented below was used, where the ionization correction factors, ICFs, correct for the unseen ionization stages. The total He/H value is given by:

N(He)/N(H) = ICF(He) × [N(He+) + N(He++)]/N(H+).

The He++/H+ ratio is completely negligible (see previous section). In objects of low degree of ionization the presence of neutral helium inside the H II region is important and ICF(He) becomes larger than 1. To study this problem Vílchez & Pagel (1988; see also Pagel et al. 1992) defined a radiation softness parameter given by

ζ = (O+/O++)/(S+/S++);

for large values of ζ the amount of neutral helium is significant, while for low values of ζ it is negligible, the critical value being around 8. From the previous equation and the values in Table 8 it is found that ζ = 2.82, which indicates that the amount of He 0 inside the H+ region is negligible.
On the other hand, for ionization-bounded objects of very high degree of ionization the amount of H 0 inside the He+ Strömgren sphere becomes significant and the ICF(He) can become smaller than 1. This possibility was first mentioned by Shields (1974) and studied extensively by Armour et al. (1999), Viegas, Gruenwald, & Steigman (2000), Ballantyne, Ferland, & Martin (2000), and Sauer & Jedamzik (2002). According to Ballantyne et al. (2000), for [O III]λ5007/[O I]λ6300 ≥ 300 the ICF(He) becomes very close to unity; from Table 2 it is found that I(5007)/I(6300) = 548. Consequently I conclude that the amount of H 0 inside the observed H II region is negligible, and in what follows I will adopt an ICF(He) = 1.000.

The gaseous abundances for O, N, and Ne were obtained from the following equations (Peimbert & Costero 1969):

N(O)/N(H) = [N(O+) + N(O++)]/N(H+),
N(N)/N(H) = ICF(N) × N(N+)/N(H+), with ICF(N) = N(O)/N(O+),
N(Ne)/N(H) = ICF(Ne) × N(Ne++)/N(H+), with ICF(Ne) = N(O)/N(O++).

To obtain the total O/H gaseous abundance the O+/H+ and O++/H+ values presented in Tables 7 and 8 were weighed according to their observational errors. To obtain the total O abundance a correction of 0.08 dex was adopted to take into account the fraction of O tied up in dust grains; this fraction was estimated from the Mg/O, Si/O, and Fe/O values derived for the Orion nebula (Esteban et al. 1998). To obtain the C gaseous abundance the following equation was adopted:

N(C)/N(H) = ICF(C) × N(C++)/N(H+),

where the C++/H+ gaseous abundance was obtained by weighing the C++/H+ values presented in Tables 7 and 8 according to their observational errors, and the ICF(C) value was obtained from Garnett et al. (1995). Following Esteban et al. (1998), 0.10 dex was added to the total C/H gaseous value to take into account the fraction of C atoms embedded in dust grains. The gaseous abundances of S, Cl, Ar, and Fe were obtained from analogous ICF-based equations. The ICF(S) value was estimated from the models by Garnett (1989). The ICF(Fe) value was estimated from the models of NGC 346 by Relaño, Peimbert, & Beckman (2002) and amounts to 8 ± 2. The ICF(Ar) includes the Ar+/H+ contribution and according to Liu et al. (2000) can be approximated by 7.

Discussion and Conclusions

7.1. The H II regions and the solar abundances

Table 10 presents the abundances of 30 Doradus together with those of NGC 346, the Orion nebula, M17, and the Sun (Relaño et al. 2002). The H II region abundances have been obtained adopting values of t^2 larger than 0.00. Further arguments in favor of t^2 > 0.00 have been presented elsewhere (Peimbert 2002; Peimbert & Peimbert 2002a,b). Table 10 also presents the solar photospheric values for C, N, O, Ne, and Ar, and the solar abundances derived from meteoritic data for S, Cl, and Fe. For the solar initial helium abundance the Y_0 by Christensen-Dalsgaard (1998) was adopted, and not the photospheric one, because the latter apparently has been affected by settling.

The H II region values for Ne/O, S/O, and Ar/O are in excellent agreement with the solar values, which implies that the production of these elements is primary and due to massive stars, and that the assumptions involved in the two types of abundance determinations are sound. The H II region S/O and Cl/O abundances are in better agreement with the solar meteoritic abundances than with the photospheric ones; the solar photospheric values are S/O = -1.38 ± 0.11 dex and Cl/H = -3.21 ± 0.3 dex. The 30 Doradus C/O value is intermediate between that of NGC 346 and those of Orion, M17, and the Sun. The differences are significant and imply that, even if C is of primary origin, part of it is due to intermediate mass stars and part to massive stars, and that the C yield increases with the O/H ratio (Garnett et al. 1995, 1999; Carigi 2002).
The 30 Doradus N/O value is intermediate between that of NGC 346 and those of Orion, M17, and the Sun. The differences are significant and imply that part of the N is of primary origin and part of secondary origin (see Henry, Edmunds, & Koppen 2000, and references therein). The accuracies of the He/H abundances of 30 Doradus, NGC 346, and M17 are higher than that of the Orion nebula because the first three objects have ICFs(He) = 1.00, while the ICF(He) for the Orion nebula is larger than 1.00 and is not well determined. Similarly, the values for 30 Doradus, NGC 346, and M17 are more accurate than the solar one because they are based on direct determinations, while the solar value is obtained from models that depend on the helium abundance in a more complex way.

7.2. The ∆Y/∆O and ∆Y/∆Z ratios and chemical evolution

The determination of the ∆Y/∆O and ∆Y/∆Z ratios is crucial for the determination of Y_p and for constraining the models of galactic chemical evolution. The abundance determinations of 30 Doradus are based on emission line intensities of high quality, take into account the temperature structure of the nebula, and include a very accurate He/H value, because the degree of ionization of 30 Doradus is relatively high, implying an ICF(He) very close to unity. To determine the hydrogen, helium, heavy element, and oxygen abundances by mass, X, Y, Z, and O, presented in Table 10, I proceeded as follows: for the Sun I adopted the initial helium abundance by mass (Y_0) of Christensen-Dalsgaard (1998), the Z/X value derived from the C/H, N/H, O/H, Ne/H, and Ar/H values presented in Table 10, and 0.56 × O/X for the ratio of the rest of the heavy elements to hydrogen, a value obtained from the meteoritic abundances by Grevesse & Sauval (1998); for the H II regions I adopted the He/H value determined by the different observers, the C/H, N/H, O/H, Ne/H, and Ar/H values presented in Table 10, and 0.56 × O/X for the rest of the heavy elements.

From Table 11 it can be seen that the spread among the ∆Y/∆O and ∆Y/∆Z values derived from the H II regions is smaller for the t^2 > 0.00 results than for the t^2 = 0.00 results, which is consistent with the idea that the t^2 > 0.00 values are better. Moreover, the theoretical computations for the chemical evolution of irregular galaxies and for the solar vicinity predict values of ∆Y/∆O in the 2.9 to 4.6 range, with a representative value of ∆Y/∆O = 3.5 ± 0.9 (see Carigi et al. 1995; Carigi, Colín, & Peimbert 1999; Carigi 2000; Chiappini, Matteucci, & Gratton 1997), again in better agreement with the results from H II regions under the assumption that t^2 > 0.00.

7.3. The primordial helium abundance

To determine the Y_p value from 30 Doradus it is necessary to estimate the fraction of helium present in the interstellar medium produced by galactic chemical evolution. For this purpose it was assumed that

Y_p = Y(30 Dor) − (∆Y/∆O) × O(30 Dor).

As in section 7.2, a ∆Y/∆O = 3.5 ± 0.9 will be adopted, which together with the Y(30 Dor) and Z(30 Dor) values of Table 10 yields Y_p = 0.2345 ± 0.0047, where most of the error comes from the adopted ∆Y/∆O value. This Y_p value is in excellent agreement with the value derived from NGC 346. This agreement is due to the similarity between the adopted ∆Y/∆O value and that derived from 30 Doradus and NGC 346.
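Since Y_p is obtained here by a linear extrapolation, the quoted uncertainty follows from standard error propagation. The LaTeX fragment below is a minimal sketch of that step, assuming the errors on Y, O, and ∆Y/∆O are independent; it is an illustration, not this paper's exact error budget.

```latex
\[
Y_p \;=\; Y \;-\; \frac{\Delta Y}{\Delta O}\,O ,
\qquad
\sigma^2(Y_p) \;\simeq\; \sigma^2(Y)
 \;+\; O^2\,\sigma^2_{\Delta Y/\Delta O}
 \;+\; \left(\frac{\Delta Y}{\Delta O}\right)^{\!2}\sigma^2(O).
\]
```

With σ_{∆Y/∆O} = 0.9 and an oxygen abundance by mass of order a few times 10^-3, the middle term alone contributes a few times 10^-3 to σ(Y_p), consistent with the statement that most of the quoted ±0.0047 comes from the adopted ∆Y/∆O value.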
I am grateful to María Teresa Ruiz and Manuel Peimbert for carrying out the observations presented in this paper and for fruitful discussions, to César Esteban for his collaboration in the initial stages of this project, and to Anabel Arrieta for her assistance in the reduction process. It is also a pleasure to thank the Departamento de Astronomía de la Universidad de Chile for its hospitality during a visit where part of this work was made.

Notes to the tables: (a) Line intensity ratios 100 × I(λ_nm)/I(4658), for T_e = 10,000 K and N_e = 316 cm^-3. (b) These lines require a higher τ_3889 value to be consistent with the helium abundance and are not included in the adopted average; see text. (c) These lines require a lower τ_3889 value to be consistent with the helium abundance; see text. (d) Includes the effects of the uncertainty in t^2 = 0.033 ± 0.005. Abundances are in units of 12 + log N(X)/N(H) and refer to the gaseous content, with the exception of O and C, where 0.08 dex and 0.10 dex have been added, respectively, to include the fraction of these elements trapped in dust grains (Esteban et al. 1998); values for He, C++, O+, and O++ are derived from recombination lines, and all the other values are based on collisionally excited lines. Comparison values: this paper, t^2 = 0.033; Dufour et al. (1982), Peimbert, Peimbert, & Ruiz (2000), and Relaño et al. (2002), t^2 = 0.022; Peimbert (1993) and Esteban et al. (1998, 2002), t^2 = 0.024.
The lysosome as a command-and-control center for cellular metabolism

Lysosomes are membrane-bound organelles found in every eukaryotic cell. They are widely known as terminal catabolic stations that rid cells of waste products and scavenge metabolic building blocks that sustain essential biosynthetic reactions during starvation. In recent years, this classical view has been dramatically expanded by the discovery of new roles of the lysosome in nutrient sensing, transcriptional regulation, and metabolic homeostasis. These discoveries have elevated the lysosome to a decision-making center involved in the control of cellular growth and survival. Here we review these recently discovered properties of the lysosome, with a focus on how lysosomal signaling pathways respond to external and internal cues and how they ultimately enable metabolic homeostasis and cellular adaptation.

Introduction

"Lysosome" is a term originally coined by Christian de Duve in 1955 (de Duve, 2005) to describe a newly discovered organelle that housed a pool of soluble hydrolases capable of degrading proteins, nucleic acids, carbohydrates, lipids, and cellular debris. Because of these easily detectable activities, the lysosome quickly earned its reputation as the cell's "trash can" or "recycle bin." There are multiple routes via which lysosomes receive their substrates. In general, extracellular material destined for degradation is delivered to the lysosome via endocytosis (Luzio et al., 2009), whereas intracellular waste is disposed of by the lysosome via a self-catabolic process known as autophagy (Rabinowitz and White, 2010; Singh and Cuervo, 2011). These catabolic events occur in the highly acidic lumen (pH of ~4.5-5.0) of the lysosome, which is segregated from the cytoplasm by a single lipid bilayer. To maintain the steady acidic environment required for its internal hydrolytic activities, the lysosome constantly pumps protons (H+ ions) into its lumen across its limiting membrane by means of the vacuolar H+-ATPase (v-ATPase). This proton gradient also provides the driving force for the proton-coupled transport of metabolites, ions, and soluble substrates into and out of the lysosomal lumen (Forgac, 2007) and is necessary for proper targeting of newly synthesized lysosomal enzymes from the Golgi to the lysosome. Dissipation of the transmembrane proton gradient results in inefficient cargo sorting, altered membrane traffic, impaired degradation of cellular waste, and eventually metabolic derangement (Saftig and Klumperman, 2009).

In addition to its established role in cellular clearance, the lysosome engages in various biological processes including secretion, plasma membrane repair, immune response, cholesterol transport, and metal ion homeostasis, along with recently discovered roles in nutrient sensing and gene regulation (Fig. 1). Multiple lines of evidence have highlighted a close link between lysosomal activities and metabolic regulation at the systemic level. For example, regulation of lysosomal biogenesis and function appears critical for the execution of lipid catabolic programs in the liver (Settembre et al., 2013b). Moreover, inactivating mutations in genes encoding lysosomal hydrolases and transporters result in a spectrum of metabolic diseases known as lysosomal storage disorders (Futerman and van Meer, 2004; Platt et al., 2012; Parenti et al., 2015).
Timely activation of autophagy in neonatal tissues is also necessary for the survival of organisms, as genetic manipulation of several genes involved in autophagy and lysosomal signaling leads to embryonic lethality in mice (Kuma et al., 2004; Komatsu et al., 2005; Efeyan et al., 2013). Yet, we still lack a complete knowledge of the structural and functional organization of the lysosome and of the mechanisms that enable its communication with other cellular compartments. Moreover, we are only beginning to appreciate how lysosomal composition and function evolve dynamically, both within a cell and across different organs and tissues, as organisms transition through different metabolic states. In this review, we summarize recent advances in our current understanding of the molecular mechanisms of lysosomal adaptation, and we discuss how the lysosome may be a key mediator of physiological responses to changing metabolic conditions.

The lysosome as a metabolic signaling center

To cope with ever-changing external conditions, cells have evolved sophisticated signaling pathways that sense available nutrient and energy inputs and couple them with specific metabolic outputs. Many of these pathways, such as insulin-phosphoinositide 3-kinase (PI3K), are organized in a "top-down" manner, as they involve the engagement of a growth factor ligand to its receptor on the cell surface, followed by signal propagation inside the cell (Taniguchi et al., 2006). Growth factor-derived signals trigger changes in the rate of biochemical reactions occurring in the cytoplasm and inside specialized compartments such as mitochondria, peroxisomes, and lysosomes, ultimately steering the cell toward an anabolic or catabolic path (Ward and Thompson, 2012). In contrast to pathways originating at the cell surface, little is known as to whether intracellular organelles are capable of initiating signaling events on their own, particularly in response to changing metabolic conditions, and of communicating their internal status to each other.

Because it represents the endpoint of multiple catabolic pathways, the lysosome also serves as a nutrient reservoir that buffers variations in nutrient availability and can actively modify the composition and abundance of the cytoplasmic metabolite pool. The key role of the lysosome in maintaining metabolic homeostasis emerged early on from studies in yeast, a model organism that offers two key advantages, namely the ability to easily isolate intact, functional vacuoles (the equivalent structure of the mammalian lysosome), coupled with powerful genetic approaches.
It was found that the yeast vacuolar membrane hosts an array of nutrient transporters and permeases that allow bidirectional transport of solutes (Ohsumi and Anraku, 1981; Li and Kane, 2009). Metabolite transport across the vacuolar membrane is highly regulated and leads to the buildup of major stores of cationic amino acids, polyphosphates, ions, and other building blocks that can subsequently be released on demand. Because of the high conservation of lysosomal enzymes and permeases between yeast and mammals, it is likely that the mammalian lysosome has a similar ability for selective retention and release of metabolic building blocks. Through these processes, the lysosome not only can affect the rate of metabolic reactions occurring elsewhere in the cell, but also can communicate the overall metabolic state of the cell to nutrient-sensing modules. One such module is an ancient protein kinase known as the mechanistic target of rapamycin complex 1 (mTORC1), which has recently been shown to be functionally and physically associated with the lysosome from yeast to humans (Sancak et al., 2008, 2010; Sturgill et al., 2008; Binda et al., 2009; Zoncu et al., 2011a). Elucidating the connection between mTORC1 and the lysosome has brought about a paradigm shift in the way we understand lysosome biology.

Functional organization of the mTORC1 pathway at the lysosome

The mTORC1 pathway was identified because its core component, the large (230-kD) serine/threonine kinase mTOR, is the target of the growth-inhibiting macrolide rapamycin (Heitman et al., 1991; Brown et al., 1994; Sabatini et al., 1994). A vast body of research has shown that the main role of mTORC1 is to integrate environmental and intracellular cues, such as growth factors, nutrient availability, energy status, and stresses, to actively drive cell growth and proliferation (Laplante and Sabatini, 2012; Dibble and Manning, 2013). Under favorable growth conditions, mTORC1 and its downstream effectors promote anabolic programs including mRNA translation, ribosome biogenesis, and lipid synthesis. Conversely, under stressful conditions, mTORC1 activities are largely inhibited to give way to catabolic programs such as autophagy, which allow mobilization of nutrient and energy stores.

mTORC1 is a multi-subunit protein kinase complex that, in addition to the core kinase mTOR, includes the large adaptor subunit RAPTOR (KOG1 in yeast), which is thought to mediate substrate binding (Hara et al., 2002; Kim et al., 2002) and subcellular localization of the complex (Sancak et al., 2008, 2010); two components, PRAS40 (Sancak et al., 2007) and DEPTOR (Peterson et al., 2009), that inhibit intrinsic mTOR kinase activity; and the G protein β subunit-like mLST8 (Kim et al., 2003), whose function remains obscure (Guertin et al., 2006). Multiple metabolic inputs including amino acids, glucose, and growth factors control mTORC1 via distinct mechanisms (Jewell and Guan, 2013; Shimobayashi and Hall, 2014; Efeyan et al., 2015), albeit to varying degrees, as none of them alone can fully stimulate mTORC1. In nutrient-starved mammalian cells, mTORC1 is diffuse throughout the cytoplasm. Readdition of nutrients, particularly amino acids, causes the rapid translocation of mTORC1 to the surface of the lysosome. At the lysosome, the kinase activity of mTORC1 is turned on in a growth factor-dependent manner (Sancak et al., 2008, 2010; Zoncu et al., 2011a).
Thus, the prevailing model is that of coincidence detection: for mTORC1 to become fully activated, both local nutrients (particularly amino acids) and long-range nutritional signals carried by insulin must be present (Sancak et al., 2008, 2010; Zoncu et al., 2011a). These discoveries established a key role for the lysosome in nutrient sensing, as no other organelle is able to support mTORC1 recruitment and activation. The lysosome membrane harbors specialized molecular machinery that recruits and activates mTORC1 in response to amino acids (Sancak et al., 2010), and several components of this machinery are conserved all the way to yeast (Chantranupong et al., 2015). However, in yeast, mTORC1 remains stably associated with the vacuolar membrane even when amino acids are low (Binda et al., 2009; Péli-Gulli et al., 2015). This important difference may reflect the more complex cellular organization of higher eukaryotes. For instance, budding yeast lacks a canonical insulin-PI3K pathway, thus negating the need for a coincidence detection mechanism (Efeyan et al., 2012).

Figure 1. Expanding roles of the lysosome in key cellular processes. Lysosomes play pivotal roles in cellular clearance by engaging with either autophagosomes or late endosomes to facilitate the degradation and recycling of internal and external substrates. Upon plasma membrane injury, lysosomes can repair the damaged site by fusing locally with the plasma membrane. Specialized cell types such as cytotoxic T cells and natural killer cells are capable of secreting cytolytic proteins from lysosomes to destroy infected or tumorigenic cells. Furthermore, lysosomes act as a storage site where amino acids, phosphate, ions, and intermediate metabolites can be selectively transported and retained. Emerging evidence indicates that the lysosome functions as a signaling hub for nutrient-sensing pathways converging on the mTORC1 kinase and can elicit a transcriptional response to meet cellular demands for nutrients and energy.

mTORC1 activation by the PI3K-RHEB axis

The small GTPase Ras homologue enriched in brain (RHEB) contains a C-terminal farnesylation motif that mediates its association with the endomembrane system, including, but not limited to, the lysosome, where it serves as a potent activator of mTORC1 kinase activity (Inoki et al., 2003b; Saucedo et al., 2003; Stocker et al., 2003; Zhang et al., 2003). Rheb is indispensable for mTORC1 activation by virtually all stimuli. However, exactly how Rheb turns on mTORC1 is yet to be resolved. Because of its intrinsically slow GTP hydrolysis activity, Rheb is preferentially in its GTP-bound form at all times and therefore has to be kept under stringent regulation by its inhibitor, the trimeric tuberous sclerosis complex (TSC, composed of TSC1, TSC2, and TBC1D7 subunits; Inoki et al., 2003b; Zhang et al., 2003; Dibble et al., 2012). Specifically, the TSC2 component displays GTPase-activating protein (GAP) activity that converts Rheb into its GDP-bound state and therefore negatively regulates mTORC1 (Inoki et al., 2003b; Zhang et al., 2003). Upon stimulation by growth factors such as insulin, the serine/threonine kinase AKT is activated in a PI3K-dependent manner and phosphorylates TSC2 (Dan et al., 2002; Inoki et al., 2002). AKT-dependent TSC2 phosphorylation induces dissociation of TSC from the lysosome, where TSC was shown to reside under growth factor-deprived conditions, and thus blocks its inhibitory effects toward Rheb (Demetriades et al., 2014; Menon et al., 2014).
In addition to growth factors, the activity of TSC is regulated by low energy (Inoki et al., 2003a), hypoxia (Brugarolas et al., 2004; Reiling and Hafen, 2004), and genotoxic stress (Budanov and Karin, 2008), which collectively restrict mTORC1-mediated cell growth. Thus, TSC is one, but not the only, integration node for multiple signals that ultimately affect the kinase output of mTORC1.

mTORC1 recruitment by Rag GTPases

To be activated, mTORC1 needs to translocate to the lysosome membrane where Rheb resides. It turns out that amino acids directly regulate the lysosomal recruitment of mTORC1 by modulating the guanine nucleotide state of the Rag GTPases (Kim et al., 2008; Sancak et al., 2008; Binda et al., 2009). The Rags assemble as obligate heterodimers composed of RagA or RagB (which are similar to each other and homologous to Gtr1 in Saccharomyces cerevisiae) associated with RagC or RagD (homologous to yeast Gtr2) and are tethered to the lysosomal membrane by the pentameric Ragulator complex (known as the Ego complex in yeast), composed of the LAMTOR1-5 proteins (Binda et al., 2009; Nada et al., 2009; Sancak et al., 2010; Bar-Peled et al., 2012; Powis et al., 2015). Under amino acid sufficiency, the Rag GTPase complex becomes active by adopting a nucleotide state in which RagA/B is GTP-loaded and RagC/D is GDP-loaded, and facilitates the lysosomal attachment of mTORC1 by directly interacting with Raptor (Sancak et al., 2010; Bar-Peled et al., 2012). As such, the cycling of Rag heterodimers between their active and inactive states in an amino acid-sensitive fashion is tightly regulated by their corresponding GAPs and guanine nucleotide exchange factors (GEFs), as well as by posttranslational modifications (Table 1). Specifically, Ragulator acts as a GEF that promotes the loading of RagA/B with GTP and thus activates mTORC1 (Binda et al., 2009; Bar-Peled et al., 2012). Two GAP complexes stimulate GTP hydrolysis. Under low amino acids, GATOR1 (SEACIT in yeast) promotes GTP hydrolysis by RagA/B, thus switching off the pathway (Dokudovskaya and Rout, 2011; Panchaud et al., 2013). In contrast, in the presence of amino acids, Folliculin/FNIP (yeast Lst4/Lst7) causes RagC/D to become GDP-loaded, enabling the Rags to bind mTORC1 and recruit it to the lysosome (Tsun et al., 2013; Péli-Gulli et al., 2015). The placement of the Rag GTPases downstream of amino acids in the mTORC1 pathway has provided important clues toward the long-sought-after questions of how and where amino acids are sensed in the cell.

Amino acid sensing inside the lysosome

It was initially proposed that plasma membrane amino acid transporters could be candidates for amino acid sensors because of their roles in controlling the influx of amino acids into the cell (Christie et al., 2002; Beugnet et al., 2003). However, treatment with cycloheximide, a protein synthesis blocker that increases the concentration of free amino acids in the cytoplasm, is sufficient to restore mTORC1 signaling in cells that have been deprived of extracellular amino acids. This evidence strongly suggests that amino acid sensing originates intracellularly (Price et al., 1989; Christie et al., 2002; Beugnet et al., 2003; Sancak et al., 2008). The presence of the molecular machinery for amino acid-regulated mTORC1 activation at the lysosome membrane also implies that amino acids may be sensed somewhere in close proximity to the lysosome.
Similar to the vacuole in yeast, the lysosome appears to accumulate significant amounts of amino acids within its lumen (Harms et al., 1981; Zoncu et al., 2011b). Using a cell-free assay, it was shown that binding of mTORC1 to the Rag GTPases is stimulated by entry of amino acids into the lysosomal lumen. Conversely, both in vitro and in cells, preventing lysosomal amino acid accumulation blocked mTORC1 binding to the lysosomal surface (Zoncu et al., 2011b; Jung et al., 2015; Rebsamen et al., 2015; Wang et al., 2015a). These results are compatible with a model for amino acid sensing by mTORC1 in which accumulation of amino acids in the lysosomal lumen is relayed to the Rag GTPases at the lysosomal surface in an inside-out manner. An RNAi screen in Drosophila melanogaster S2 cells revealed that the v-ATPase is a component of the lysosomal amino acid sensing machinery, along with the Rag GTPases and Ragulator (Zoncu et al., 2011b). The v-ATPase forms a supercomplex with Ragulator and the Rag GTPases, and its catalytic activity is essential for mTORC1 recruitment in response to amino acids (Zoncu et al., 2011b; Bar-Peled et al., 2012; Dechant et al., 2014; Jewell et al., 2015). Although the precise mechanism of action of the v-ATPase in amino acid sensing remains to be elucidated, an attractive possibility is that amino acids may regulate the assembly and/or activity of the complex (Stransky and Forgac, 2015).

Among the 20 amino acids, leucine and arginine are key activators of mTORC1 (Hara et al., 1998; Wang et al., 1998) upstream of the Rag GTPases (Sancak et al., 2008). Of note, arginine, an amino acid crucial for mammalian embryogenesis and early development, is highly concentrated in rat liver lysosomes and yeast vacuoles (Wiemken and Dürr, 1974; Boller et al., 1975; Dürr et al., 1979; Harms et al., 1981; Kitamoto et al., 1988). SLC38A9, a putative sodium-coupled amino acid transporter in the lysosome membrane, has recently been proposed as a sensor that signals arginine sufficiency to mTORC1 (Jung et al., 2015; Rebsamen et al., 2015; Wang et al., 2015a). Biochemical analysis demonstrated that SLC38A9 acts upstream of the Rag GTPases and Ragulator and in parallel with the v-ATPase. In amino acid transport assays using reconstituted liposomes, SLC38A9 transports arginine, but not leucine, into the lysosome, albeit with relatively low affinity compared with other amino acid transporters. Overexpression of the N-terminal cytoplasmic domain of SLC38A9 is sufficient to render mTORC1 signaling resistant to amino acid depletion (Jung et al., 2015; Rebsamen et al., 2015; Wang et al., 2015a), suggesting that this domain acts downstream of the amino acid transport function. Thus, SLC38A9 may function as a "transceptor" that, by transporting arginine across the lysosomal membrane, relays an activating signal toward mTORC1. How arginine binding mechanistically regulates SLC38A9 remains to be determined. Also, the functional relationship between SLC38A9 and the v-ATPase is unclear. Interestingly, deleting SLC38A9 reduces mTORC1 substrate phosphorylation but not its localization to the lysosome (Jung et al., 2015; Rebsamen et al., 2015), whereas v-ATPase inhibition affects both (Zoncu et al., 2011b). Other lysosomal amino acid transporters implicated in mTORC1 activation include a histidine transporter, SLC15A4 (Kobayashi et al., 2014), as well as proton-assisted amino acid transporter 1 (PAT1)/SLC36A1 (Ögmundsdóttir et al., 2012), which has transport specificity toward small neutral amino acids.
Whether and how SLC15A4 and PAT1 function upstream of the Rag GTPases remains to be determined. Glutamine, the most abundant free amino acid in the human body, provides a carbon and nitrogen source for cell growth. On one hand, several studies indicated that glutamine and glutamine-derived metabolites appear to function upstream of the Rag GTPase orthologues, Gtr1 and Gtr2 (Binda et al., 2009; Durán et al., 2012; Péli-Gulli et al., 2015). On the other hand, it was shown that glutamine can stimulate lysosomal translocation and activation of mTORC1 via a Rag GTPase-independent mechanism, as revealed in recent studies using yeast (Stracka et al., 2014) and RagA/B-deleted cells (Jewell et al., 2015). Interestingly, stimulation of mTORC1 by glutamine does not require Ragulator, but still relies on the lysosome and the activity of the v-ATPase. Moreover, ADP-ribosylation factor 1 (ARF1), a Golgi-localized small GTPase, is required in an undefined pathway that links glutamine to mTORC1 localization at the lysosome (Jewell et al., 2015). Further investigations are necessary to determine the location of the mTORC1-activating glutamine pool and to establish the glutamine sensor upstream of Arf1.

Amino acid sensing in the cytoplasm

Recent evidence indicates that cytosolic free amino acids also play a major role in mTORC1 activation. Sestrin-2, a member of the Sestrin family of stress-responsive proteins (Budanov and Karin, 2008), has been shown to be a specific sensor for leucine in mammalian cells (Wolfson et al., 2016). Under leucine deprivation, Sestrin-2 inhibits mTORC1 signaling by sequestering GATOR2, which represses the GAP activity of GATOR1 toward RagA/B. The crystal structure of Sestrin-2 revealed that it contains a leucine-binding pocket localized to its C-terminal domain (Saxton et al., 2016). Binding of leucine disrupts the Sestrin-2-GATOR2 interaction and thus allows GATOR2 to promote mTORC1 activation via inhibition of GATOR1 activity. Mutations of Sestrin-2 that abolish binding to GATOR2 lead to constitutive activation of mTORC1 in the absence of leucine, whereas those that diminish binding to leucine suppress mTORC1 activation regardless of leucine availability. In contrast, a newly characterized vertebrate-specific protein named CASTOR1 (previously named GATSL3) functions as a cytoplasmic arginine sensor for mTORC1 by binding to physiological concentrations of arginine through its conserved ACT domains. The CASTOR1-dependent mechanism of arginine sensing is highly analogous to leucine sensing by Sestrin-2. Under arginine deprivation, homodimers of CASTOR1, or heterodimers of CASTOR1 and CASTOR2, bind tightly to GATOR2. Refeeding of arginine liberates GATOR2 from this inhibitory interaction, thereby promoting mTORC1 activation. Additional work elucidating the structure and function of GATOR2 is required to clarify whether the inhibitory actions of Sestrin-2 and CASTOR1 toward GATOR2 operate through similar or distinct mechanisms. The presence of both cytoplasmic and lysosomal amino acid sensing systems raises intriguing questions about their relative importance and the mechanisms that coordinate their activities upstream of mTORC1.
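Because the regulatory scheme described above involves several nested inhibitions, it can be useful to restate it as explicit Boolean logic. The Python sketch below is a deliberately oversimplified toy model assembled from the relationships stated in this review (Sestrin-2 and CASTOR1 sequestering GATOR2, GATOR2 inhibiting GATOR1, luminal arginine signaling through SLC38A9 and the v-ATPase, and growth factors inactivating TSC); it is not a quantitative model, and the function and argument names are invented for illustration.

```python
def mtorc1_output(leucine_cytosol: bool,
                  arginine_cytosol: bool,
                  arginine_lysosome: bool,
                  growth_factors: bool) -> bool:
    """Toy Boolean model of mTORC1 coincidence detection at the lysosome."""
    # Sestrin-2 sequesters GATOR2 when cytosolic leucine is low;
    # CASTOR1 sequesters GATOR2 when cytosolic arginine is low.
    gator2_free = leucine_cytosol and arginine_cytosol

    # Free GATOR2 inhibits GATOR1, the GAP that would switch off RagA/B.
    # Luminal arginine, relayed through SLC38A9 and the v-ATPase, is also
    # needed for the Rags to adopt their active nucleotide state.
    rags_active = gator2_free and arginine_lysosome

    # Growth factors (via PI3K-AKT) drive TSC off the lysosome,
    # leaving Rheb GTP-loaded and competent to activate mTORC1.
    rheb_gtp = growth_factors

    # Coincidence detection: Rag-mediated recruitment AND
    # Rheb-mediated kinase activation must both occur.
    return rags_active and rheb_gtp

# Amino acids alone recruit mTORC1 but cannot fully activate it:
print(mtorc1_output(True, True, True, growth_factors=False))  # False
print(mtorc1_output(True, True, True, growth_factors=True))   # True
```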
Additional proteins have been proposed to play a role in relaying an amino acid signal to mTORC1, including the Ste20 family kinase MAP4K3 (Findlay et al., 2007; Bryk et al., 2010; Yan et al., 2010), leucyl-tRNA synthetase (Bonfils et al., 2012; Han et al., 2012), the scaffold protein SH3BP4, and the autophagic adaptor p62/SQSTM1 (Duran et al., 2011). Further work is needed to determine the specific role of each protein and how their regulatory inputs are coordinated upstream of mTORC1.

In summary, the aforementioned discoveries are illuminating the pivotal roles of the lysosome in regulating the switch between catabolic and anabolic metabolism, and they foster a unifying model of nutrient sensing in which all the signals from intracellular nutrients and exogenous growth factors are integrated at the lysosomal surface (Fig. 2). This model likely undergoes variations between species or even among different organs and tissues of multicellular organisms. Interestingly, S. cerevisiae lacks Sestrin and CASTOR homologues, suggesting that yeast mTORC1 may preferentially sense amino acids in the vacuole (the main cellular repository for these metabolites) or that it may be more concerned with nitrogen abundance than with the levels of any specific amino acid (Bahn et al., 2007). Also, any proteins or small molecules able to interact with and regulate the mTORC1-activating supercomplex may provide additional regulatory mechanisms. Thus, it is conceivable that additional nutrient inputs upstream of mTORC1 remain to be discovered.

Transcriptional regulation of lysosomal function

For a long time, the lysosome was thought of as a metabolic "dead end," a static compartment that, unlike mitochondria or peroxisomes, is not subjected to feedback regulation by the nutrient state of the cell. This view has been radically altered by the recent discovery of a vast and coordinated transcriptional program controlled by the transcription factor EB (TFEB), along with other members of the microphthalmia-transcription factor E (MiT/TFE) subfamily (Rehli et al., 1999; Sardiello et al., 2009; Settembre et al., 2011). These basic helix-loop-helix (HLH) transcription factors up-regulate the expression of genes encoding lysosomal and autophagic proteins by preferentially binding to a 10-bp GTCACGTGAC motif found within their promoters, termed the coordinated lysosomal expression and regulation (CLEAR) element (Sardiello et al., 2009; Palmieri et al., 2011; Settembre et al., 2011). The nutrient status of the cell, along with other environmental cues, tightly controls the expression of the CLEAR network through the cytoplasmic-nuclear shuttling of TFEB. Under nutrient-rich conditions, TFEB is recruited to the lysosome via its physical interaction with the Rag GTPases. At the lysosomal surface, mTORC1 phosphorylates TFEB on two critical residues, Ser142 and Ser211. Phosphorylated TFEB is then retained in the cytoplasm via binding to 14-3-3 proteins (Martina et al., 2012; Roczniak-Ferguson et al., 2012; Settembre et al., 2012; Martina and Puertollano, 2013). Upon nutrient withdrawal or lysosomal stress, TFEB undergoes dephosphorylation and rapidly translocates to the nucleus to activate the transcription of CLEAR genes, including lysosomal hydrolases, pumps, and permeases, along with autophagic regulatory proteins.
Thus, the net effect of TFEB activation is an increase in autophagic flux matched by an expansion of the lysosomal compartment, thereby boosting the ability of the cell to adapt to nutrient-poor and stressful conditions. Calcineurin, a calcium-dependent phosphatase, is responsible for TFEB dephosphorylation on Ser142 and Ser211 and thus promotes TFEB entry into the nucleus (Medina et al., 2015). Interestingly, it is thought that the lysosomal calcium pool controls calcineurin activation and TFEB dephosphorylation. Starvation triggers release of calcium from the lysosomal lumen via MCOLN1/TRPML1, a multispan lysosomal ion transporter, and inhibition of MCOLN1 by genetic inactivation and pharmacological approaches impaired TFEB nuclear translocation and autophagy induction (Medina et al., 2015; Wang et al., 2015b). MITF and TFE3, which are also members of the MiT/TFE family, can form homodimers or heterodimers with TFEB and are regulated by similar mechanisms (Roczniak-Ferguson et al., 2012; Martina and Puertollano, 2013). How starvation increases MCOLN1-mediated lysosomal Ca2+ release remains elusive. A recent study (Wang et al., 2015b) has suggested that nutrient-induced changes in the levels of phosphatidylinositol-(3,5)-bisphosphate, a known activator of TRPML1, may be involved in this process.

Figure 2. The lysosome as a signaling hub for nutrient sensing. Under growth-promoting conditions, signals from amino acids, energy, oxygen, and growth factors are integrated upstream of the Rag and Rheb GTPases to facilitate the recruitment and activation of mTORC1. Loss of any of these inputs leads to shutdown of mTORC1 signaling by blocking its lysosomal recruitment, kinase activation, or both.

Phosphoproteomic studies revealed that TFEB possesses multiple phosphorylation sites (Dephoure et al., 2008; Peña-Llopis et al., 2011; Settembre et al., 2011; Ferron et al., 2013), suggesting that mTORC1-independent signaling pathways could also modulate the nuclear translocation of TFEB. Consistently, the ERK2 kinase phosphorylates TFEB at serine 142 in response to growth factor stimulation and restricts its nuclear localization (Settembre et al., 2011). MITF is also subjected to phosphorylation by c-kit and WNT signaling on sites that are conserved in TFEB and TFE3 (Wu et al., 2000; Ploper et al., 2015). All these findings indicate that the phosphorylation-dependent regulation of TFEB represents a universal mechanism of lysosomal adaptation to combat cellular stresses.

TFEB regulation and function are evolutionarily conserved from nematodes to humans, and at the organismal level TFEB-driven transcriptional responses mediate important physiological processes such as lipid catabolism, longevity, and organismal survival. Experimental evidence indicates that TFEB expression is up-regulated in mice after food deprivation or energy expenditure (Settembre et al., 2013b; Medina et al., 2015). Overexpression of TFEB in mouse liver attenuated diet-induced obesity by promoting lipid catabolism. In contrast, lipid degradation pathways were impaired in hepatocytes from liver-specific TFEB knockout mice (Settembre et al., 2013b). These liver-specific functions of TFEB result from its ability to activate a transcriptional program for lipid catabolism through direct up-regulation of peroxisome proliferator-activated receptor α (PPARα) and peroxisome proliferator-activated receptor gamma coactivator 1α (PGC1α), which are key regulators of lipid breakdown in response to starvation.
Thus, TFEB-mediated programs allow the organism to derive energy from the stored lipids, linking lysosomal function to the maintenance of cellular energy balance (Rabinowitz and White, 2010; Singh and Cuervo, 2011). A TFEB-mediated adaptive response could also contribute to the extended lifespan seen in the nematode Caenorhabditis elegans, in which the TFEB homologue, known as HLH-30, acts similarly to its human counterpart to promote lipid mobilization and autophagy in fasting worms (Kaeberlein et al., 2006; Lapierre et al., 2013; O'Rourke and Ruvkun, 2013), whereas loss of HLH-30 diminishes the starvation-induced lifespan extension (Settembre et al., 2013b). Further investigation is required to determine whether modulating the expression and activity of TFEB would impact the lifespan of higher organisms. Of note, the capability of TFEB to promote cellular clearance could also be exploited to develop novel therapeutics for diseases associated with lysosomal and autophagic dysfunction such as lysosomal storage diseases (Settembre et al., 2013a; Spampanato et al., 2013; Lim et al., 2015) and common neurodegenerative diseases including Parkinson's, Alzheimer's, and Huntington's diseases. It was observed that TFEB activation by overexpression or pharmacological stimulation can attenuate protein aggregation in cellular and mouse models of neurodegenerative disease, likely through increased autophagic clearance of protein aggregates (Polito et al., 2014; Xiao et al., 2014, 2015; Chauhan et al., 2015). At the organismal level, elimination of the TFEB gene in mouse is embryonically lethal because of defects in placental vascularization (Steingrímsson et al., 1998). Prosurvival effects of autophagy elicited by TFEB may also induce metabolic reprogramming that favors cancer growth, for example, by deliberately accumulating nutrients such as amino acids in lysosomes. One notable example is human pancreatic ductal adenocarcinoma, which displays an increased number of lysosomes and enlarged autophagosomes as a result of constitutive nuclear localization and activation of MiT/TFE transcription factors, which are decoupled from mTORC1 regulation (Perera et al., 2015). Moreover, gene fusions involving TFE3 or TFEB have been identified in patients with sporadic renal cell carcinoma (Komai et al., 2009; Mosquera et al., 2011; Zhong et al., 2012). A feature common to all TFE3 and TFEB fusion proteins is the retention of the wild-type protein C-terminus that is required for DNA binding, dimerization, and nuclear localization. How the translocated TFE3 and TFEB genes contribute to renal carcinogenesis remains poorly understood. A detailed investigation of the transcriptional programs that become constitutively activated will shed light on this important question (Kauffman et al., 2014; Magers et al., 2015).

The MiT/TFE proteins are prominent members of a rapidly expanding group of transcription factors involved in autophagy-lysosome gene regulation. Emerging evidence has also suggested a key role for the forkhead box O (FOXO) transcription factor family in the regulation of autophagy (Webb and Brunet, 2014). Insulin and growth factor signaling negatively regulate FOXO transcriptional activity through AKT/SGK1-dependent phosphorylation, leading to FOXO exclusion from the nucleus and inhibition of its transcriptional activity (Biggs et al., 1999; Brunet et al., 1999, 2001; Kops et al., 1999). During starvation, when insulin and growth factors are absent, FOXO translocates into the nucleus and activates the expression of genes involved in stress response, metabolism, and cellular quality control (Calnan and Brunet, 2008). It was shown that FOXO3 is required for fasting-induced autophagy in muscles (Mammucari et al., 2007; Zhao et al., 2007). Overexpression of FOXO3 is sufficient to induce autophagosome formation, as revealed by increased foci of LC3-GFP in C2C12-derived myotubes and primary mouse myofibers, whereas knockdown of FOXO3 leads to decreased autophagosome formation (Mammucari et al., 2007; Zhao et al., 2007). Chromatin immunoprecipitation analysis in mouse muscle cells demonstrated that FOXO3 directly binds to the promoters of key autophagy genes including LC3b, Gabarapl1, Atg12l, Bnip3, and Bnip3l (Mammucari et al., 2007; Zhao et al., 2007). Therefore, FOXO3 may synergize with TFEB to maintain muscle functionality during fasting.

Just as turning on autophagy and lysosomal biogenesis in response to nutrient scarcity is critical for cellular survival, turning off these processes is equally important, as it allows cells to readjust their metabolic requirements when nutrients are replete. ZKSCAN3, a zinc finger transcription factor containing KRAB and SCAN domains previously identified as a "driver" of cell proliferation (Ma et al., 2007; Yang et al., 2008), has been proposed as a master transcriptional repressor of autophagy (Chauhan et al., 2013). ZKSCAN3 directly represses the expression of a repertoire of genes involved in sequential steps of the autophagic process ranging from lysosome biogenesis to trafficking and autophagosome-lysosome fusion. Similar to TFEB, ZKSCAN3 activity is regulated by nuclear-cytoplasmic shuttling but in the opposite way. Nutrient deprivation or mTORC1 inhibition triggers cytoplasmic localization of ZKSCAN3 and silences its activity, whereas nutrient-rich conditions promote nuclear translocation of ZKSCAN3, leading to suppression of the autophagic response (Chauhan et al., 2013). Hence, by switching on and off multiple components of the autophagy-lysosome system through reciprocal regulation of TFEB and ZKSCAN3, the lysosome exerts its adaptation to meet metabolic demands according to nutrient levels. Farnesoid X receptor (FXR), which is a bile acid-activated nuclear receptor involved in regulation of bile acid, lipid, and glucose homeostasis, has also been shown to negatively regulate autophagy in the liver through multiple mechanisms (Seok et al., 2014). In the fed state, FXR competes with PPARα for a common binding site in the promoter regions of key autophagy genes, resulting in their repression. Interestingly, mTORC1 also attenuates PPARα activity by promoting nuclear translocation of nuclear receptor corepressor 1, which is a transcriptional repressor of PPARα (Sengupta et al., 2010). Furthermore, in fed mice, FXR blocks the transcriptional activity of cAMP-responsive element binding protein (CREB) by disrupting the functional interaction between CREB and its coactivator CRTC2. Because CREB promotes the expression of TFEB and other autophagic regulators, decreased assembly of the CREB/CRTC2 complex suppresses catabolism (Seok et al., 2014). Conversely, under fasting conditions, FXR becomes inactive, thus leading to de-repression of the transcriptional activity of TFEB, PPARα, and the CREB-CRTC2 complex. Together, these studies establish a mechanistic link between nutrient-sensing transcription factors/nuclear receptors and the regulation of autophagy and lysosomal function and delineate an integrated regulatory network for metabolic adaptation of increasing complexity.

Non-mTORC1-related signaling functions of the lysosome
In addition to regulating mTORC1, the lysosome plays an important role in other major signaling pathways by mediating either the breakdown of activated receptors for signal termination or the proteolytic activation of signaling ligands. Two notable examples are discussed here: (1) In canonical receptor tyrosine kinase (RTK) pathways, binding of the epidermal growth factor (EGF) to its receptor EGFR/ErbB1 at the cell surface triggers receptor dimerization and cross-phosphorylation of the EGFR intracellular kinase domains; phosphorylated domains then recruit and activate effectors such as the small GTPase Ras and the lipid kinase PI3K. Activation of these effectors is essential for the propagation of the EGF-initiated signal to a set of protein kinases such as the MAPK/ERK kinases, which convert the initial ligand-receptor binding into a mitogenic response (Avraham and Yarden, 2011; Tomas et al., 2014). The degradation of activated EGFR in the lysosome is a key step for termination of this highly mitogenic signal. Phosphorylated EGFR triggers its own ubiquitination by the E3 ligase Cbl; endocytic adaptors containing ubiquitin-interacting motifs, such as epsin and Eps15, recognize ubiquitinated EGFR and promote its internalization into endocytic vesicles (Tomas et al., 2014). Internalized EGFR is then trafficked via Rab5-positive early endosomes to Rab7-positive late endosomes and progressively removed from the endosomal-limiting membrane via ESCRT-mediated budding of intraluminal vesicles. Through further rounds of fusion, these EGFR-loaded late endosomes then convert into mature lysosomes, wherein cathepsin proteases and lipases degrade the EGFR-loaded intraluminal vesicles (Tomas et al., 2014). The key role of this lysosome-based degradative pathway in signal down-regulation is highlighted by the presence of inactivating mutations of Cbl in various malignancies (Makishima et al., 2009; Tan et al., 2010), resulting in constitutive EGFR signaling at the plasma membrane. Moreover, this mode of regulation is shared by other RTKs, including platelet-derived growth factor receptor and insulin-like growth factor receptor. Thus, in the context of RTK signaling, the lysosomal lumen functions primarily as an endpoint for signal down-regulation, thereby playing an important role in limiting the mitogenic effect of EGFR. (2) The lysosome is also an important signaling station for innate immunity. This is an ancient pathogen-defense system in which specialized pattern recognition receptors (PRRs) recognize and bind to molecular signatures known as pathogen-associated molecular patterns shared by several classes of pathogens. A prominent class of PRRs is the Toll-like receptor (TLR) family (O'Neill et al., 2013). TLRs consist of leucine-rich repeat motifs in an antigen-binding ectodomain, a single-pass transmembrane portion, and an intracellular Toll-IL-1 receptor (TIR) domain. Upon binding of the ectodomain to microbial ligands such as viral or bacterial proteins and nucleic acids, the ectodomain undergoes a conformational change that leads to the recruitment of specific adaptor proteins to the TIR. This binding event initiates a signaling cascade that mounts several anti-pathogen responses, including secretion of inflammatory cytokines and anti-microbial peptides, as well as activation of dendritic cells. Of the 13 known TLRs, TLR 3, 7, 8, and 9 localize to the endolysosomal compartment, with their ectodomain protruding into the lumen and the TIR facing the cytoplasm. These TLRs specialize in recognizing nucleic acids, which are released from invading pathogens that were taken up in intracellular compartments. Localization of TLRs to endocytic compartments is thought to prevent them from recognizing "self" nucleic acids and thus mounting an autoimmune response. Moreover, full activation of these TLRs requires proteolytic processing of their ectodomain by proteases such as asparagine endopeptidases and cathepsins in the acidic lumen of the lysosome (Lee and Barton, 2014). After ligand binding and activation, TLR3 and TLR9 recruit adaptor proteins, such as TRIF and MyD88, respectively, to their TIR, triggering parallel signaling cascades that culminate with the activation of transcription factors and the release of inflammatory cytokines and interferons. Thus, in the context of innate immunity, the lysosomal lumen provides an ideal environment in which TLRs become fully activated and where they bind to their respective ligands, whereas the cytoplasmic face provides a platform for the recruitment of secondary effectors that propagate the pathogen-initiated signal all the way to the nucleus.

Concluding remarks
The discovery of the lysosome-centric signaling networks for nutrient sensing and metabolic adaptation described herein has projected this organelle into the pilot's seat of cellular physiology. Clearly, more studies are required to achieve a comprehensive understanding of how the lysosome's many parts interact under various physiological and pathological conditions. Increasing evidence also suggests that this organelle may constantly communicate with other cellular structures to carry out specific metabolic programs. For instance, a contact site between the yeast mitochondria and vacuole named vCLAMP (vacuole and mitochondria patch) provides an alternative route for phospholipid transfer to the conventional route via mitochondria-endoplasmic reticulum contacts, and thus participates in mitochondria biogenesis (Elbaz-Alon et al., 2014; Hönscher et al., 2014). From a cell biological standpoint, studying lysosomal organization and plasticity will answer longstanding questions regarding the functional diversity of lysosomes in different tissues and organs, which is mediated by tissue- and cell-type specific gene expression but is also likely influenced by local metabolic conditions and age. Deciphering the molecular basis that determines the differences in lysosomal composition and function will help us understand how the lysosome acquires specialized functions to carry out specific metabolic tasks. A good example is provided by lysosome-related organelles known as melanosomes, which specialize in the synthesis and storage of melanin pigment (Raposo and Marks, 2002, 2007). The molecular composition of melanosomes changes through sequential and well-defined stages of maturation (Raposo and Marks, 2002, 2007). Moreover, melanosomes function differently according to cell type. In retinal pigment epithelial cells, melanosomes help detoxify phagocytosed photoreceptor outer membranes, whereas in epidermal melanocytes, melanosomes contribute to generation of the pigmentation of skin and hair by supplying melanins to neighboring keratinocytes (Dell'Angelica, 2003).
Importantly, loss of ability to synthesize pigments and disorganization of melanosomal structures are associated with development of malignant melanoma. Hence, functional characterization of the molecular components of melanosomes throughout different stages of maturation and across cell types will not only provide insights into how they deviate from conventional lysosomes, but also help unravel the pathogenesis of melanoma. A further emerging aspect is the heterogeneity of lysosomes within a cell. Lysosomes appear to have different abilities to internally acidify and, potentially, to generate metabolic signals (Korolchuk et al., 2011; Johnson et al., 2016). These intrinsic differences may stem from the positioning of lysosomes within cells, which is controlled by specialized protein complexes at the lysosomal surface and by the activity of lysosomal ion channels (Pu et al., 2015; Li et al., 2016). With rapid advances in genomic editing and proteomic technologies, comparative analyses of lysosomal composition and function will allow us to better appreciate the important contributions of this organelle to many aspects of cellular metabolism, organismal physiology, and disease.
Electromagnetic Manifestation of Earthquakes
In a joint analysis of the results of recording the electrical component of the natural electromagnetic field of the Earth and the catalog of earthquakes in Kamchatka in 2013, unipolar pulses of constant amplitude associated with earthquakes were identified, whose activity is closely correlated with the energy of the electromagnetic field. To explain this, a hypothesis about the cooperative character of these impulses is proposed.

Introduction
In this paper, the 2013 data on seismic activity (provided by the Kamchatka branch of the Geophysical Service of the Russian Academy of Sciences) and the signal of the vertical electric component of the electromagnetic field in the ELF-VLF range (registered at the Karymshin observation station of IKIR FEB RAS) were compared.

Data analysis
During the visual analysis of the waveforms of the initial data in the time vicinity of the earthquake moment, attention was drawn to the presence of unipolar pulses accompanying earthquakes. The amplitude of these pulses is ten times higher than the background value, while their shape remains unchanged. The pulse duration is about 0.4 milliseconds (Figure 1). For the subsequent analysis of this phenomenon, matched filtering of the original data was carried out using a reference sample obtained by averaging the anomaly values (Fig. 2): a stable anomaly of the electric component of the electromagnetic field that appears near the moment of the main earthquake shock (above: an individual anomaly; below: the anomaly averaged over 50 individual realizations). Figure 3 shows the results of signal processing using as an example the earthquake of 2013/02/28 14:00.

Pulse properties
Analysis of these graphs shows that:
- at the time of the earthquake, the activity (the number of pulses per unit time) of unipolar pulses corresponds to the background level;
- the activity of unipolar pulses reaches its maximum 2-4 hours before and after the earthquake;
- the activity of unipolar pulses is extremely closely correlated with the moments of maximum intensity values;
- the amplitudes of different unipolar pulses are the same.
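As a rough illustration of the matched filtering step described above (cross-correlating the raw record with a reference template obtained by averaging aligned pulse anomalies), the following Python sketch shows one possible implementation. The function names, threshold, and detection rule are our assumptions for illustration; the paper does not give its processing code.

```python
import numpy as np
from scipy.signal import correlate

def matched_filter(signal, template):
    """Cross-correlate the raw record with a normalized reference pulse.

    signal: 1-D array, the vertical electric component record.
    template: 1-D array, the reference sample obtained by averaging ~50
    aligned pulse anomalies (about 0.4 ms long at the sampling rate used).
    """
    t = (template - template.mean()) / np.linalg.norm(template)
    return correlate(signal - signal.mean(), t, mode="same")

def detect_pulses(signal, template, k=10.0):
    # The unipolar pulses are reported to exceed the background roughly
    # tenfold, so a k-sigma threshold is a plausible detection rule.
    out = matched_filter(signal, template)
    threshold = k * out.std()
    return np.flatnonzero(out > threshold)   # candidate pulse positions
```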
Model of deformation process
As is known, the deformation process of a solid body takes two extreme forms: an evolutionary one, in which a continuous change of the body is observed, and a catastrophic one, in which the structure of the body changes abruptly. The process of evolutionary deformation is accompanied by the birth and elimination of structures that impede the evolutionary process of deformation (stoppers), formed from stronger structural heterogeneities by the piling up of these heterogeneities into an impassable formation. These stoppers can be eliminated in two ways: either as a result of thermal diffusion, or by disruption under the accumulated stress. If the rate of birth of stoppers is lower than the rate of their annihilation, then evolutionary aseismic development of deformation occurs. The equality of birth and annihilation rates corresponds to the bifurcation region, where intermittency of the evolutionary and catastrophic forms is observed. A birth rate greater than the annihilation rate leads to a deformation process with an increasing number of stoppers, an increase in the stress to the threshold of brittle fracture of the rock, the subsequent disruption of the stoppers, relaxation of the stresses, and a transition to a new equilibrium state. In such a case, the process is catastrophic and accompanied by seismic manifestations.

The presence of deformational electromagnetic transformation mechanisms inherent in the rock leads to the existence of electromagnetic satellites of seismic/acoustic perturbations of the earth's crust [1][2][3]. Since charges of different signs have different mobility and interact differently with stoppers, during the formation of a stopper electric charges of the same sign accumulate, and therefore charges of different signs are separated. Simultaneously with the accumulation of charge, the environment is polarized to compensate for the excess charge, and a compensating cloud with a core of accumulated charges is formed. With the subsequent disruption of the stopper, the cloud breaks, during which the total dipole moment increases many times. The rate of growth of the dipole moment is determined by the velocity of the substance motion during the stopper detachment. After the relaxation of mechanical stresses, relaxation of the dipole moment arises.

Fig. 1. Unipolar pulses of electromagnetic radiation appearing near the time of the earthquake. Electrical component. Unipolar impulses are singled out.
Fig. 2. Initial data and processing results. 1 - initial signal; 2 - signal subjected to optimal filtering; 3 - rms value of the original signal; 4 - dispersion of the original signal. The moment of the earthquake is marked by the arrow.
Fig. 3. Initial data and processing results. 1 - initial signal; 2 - signal subjected to optimal filtering; 3 - rms value of the original signal; 4 - dispersion of the original signal. The moment of the earthquake is marked by the arrow.
Dead Pixel Test Using Effective Receptive Field
Deep neural networks have been used in various fields, but their internal behavior is not well known. In this study, we discuss two counterintuitive behaviors of convolutional neural networks (CNNs). First, we evaluated the size of the receptive field. Previous studies have attempted to increase or control the size of the receptive field. However, we observed that the size of the receptive field does not describe the classification accuracy. The size of the receptive field would be inappropriate for representing superiority in performance because it reflects only depth or kernel size and does not reflect other factors such as width or cardinality. Second, using the effective receptive field, we examined the pixels contributing to the output. Intuitively, each pixel is expected to contribute equally to the final output. However, we found that there exist pixels in a partially dead state with little contribution to the output. We reveal that the reason for this lies in the architecture of the CNN and discuss solutions to reduce the phenomenon. Interestingly, for general classification tasks, the existence of dead pixels improves the training of CNNs. However, in a task that captures small perturbations, dead pixels degrade the performance. Therefore, the existence of these dead pixels should be understood and considered in practical applications of CNNs.

Introduction
Deep neural networks have demonstrated remarkable performance in various fields such as object detection, semantic segmentation, and image classification (Redmon et al. 2016; Long, Shelhamer, and Darrell 2015; Ronneberger, Fischer, and Brox 2015; He et al. 2015; Tan and Le 2019). The performance of deep neural networks varies depending on the architecture. State-of-the-art results have been obtained by designing large neural networks with increased depth (He et al. 2016), width (Zagoruyko and Komodakis 2016), and cardinality (Xie et al. 2017). In architecture design, one of the factors often considered is the receptive field (Araujo, Norris, and Sim 2019). For example, if two 3 × 3 convolutions are applied, a resulting feature covers a 5 × 5 area. As such, the pixel-level area covered by a specific feature is called the theoretical receptive field. Meanwhile, the effective receptive field was proposed by Luo et al. (2016). Contrary to the square-shaped theoretical receptive field, the effective receptive field illuminates the actually activated pixels through gradients, whose shape appears as a 2D Gaussian. Many studies have preferred enlarging the receptive fields to obtain performance gains (Tsai et al. 2018; Fu et al. 2018; Singh and Davis 2018; Kim, Lee, and Lee 2016; Johnson, Alahi, and Fei-Fei 2016; Shi et al. 2020; Plötz and Roth 2018). Further, Araujo, Norris, and Sim (2019) conjectured a relationship between the size of the receptive field and the classification accuracy. However, this study points out that these common practices should be reconsidered. For modern convolutional neural networks (CNNs), we measured the size of the receptive field. We observed that a large receptive field cannot guarantee the performance superiority of the neural network. For example, some neural networks exhibit high accuracy but have a smaller receptive field.
This is because the size of the receptive field reflects only the depth or kernel size and does not reflect the width or cardinality. In addition to examining the size of the effective receptive field, we investigate the shape of the effective receptive field. We further obtained the effective receptive field of the final output. Conventionally, for a CNN, every pixel, or at least an adjacent pixel, is expected to contribute almost equally to the final output. In other words, it would be strange if a pixel at a specific location were partially dead, and the dead pixel had little effect on the output for any data. Surprisingly, we found that such partially dead pixels exist. In modern CNNs such as ResNet, strong pixels and weak pixels exist for any data. Here, the contribution to the output is significantly different for strong pixels and weak pixels. We will show that this pixel sensitivity imbalance is significant even for adjacent pixels. This pixel sensitivity imbalance occurs when an operation with an odd-sized kernel is applied with stride 2. A solution to this problem is provided in Section 4. Is pixel sensitivity imbalance a bug or a feature? We compared the performance of CNNs after reducing the pixel sensitivity imbalance. Interestingly, pixel sensitivity imbalance does not degrade but rather enhances the neural network's performance. In this respect, pixel sensitivity imbalance is a feature for general vision tasks. However, when pixel sensitivity imbalance exists, it is difficult to capture small perturbations in images. In contrast, when the pixel sensitivity imbalance is reduced, the neural networks easily capture small perturbations in images. In this regard, pixel sensitivity imbalance is a bug for some special tasks.

Preliminaries: Receptive Field
In the theoretical receptive field, the largest pixel-level area covered by the target feature is investigated by tracing backward operations in the CNN. For example, if three 3 × 3 convolutions are applied, one target feature has a 7 × 7 theoretical receptive field. If any of these operations have a stride greater than 1, the target feature will cover a larger area, resulting in a wider theoretical receptive field (Araujo, Norris, and Sim 2019). However, as the theoretical receptive field is the theoretical maximum area covered by a target feature, it is far from the practical behavior of the neural network. In the effective receptive field, gradients are used to examine the actual pixels that affect the target feature. Contrary to the theoretical receptive field, which appears as a square, the effective receptive field appears as a 2D Gaussian. Here we provide a detailed formulation of our trick to obtain the effective receptive field. Suppose an image I_xyz ∈ R^(224×224×3) is given. The image is passed through the given CNN, resulting in a target feature map A_ijk ∈ R^(7×7×Nc). For the effective receptive field, the goal is to represent the spatial relationship between pixel-level (x, y) and feature-level (i, j). Therefore, the channel of the image and feature map should be ignored and averaged. First, we define F = (1/Nc) ∑_k A_(4,4,k), which is the averaged feature over channel k for the spatial center (4, 4) in the target feature map. Then we compute the gradient w.r.t. the image, ∂F/∂I_xyz. By averaging the gradient over the channel z, we obtain G_xy = (1/3) ∑_z ∂F/∂I_xyz, which represents how pixel (x, y) affects the central feature for the given image. However, the G_xy from a single image is sparse and depends on the image. By averaging G_xy over a sufficiently large number of data, the nature of the neural network can be obtained. However, if some G_xy has a negative value, it cancels out with a positive G_xy. As we want to obtain the accumulation of pixel contributions, we ignore negative importance (Selvaraju et al. 2017; Chattopadhay et al. 2018). Thus, we pass G_xy through the ReLU (Glorot, Bordes, and Bengio 2011): R_xy = mean over images of ReLU(G_xy). Now, R_xy represents the general contribution property of pixel (x, y) to the target feature, i.e., the effective receptive field. In summary, we need first to calculate G_xy for each image, pass it through ReLU, and then average it over a sufficiently large dataset. In the modern deep learning environment using mini-batches, applying ReLU to the gradient for each image can be difficult. We recommend using batch size 1 to correctly accumulate ReLU(G_xy) for each image. Accumulating ReLU(G_xy) over a sufficiently large amount of data yields a clean, high-quality effective receptive field R_xy that well describes the internal behavior of a neural network.
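The accumulation procedure above translates directly into a short PyTorch routine. The sketch below is our illustration, not the authors' released code; the hook on layer-4, the 0-based center index 3 standing in for the paper's 1-based (4, 4), and the `dataset` iterable are assumptions.

```python
import torch
from torchvision import models

# Minimal sketch of the effective-receptive-field accumulation described
# above, assuming a ResNet whose layer-4 feature map is 7x7xNc and an
# iterable `dataset` of (3, 224, 224) image tensors.
model = models.resnet50(pretrained=True).eval()

feats = {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(out=o))

def erf_single(image):
    image = image.unsqueeze(0).requires_grad_(True)   # (1, 3, 224, 224)
    model(image)
    fmap = feats["out"]                               # (1, Nc, 7, 7)
    f = fmap[0, :, 3, 3].mean()                       # F: channel-averaged
    # central feature (index 3 = the paper's 1-based center (4, 4))
    g = torch.autograd.grad(f, image)[0][0]           # dF/dI, (3, 224, 224)
    g = g.mean(dim=0)                                 # average over z -> G_xy
    return torch.relu(g)                              # keep positive parts only

R = torch.zeros(224, 224)
for n, image in enumerate(dataset, start=1):          # batch size 1, as advised
    R += erf_single(image)
R /= n                                                # effective receptive field R_xy
```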
Size Test on the Effective Receptive Field
Here, we investigate the size of the effective receptive field of modern CNNs. Target CNNs are ResNet and its variants (He et al. 2016; Zagoruyko and Komodakis 2016; Xie et al. 2017), which are widely used in various vision tasks. We used torchvision.models (Paszke et al. 2017, 2019) pre-trained on ImageNet (Russakovsky et al. 2015). For each model, we summarized the reported top-1 and top-5 accuracy. We also computed the size of the theoretical receptive field for each model. Note that since ResNet differs in detailed architecture for each implementation, the sizes of the theoretical receptive field for torchvision.models are different from those of the TensorFlow models (Araujo, Norris, and Sim 2019; Silberman and Guadarrama 2016). For each pre-trained model, we obtained an effective receptive field using the test dataset from the CUB-200-2011 dataset (Wah et al. 2011). Here, we set the target feature map as the last 7 × 7 feature map of layer-4. For the effective receptive field using the 14 × 14 layer-3 as the target feature map, refer to the supplementary material. The obtained effective receptive field was fitted with a 2D Gaussian using the Lmfit library (Newville et al. 2016). The resulting σ_X and σ_Y indicate how large the effective receptive field is. These results are summarized in Table 1. Our major observations are summarized as follows.

Observation 1. The size of the theoretical receptive field does not describe the classification accuracy. We observed that the classification accuracy of CNNs is not proportional to the size of the theoretical receptive field. For example, Wide-ResNet-50-2 and ResNet-152 show similar classification accuracy, but the sizes of their theoretical receptive fields are 427 and 1451 pixels, respectively. This is because the theoretical receptive field reflects only depth or kernel size and cannot reflect width or cardinality. On the other hand, ResNet-34 has a large theoretical receptive field of 899 pixels because it uses early convolution with stride 2 in residual blocks, unlike ResNet-50. As such, ResNet-34 has a wider theoretical receptive field than ResNet-50, but its classification accuracy is lower. These observations are inconsistent with the conjecture (Araujo, Norris, and Sim 2019) that the classification accuracy tends to be proportional to the size of the theoretical receptive field.

Observation 2. The size of the effective receptive field does not describe the classification accuracy. We observed that the classification accuracy of CNNs is also not proportional to the size of the effective receptive field. In other words, even from the viewpoint of the effective receptive field, a large receptive field does not guarantee superiority in performance. Meanwhile, when the depth increases within these ResNets, the size of the effective receptive field does not increase further and saturates to a certain size. These results are different from the study of Luo et al. (2016), which reported that the size of the effective receptive field tends to be proportional to √depth.

Table 1: For ResNet and its variants, we summarize top-k classification accuracy (%), theoretical receptive field (TRF) size, effective receptive field size (σ_X, σ_Y), and R² from fitting. Contrary to previous studies, we observed that the classification accuracy was not proportional to the size of the theoretical receptive field. The size of the effective receptive field also did not show a tendency consistent with the classification accuracy.

The same experiment was performed once more. First, each pre-trained ResNet was fine-tuned on the Caltech-101 dataset (Fei-Fei, Fergus, and Perona 2004). We replaced the last fully connected layer to output 101 classes. For training, stochastic gradient descent with momentum 0.9 (Sutskever et al. 2013), learning rate 0.01, weight decay 0.0005, batch size 64, 200 epochs, and a cosine annealing schedule with 200 iterations (Loshchilov and Hutter 2016) was used. For data augmentation, random resized crop with size 256, random rotation with degree 15, color jitter, random horizontal flip, center crop with size 224, and mean-std normalization were applied. The train/val/test set was split at a ratio of 70:15:15. Within 200 epochs, the model with the best validation accuracy was obtained and evaluated. For each fine-tuned model, an effective receptive field was obtained using the test dataset, and its size was investigated (Table 2). Similarly, the sizes of the theoretical receptive field and the effective receptive field do not agree with the trends in classification accuracy. Therefore, we conclude that the size of the receptive field is not a representative indicator of classification accuracy, nor of architectural superiority.
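For the fitting step above, the paper uses the Lmfit library; the following self-contained sketch substitutes scipy.optimize.curve_fit for illustration and recovers σ_X, σ_Y, and the R² goodness of fit reported alongside them.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_2d(xy, amp, x0, y0, sx, sy, offset):
    # Axis-aligned 2D Gaussian plus a constant offset, flattened for curve_fit.
    x, y = xy
    g = amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                       + (y - y0) ** 2 / (2 * sy ** 2))) + offset
    return g.ravel()

# R: the accumulated (224, 224) effective receptive field as a numpy array.
x, y = np.meshgrid(np.arange(R.shape[1]), np.arange(R.shape[0]))
p0 = [R.max(), 112, 112, 20, 20, 0.0]             # rough initial guess
popt, _ = curve_fit(gaussian_2d, (x, y), R.ravel(), p0=p0)
sigma_x, sigma_y = abs(popt[3]), abs(popt[4])      # the reported ERF "size"

resid = R.ravel() - gaussian_2d((x, y), *popt)     # goodness of fit (R^2)
r2 = 1 - (resid ** 2).sum() / ((R.ravel() - R.mean()) ** 2).sum()
```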
Shape Test on the Effective Receptive Field
In the previous section, we fitted each effective receptive field to a 2D Gaussian. Although R² was near 0.9, those effective receptive fields did not perfectly match the 2D Gaussian. To understand this behavior, we visualize the obtained effective receptive field. For ResNeXt-101-32x8d, we plotted the effective receptive field (Figure 1). Although the effective receptive field appears as a 2D Gaussian, a checkerboard pattern exists inside. Therefore, the effective receptive field imperfectly matched the 2D Gaussian because of the internal checkerboard pattern. Additionally, we accumulated ∂y/∂I to obtain an effective receptive field of the output y. This is what we call the dead pixel test. In general, all pixels are expected to contribute almost equally to output y. However, even for the effective receptive field of y, we discovered that the checkerboard pattern exists. The existence of this checkerboard pattern implies that modern CNNs recognize images in a highly counterintuitive way. Some pixels are weak, partially dead, and hardly contribute to the output. Conversely, some pixels are strong and more sensitive to the output. We call this phenomenon pixel sensitivity imbalance. As the checkerboard pattern appears locally, even in adjacent pixels, the pixel sensitivity differs significantly.

Why does the checkerboard pattern appear? We found that it occurs when an odd-sized kernel is applied with stride 2 (Figure 2). For example, when a 3 × 3 convolution is applied with stride 2, overlapping regions appear. Pixels within the overlapping regions are referenced more in the operation, while other pixels are not. As this phenomenon accumulates, some pixels become more influential while others do not. When viewed in 2D, a checkerboard pattern appears. This phenomenon is highly similar to the checkerboard pattern when using deconvolution in image generation tasks (Odena, Dumoulin, and Olah 2016). Extending this, we emphasize that the checkerboard pattern exists from the perspective of the gradient even when using convolution. Despite these potential problems, odd-sized kernels with stride 2 are widely used in modern CNNs (Huang et al. 2017; Szegedy et al. 2015; Iandola et al. 2016; Krizhevsky, Sutskever, and Hinton 2012; Ma et al. 2018; Sandler et al. 2018). For example, in the early stage of ResNets, a 7 × 7 Conv with stride 2 and a 3 × 3 Pool with stride 2 are used. Further, in the downsampling operation in the residual block, a 1 × 1 Conv is used with stride 2, which subsamples only specific inputs and shuts off the flow at other locations.

Figure 2: When an odd-sized kernel is applied with stride 2, a checkerboard pattern appears.

Here, we would like to modify those problematic odd-sized kernels with stride 2. As ResNet and its variants have similar architectures, most can be modified with similar rules. Not all layers need to be modified. The operations to be modified are as follows: the 7 × 7 Conv with stride 2 and 3 × 3 Pool with stride 2 in the early stage, and the 1 × 1 Conv with stride 2 and 3 × 3 Conv with stride 2 across all residual blocks. We replace those kernels with even sizes such as 8 × 8 or 4 × 4. However, when replacing with a new kernel, the existing pre-trained weights are discarded. To construct an even-sized kernel while boosting training through pre-trained weights, we propose a kernel padding method. For the target odd-sized pre-trained weight, zero-padding is applied to the bottom and right sides to obtain an even-sized kernel (Figure 3). As the kernel is zero-padded, the operation is equivalent to the previous one. Accordingly, the pre-trained weights can still be used. Moreover, as the new zero-padded weights are trainable, during fine-tuning, they can be merged into the existing weights.

Figure 3: To construct an even-sized kernel while using pretrained weights, we propose a kernel padding method. An even-sized kernel is constructed by applying zero-padding to the bottom and right sides of the kernel. After fine-tuning, the added weights along with the existing weights can be properly trained.
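A minimal PyTorch sketch of kernel padding for convolutions is given below. How pooling layers, the padding argument, and output alignment are handled in the actual experiments is not fully specified above, so treat this as an illustrative reading rather than the authors' implementation.

```python
import torch
import torch.nn as nn

def kernel_pad(conv: nn.Conv2d) -> nn.Conv2d:
    """Replace an odd-sized strided conv (e.g. 7x7 or 3x3, stride 2) with an
    even-sized one (8x8 or 4x4) whose extra bottom/right weights start at
    zero. The copied pre-trained weights are preserved, and the zero rows
    and columns remain trainable so they can merge in during fine-tuning."""
    k = conv.kernel_size[0]
    assert k % 2 == 1, "expects an odd-sized square kernel"
    new = nn.Conv2d(conv.in_channels, conv.out_channels, kernel_size=k + 1,
                    stride=conv.stride, padding=conv.padding,
                    groups=conv.groups, bias=conv.bias is not None)
    with torch.no_grad():
        new.weight.zero_()
        new.weight[:, :, :k, :k] = conv.weight    # pre-trained block, top-left
        if conv.bias is not None:
            new.bias.copy_(conv.bias)
    return new

# Example: the ResNet stem, a 7x7 stride-2 conv, becomes an 8x8 stride-2 conv.
# model.conv1 = kernel_pad(model.conv1)
```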
We applied kernel padding to the ResNets pre-trained on ImageNet and then fine-tuned them on the Caltech-101 dataset. The training details used in fine-tuning are the same as in the experiments in Section 3. Now the effective receptive field of our architecture has no checkerboard pattern (Figure 1). The degree of the pixel sensitivity imbalance can be measured through the smoothness of the effective receptive field R_xy of the output y. Here, we define two indices, the first-order imbalance index L_1 and the second-order imbalance index L_2:

L_1 = mean_(x,y) [ |R_(x+1,y) - R_(x,y)| + |R_(x,y+1) - R_(x,y)| ],
L_2 = mean_(x,y) [ |R_(x+1,y) - 2R_(x,y) + R_(x-1,y)| + |R_(x,y+1) - 2R_(x,y) + R_(x,y-1)| ].

In other words, we pass the effective receptive field through difference filters and compute the spatial average to evaluate its local variation and curvature. The smaller these values are, the more locally smooth the effective receptive field is. Conversely, the larger the value, the greater the imbalance. Using these two indicators, we evaluated the degree of pixel sensitivity imbalance before and after applying kernel padding (Figure 4). Existing ResNets show large L_1 and L_2, which indicates that pixel sensitivity imbalance is significant even in adjacent pixels. After applying the kernel padding, the imbalance decreased across all ResNets.

Figure 4: To quantitatively evaluate pixel sensitivity imbalance, we measured the two indices. In all the target architectures, pixel sensitivity imbalance is reduced after kernel padding.
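Under the reconstruction of L_1 and L_2 given above (absolute first and second differences averaged over space), the indices are a few lines of numpy; the exact normalization is our assumption.

```python
import numpy as np

def imbalance_indices(R):
    # L1: mean absolute first difference (local variation) along both axes.
    L1 = np.abs(np.diff(R, n=1, axis=0)).mean() \
       + np.abs(np.diff(R, n=1, axis=1)).mean()
    # L2: mean absolute second difference (local curvature) along both axes.
    L2 = np.abs(np.diff(R, n=2, axis=0)).mean() \
       + np.abs(np.diff(R, n=2, axis=1)).mean()
    return L1, L2

# Compare the two accumulated output-level ERFs (hypothetical arrays):
# L1_b, L2_b = imbalance_indices(R_before_kp)
# L1_a, L2_a = imbalance_indices(R_after_kp)   # expected: both decrease
```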
Discussion
Is pixel sensitivity imbalance a bug or a feature? In other words, if the pixel sensitivity imbalance is reduced, can the superiority of the architecture be guaranteed? For both perspectives, we provide some conjectures.

Pros: Pixel Sensitivity Imbalance is a Feature. Even with pixel sensitivity imbalance, ResNets have been widely used in various vision tasks so far. Although some pixels are partially dead, they are not entirely dead. The difference between strong and weak pixels is a matter of degree of contribution, and they are all involved in the output. The phenomenon of blocking the flow of certain inputs is not so unfamiliar. For example, consider Dropout (Srivastava et al. 2014) or DropBlock (Ghiasi, Lin, and Le 2018). They improve the performance of neural networks by dropping some neurons or inputs. For understanding the global context of an image, it is fine if some trivial input is missing. Furthermore, dropping some input induces the CNN to understand the image in a different way, introducing a regularization effect. Further, pixel sensitivity imbalance can be interpreted as rescaling a given image according to strong and weak pixels. As the image is rescaled pixel-wise, when a translated image is given, it is recognized as a completely different image. Accordingly, pixel sensitivity imbalance increases image diversity, thereby boosting the effect of data augmentation. Here, we examined how pixel sensitivity imbalance affects the performance in a general vision task. As kernel padding reduces pixel sensitivity imbalance, we compared the performance of ResNet and its variants before and after applying kernel padding. We performed fine-tuning on the Caltech-101 dataset, and the experimental details such as the training method and data augmentation are the same as in Section 3. For each model, we measured the average test accuracy from three experiments (Table 3). We observed that the performance rather decreased after kernel padding. This means that pixel sensitivity imbalance is not a bug for a general image classification task but is a feature that improves performance. Therefore, reducing pixel sensitivity imbalance does not guarantee architectural superiority.

Table 3 (Model / Before KP / After KP / Diff): To investigate whether pixel sensitivity imbalance helps training or not, we compared the test accuracy (%) before and after applying kernel padding. After kernel padding, the performance rather decreased. Thus, for general vision tasks, pixel sensitivity imbalance is not a bug, and it is a feature.
Cons: Pixel Sensitivity Imbalance is a Bug. Nevertheless, pixel sensitivity imbalance gives rise to several potential problems. First, consider the saliency methods that visualize the inner behavior of a neural network. Many saliency methods have investigated important pixels based on gradients (Simonyan, Vedaldi, and Zisserman 2013; Springenberg et al. 2015; Smilkov et al. 2017; Sundararajan, Taly, and Yan 2017; Shrikumar, Greenside, and Kundaje 2017). However, saliency methods do not reflect pixel sensitivity imbalance. In other words, the gradient-based saliency map is affected by the checkerboard pattern of the neural network. Thus, the gradient-based saliency method is only suitable for examining the pixels that contribute to the output of the neural network and is unsuitable for evaluating the intrinsic importance of a pixel. As mentioned earlier, since pixel sensitivity imbalance introduces pixel-wise rescaling, the translated image is perceived as a completely different image. This increases the data augmentation effect but worsens the translation invariance of the CNN (Zhang 2019; Azulay and Weiss 2019; Cohen and Welling 2016). In a practical application, for example, if a 1-pixel translated image produces a different result, the vision system would be considered unreliable and unstable. Moreover, pixel sensitivity imbalance implies a positional difference for capturing a perturbation. Consider the one-pixel attack (Su, Vargas, and Sakurai 2019), which attempts an adversarial attack to invert the output by perturbing a certain pixel. Here we can additionally exploit the fact that strong pixels are generally more sensitive. If we construct an attack strategy that focuses more on strong pixels, we can attack the neural network more easily.

Here, we provide a mathematical formulation. Consider output y, the logit before the softmax layer. For a CNN with ReLU-like activations, we can represent the output using a piece-wise linear function (Srinivas and Fleuret 2018; Simonyan, Vedaldi, and Zisserman 2013): y = ∑_(x,y,z) (∂y/∂I_xyz) · I_xyz + C, where ∂y/∂I_xyz and C are evaluated at the specific image I. Here, we approximate them using their means over images, which results in a fixed linear model. We define ỹ(I), which is the output from the fixed linear model. Now, assume that we put a perturbation ε on I_XYZ. Then, ∆ỹ ≈ R_XY · ε. (12) If pixel sensitivity imbalance exists, R_XY differs depending on the (X, Y). Thus, even if the same amount of ε is applied, ∆ỹ varies depending on where the perturbation is applied. For example, if we put a perturbation on a strong pixel, the output can be significantly affected. In addition, Eq. 12 implies that when pixel sensitivity imbalance exists, it may be difficult to distinguish whether the change in output is due to the magnitude of the perturbation or the position of the perturbation. Then, if perturbations with different magnitudes are applied at random locations, can a CNN distinguish the magnitudes of the perturbations? Further, if the perturbation magnitude and the position are also randomly varied every time, and only the average of the perturbation magnitude has a difference, it will be quite a challenging problem. However, these problems are commonly encountered in practical vision tasks.
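The consequence of Eq. 12, as reconstructed above, can be seen with a toy numeric example (entirely synthetic): freeze the sensitivity map R, apply the same ε at a strong and at a weak pixel, and the surrogate output moves by very different amounts, while a 10x larger ε at the weak pixel is indistinguishable from the small ε at the strong one.

```python
import numpy as np

# Synthetic sensitivity map with a checkerboard-like imbalance:
# strong pixels (1.0) on one parity, weak pixels (0.1) elsewhere.
R = np.full((224, 224), 0.1)
R[::2, ::2] = 1.0

eps = 0.5                            # identical perturbation magnitude
dy_strong = R[0, 0] * eps            # response at a strong pixel -> 0.5
dy_weak = R[1, 1] * eps              # response at an adjacent weak pixel -> 0.05

# Position dominates: same eps, 10x different response.
print(dy_strong / dy_weak)           # 10.0

# Magnitude and position are confounded: a 10x larger perturbation at the
# weak pixel produces the same response as the small one at the strong pixel.
print(np.isclose(R[1, 1] * (10 * eps), dy_strong))   # True
```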
Here, we propose a micro-object classification task. The templates are images from the Caltech-101 dataset. First, we select a random 8 × 8 region within the template. After changing the color of the selected area to RGB = (0, 0, 0), we label the image as class A. In the same way, for class B, we select a random area, but replace it with RGB = (255, 0, 0). As such, we put a micro-object at a random location in the image to perform a binary classification task. In this task, not only the position of the perturbation but also the magnitude changes every time. Here, when pixel sensitivity imbalance exists, it may be difficult for the CNN to capture the difference in perturbation magnitude between the two classes. In contrast, suppose the pixel sensitivity imbalance is reduced by kernel padding. In that case, as the influence of the random position decreases, the change in the magnitude of perturbation can be more easily captured. Experimental details such as the training method and data augmentation are almost the same as in Section 3. Here, to better see the intrinsic architectural differences, we did not use pre-trained weights. The number of epochs was set to 50. The observed training curve is shown in Figure 5. Initially, the test accuracy was around 50%, and the difference in perturbation was not captured. After a certain epoch, the test accuracy increased rapidly, and the difference in the micro-objects was captured with an accuracy of more than 99%. Here, the existing model without kernel padding required more epochs to capture the perturbation difference. In contrast, the model to which kernel padding was applied captured the perturbation difference faster. To verify this more strictly, we measured the number of the epoch at which the test accuracy first exceeded 90%. If it did not exceed 90% within 50 epochs, it was evaluated as 50 epochs. For ResNet-101 and its variants, five experiments were performed, and the average of the measured number of epochs was summarized (Table 4; columns: Model, Before KP, After KP, Diff). Even in the same training environment, after kernel padding, the difference in the micro-objects was captured 13-18 epochs faster. Thus, for some special tasks, pixel sensitivity imbalance is harmful to training.

Figure 5: Training curve before and after applying kernel padding for the micro-object classification task. After kernel padding, ResNets capture the small perturbation difference more easily and faster.

Conclusion
In this study, we investigated the behaviors of CNNs using effective receptive fields. First, we investigated the size of the receptive field. Contrary to popular belief, we found that the classification accuracy is not proportional to the size of the receptive field. In addition, we observed that the size of the effective receptive field saturates to a certain level even if the CNN becomes deeper. These observations suggest that we need to reconsider when controlling the size of the receptive field. Second, the pixels contributing to the output were investigated through the effective receptive field of the output. We discovered that in modern ResNets, the contribution to the output is different for each pixel. It was identified that the cause of this pixel sensitivity imbalance lies in the use of an odd-sized kernel with stride 2. To solve this, kernel padding was proposed. We quantitatively evaluated pixel sensitivity imbalance through two indices and found that pixel sensitivity imbalance decreases after kernel padding. We discussed that although the pixel sensitivity imbalance is a helpful feature for general vision tasks, it is a harmful bug for some tasks. These behaviors of CNNs should be understood and considered by practitioners.
24-h movement behaviours in Spanish youth before and after 1-year into the covid-19 pandemic and its relationship to academic performance
Most studies have shown a decline in adherence to the 24-Hour Movement Guidelines because of the Covid-19 lockdown. However, there is little evidence regarding changes in these guidelines 1-year into the pandemic and their possible impact on academic performance. The study aims were: (1) to examine the possible changes in the 24-Hour Movement Guidelines for youth (i.e., at least 60 min per day of moderate-to-vigorous physical activity, ≤ 2 h per day of recreational screen time, and 9 to 11 h of sleep per day for children and 8 to 10 h for adolescents) before and 1-year into the Covid-19 pandemic, and (2) to examine the possible changes in the relationship between 24-Hour Movement Behaviours (physical activity, screen time, and sleep duration) and academic performance before and 1-year into the Covid-19 pandemic. This is a repeated cross-sectional study of two different samples of young Spanish students at different times. Firstly, a total of 844 students (13.12 ± 0.86; 42.7% girls) completed a series of valid and reliable questionnaires about physical activity levels, recreational screen time, sleep duration and academic performance before the Covid-19 pandemic (March to June 2018). Secondly, a different sample of 501 students (14.39 ± 1.16; 55.3% girls) completed the same questionnaires 1-year into the Covid-19 pandemic (February to March 2021). Adherence to all three 24-Hour Movement Guidelines was significantly lower 1-year into the Covid-19 pandemic (0.2%) than before the pandemic (3.3%), while adherence to none of these three recommendations was significantly higher 1-year into the Covid-19 pandemic (66.3%) than before the pandemic (28.9%). The positive relationship between physical activity levels and academic performance was no longer significant 1-year into the Covid-19 pandemic (β = − 0.26; p < 0.001). 1-year into the Covid-19 pandemic, the relationship between recreational screen time (β = − 0.05; p > 0.05) and sleep duration (β = 0.05; p < 0.001) with academic performance did not change compared to pre-pandemic. The results suggest that 24-Hour Movement Behaviours worsened among young people 1-year into the Covid-19 pandemic compared to the pre-pandemic period. Moreover, the benefits associated with physical activity in terms of academic performance seem to have disappeared because of the Covid-19 pandemic. Therefore, there is a public health problem that requires priority and coordinated action by schools, policy makers, and researchers to mitigate the adverse effects of the pandemic on 24-Hour Movement Behaviours.

In Spain, the Covid-19 outbreak led to a strict lockdown (March to May 2020), limiting people's mobility (except for work and essential activities such as going to the supermarket or doctor), and forcing the closure of many institutions such as schools, high schools, and kindergartens. This also resulted in the cessation of organised sports and recreational activities and limited access to outdoor places such as playgrounds and parks 3,4. Restrictions or closure measures of Covid-19 meant prohibiting children and adolescents from leaving home for 6 or more weeks at a time, as well as replacing face-to-face lessons with home schooling and online learning activities 5. A decrease in Covid-19 cases in Spain in the late spring and summer months led to a reduction in containment measures.
In September 2020, most schools and high schools in Spain reopened their doors, using different safety measures such as 1.5 m distancing, not sharing materials, and the use of face masks. Sports and recreational activities were also resumed with new safety protocols 6. Therefore, while all these public health restrictions were necessary to reduce the spread of Covid-19, the health measures adopted by Spain and other countries globally have been negatively related to physical, psychosocial, and cognitive health 7,8. Several authors have suggested that promoting a healthy lifestyle could prevent the adverse health effects of Covid-19 8. Particularly, there is clear evidence that high physical activity levels 9, low recreational screen time 10, and optimal sleep duration 11 contribute to overall health and youth development. Meeting these three 24-Hour Movement Guidelines for children and adolescents (at least 60 min per day of moderate-to-vigorous physical activity, ≤ 2 h per day of recreational screen time, and 9 to 11 h of sleep per day for children and 8 to 10 h for adolescents) could even maximize those health benefits in young people 12. However, the systematic review and meta-analysis conducted by Tapia-Serrano et al. 13 reported that only 2.68% of adolescents met all three 24-Hour Movement Guidelines, while 28.59% did not meet any recommendation. The limited number of studies carried out in Spain indicated that between 1.7% and 5.4% of adolescents met all three recommendations, while between 8.7% and 10.2% did not comply with any recommendation 14,15. The scoping review conducted by Paterson et al. 16 showed a decline in physical activity levels, an increase in recreational screen time, and irregular sleep patterns, particularly among adolescents, because of Covid-19-related restrictions. Particularly, in Spain, there were only three studies that compared the 24-Hour Movement Guidelines in children and adolescents before and during the Covid-19 closure 17-19. The results of these investigations showed negative effects of the Covid-19 closure on meeting physical activity recommendations (34.6% to 60.0% before closure vs. 26.5% to 51.0% during closure) and sedentary behaviour recommendations, especially recreational screen time guidelines (2.5% to 5.0% before closure vs. 1.8% to 2.4% during closure), but not on sleep duration guidelines (57.3% to 84.4% before closure vs. 66.6% to 84.8% during closure). Nevertheless, most of the studies included in this scoping review examined changes in the 24-Hour Movement Guidelines before and during the Covid-19 lockdown 16. To our knowledge, there is only one study that has examined trends in the 24-Hour Movement Guidelines before and approximately 1-year into the Covid-19 pandemic 20. In this study, there were significantly fewer adolescents meeting the three 24-Hour Movement Guidelines during the autumn of 2020 than before Covid-19 (5.5% vs. 1.1%). In addition, a significantly higher number of adolescents did not comply with any of the three recommendations during the autumn of 2020 than before Covid-19 (50% vs. 17.1%). Specifically, a decrease in physical activity and sleep duration recommendations was identified, while there was no change in screen time guidelines 20. However, these findings are not extensible to the rest of the world, as they only looked at children and adolescents in Montreal (Canada). Therefore, further research is required to examine the long-term impact of Covid-19 on the 24-Hour Movement Guidelines of youth 16.
Moreover, a previous systematic review showed a negative effect of school closures on academic performance 21. Given that the adoption of a healthy lifestyle has been positively associated with brain development processes, cognitive function, and academic performance 22,23, one would expect that the decline in the 24-Hour Movement Guidelines would have negatively affected academic performance. Particularly, previous research has shown that higher physical activity levels 9,24, lower recreational screen time 25, and optimal sleep duration 11 are positively and independently related to academic performance. To the best of our knowledge, no previous studies have examined the possible changes in the relationship between these 24-Hour Movement Behaviours (i.e., physical activity, recreational screen time, and sleep duration) and academic performance before and 1-year into the Covid-19 pandemic and, therefore, further research is also required. It is important to know whether this relationship may have been altered by the Covid-19 pandemic, as, for example, the type of physical activity may be different (e.g., safe distance, use of face masks, avoidance of sharing materials, etc.) or the quality of sleep may have been impaired. Therefore, perhaps the relationship between 24-Hour Movement Behaviours and academic performance could have been altered because of the Covid-19 pandemic. Thus, the first aim of this repeated cross-sectional study was to compare the 24-Hour Movement Guidelines, separately and together, before (T1; March to June 2018) and 1-year into the Covid-19 pandemic (T2; February to March 2021) in two different subsamples of adolescents. Consistent with previous studies 17-19,26 that have shown a decline in adherence to the 24-Hour Movement Guidelines during the Covid-19 closure, adherence is also expected to be lower 1-year into the Covid-19 pandemic. The second objective was to examine whether the relationship between 24-Hour Movement Behaviours and academic performance was different before and 1-year into the Covid-19 pandemic. Because a decline in adherence to the 24-Hour Movement Guidelines has been observed 21, the first hypothesis of the study was that adherence to these recommendations declined 1-year into the Covid-19 pandemic. With regard to the second hypothesis, because Covid-19 may have altered the 24-Hour Movement Guidelines 16, it is expected that the relationship with academic performance may have changed 1-year into the Covid-19 pandemic.

During this period (the third state of alert, in force from 9th November 2020 until 9th May 2021 in Spain), several measures were put in place to prevent the spread of the virus: the movement of people in public spaces was limited between 11:00 PM and 6:00 AM; the capacity of public and/or sporting venues (e.g., gyms, sports centres) was limited; playgrounds in the parks were closed; and several federated sports activities were suspended (See Fig. 1 for more detail) 27,28. All data were collected in Extremadura, a region located in southwestern Spain. Both groups were similar in terms of age, sex, and socio-economic status, as all schools included belonged to neighbourhoods with similar socio-demographic characteristics 29. This study was carried out in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of the University of Extremadura (89/2016).

Measures.
Socio-demographic data, physical activity, recreational screen time, sleep duration, and academic performance were measured during the two assessments in the two sub-samples.

Socio-demographic characteristics. Students self-reported their age (in years), sex (male/female), weight and height. The body mass index (BMI) of each participant was calculated and classified using Cole's international cut-off points based on sex and age 30.

Physical activity. Physical activity was assessed using the Spanish version of a self-reported questionnaire called the Physical Activity Questionnaire for Adolescents (PAQ-A), validated by Martínez-Gómez et al. 31. This questionnaire has been shown to be a valid (the PAQ-A correlation with total physical activity [r = 0.39] and with moderate and vigorous physical activity [r = 0.34] assessed by accelerometer) and reliable (α = 0.79 and Intraclass Correlation Coefficient [ICC] = 0.71) instrument to assess physical activity levels in Spanish youth aged 12-17 years. In the present study, the Cronbach's alpha of this scale was 0.87. The scale consists of nine items assessing participation in physical activity during the last seven days. Adolescents report the frequency of physical activity participation in a list of activities and settings such as physical education classes, school recess, at lunchtime, right after school, in the evening, and during the last weekend. Each response is scored from 1 to 5 using a Likert-type scale. The average of these scores gives the physical activity index score.

Recreational screen time. Recreational screen time was assessed using the Youth Leisure-Time Sedentary Behaviour Questionnaire (YLSBQ), validated by Cabanas-Sánchez et al. 32. The questionnaire is a valid (r = 0.36) and reliable (ICC = 0.75) measure to assess sedentary recreational screen time among Spanish young people aged 8 to 18 years 32. Students self-reported the time spent on television, video games, computers, and mobile phones for both weekdays and weekend days. An average was calculated for each screen-based behaviour using a 5:2 weekday-to-weekend ratio (e.g., [Daily TV viewing on weekdays × 5 + Daily TV viewing on weekend days × 2]/7). The average daily recreational screen time was calculated by summing the different daily screen-based behaviours.

Sleep duration. Sleep duration was measured using a Spanish version of the Pittsburgh Sleep Quality Index, validated in 2003 by Macías and Royuela 33. Students self-reported the time they usually go to bed and wake up on both weekdays and weekends. These questions have been shown to be a valid (r = 0.45-0.90) and reliable (ICC = 0.71-0.99) measure to assess sleep duration in youths 34. The average sleep duration was calculated using the following formula: ([Sleep duration on weekdays × 5] + [Sleep duration on weekend days × 2])/7.

Adherence to the 24-Hour Movement Guidelines. Adherence to the 24-Hour Movement Guidelines established by Tremblay et al. 35 was also calculated in this study. To meet each of the three recommendations, children and youth aged 5-17 years should accumulate at least 60 min of moderate-to-vigorous physical activity per day (in the present study, we used the cut-off point of 2.75 on the PAQ-A 36), spend less than two hours of recreational screen time per day, and sleep between 9 and 11 h per day (children aged 5-13) or 8 to 10 h per day (adolescents aged 14-17) 35.
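The 5:2 weekday/weekend averaging and the adherence criteria above translate directly into a few lines of code. The sketch below is a minimal illustration; the function names and example values are hypothetical, while the PAQ-A cut-off of 2.75 and the guideline thresholds are the ones stated above.

```python
# Minimal sketch of the 5:2 weekday/weekend weighting and the
# 24-Hour Movement Guidelines adherence flags described above.
# Function names and the example values are illustrative only.

def weekly_average(weekday_value: float, weekend_value: float) -> float:
    """Average a daily behaviour over the week with a 5:2 ratio."""
    return (weekday_value * 5 + weekend_value * 2) / 7

def meets_guidelines(paq_a_index: float, screen_h: float, sleep_h: float,
                     is_adolescent: bool) -> dict:
    """Flag adherence to each of the three recommendations.
    The PAQ-A cut-off of 2.75 is used as the physical activity criterion."""
    sleep_low, sleep_high = (8, 10) if is_adolescent else (9, 11)
    return {
        "physical_activity": paq_a_index >= 2.75,
        "screen_time": screen_h <= 2.0,
        "sleep": sleep_low <= sleep_h <= sleep_high,
    }

# Example: 1.9 h of screen time on weekdays, 3.5 h on weekend days
screen = weekly_average(1.9, 3.5)    # ≈ 2.36 h/day
sleep = weekly_average(8.5, 9.5)     # ≈ 8.79 h/day
flags = meets_guidelines(2.9, screen, sleep, is_adolescent=True)
print(screen, sleep, flags)
```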
Academic performance. Academic performance was assessed through school records at the end of the academic year and was based on four subjects: first language (Spanish), second language (English), mathematics, and physical education. The grade point average (GPA) was then calculated as the average of the scores in these four subjects. Previous studies have used these subjects as indicators to assess academic performance 37,38.

Procedure. The research team contacted the principals and teachers of the schools to conduct this study. Parents or legal guardians were informed of the purpose of the study by letter prior to data collection, and written informed consent was required from both participants and their parents/legal guardians. Only students who returned written informed consent signed by their parents and themselves participated in this study. The paper-and-pencil questionnaire was administered in a regular classroom by one member of the research team to ensure the same protocol was followed. The average time taken to complete the set of questionnaires was approximately 20 min.

Data analysis. As a first step, descriptive statistics were used to examine the average daily time spent in physical activity, recreational screen time, and sleep duration, as well as the adherence to the 24-Hour Movement Guidelines, both separately and for all possible combinations. Sex differences were tested using Student's t-test and the Chi-squared test for continuous and categorical variables, respectively. A linear regression analysis including the 24-Hour Movement Behaviours and the sex*predictor interaction was conducted to examine the interaction of sex and each of the movement behaviours (i.e., physical activity, recreational screen time, and sleep duration). As no significant interaction was found between sex and physical activity, recreational screen time, or sleep duration in relation to academic performance (p > 0.01), all analyses were performed on the entire sample. For the main analysis, mixed models were used to examine the association between the 24-Hour Movement Behaviours and academic performance. Three separate models, one for each movement behaviour (i.e., physical activity, recreational screen time, and sleep duration), were estimated. For all models, time as a within factor (before vs. 1 year after the onset of the Covid-19 pandemic) and a predictor (i.e., physical activity, recreational screen time, or sleep duration) were included, as well as the time*predictor interaction. A significant interaction effect indicates that the association between a movement behaviour and academic performance differed across time (i.e., before vs. 1 year after the onset of the Covid-19 pandemic). In all models, age, sex, and BMI were included as covariates. All analyses were performed using SPSS version 23.0 for Windows (IBM, Armonk, New York). The level of significance was set at p < 0.05.

Results. Table 1 shows the participants' characteristics, 24-Hour Movement Behaviours, and academic performance before (T1) and 1 year after (T2) the onset of the Covid-19 pandemic. Overall, adolescents evaluated before the Covid-19 pandemic reported higher physical activity levels, longer sleep duration, and higher academic performance, as well as lower recreational screen time, compared to adolescents assessed 1 year into the Covid-19 pandemic (all p < 0.05). In addition, a significantly larger proportion of adolescents examined before the Covid-19 pandemic met the 24-Hour Movement Guidelines for physical activity, recreational screen time, and sleep duration, independently and all together.
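The time*predictor model described in the Data analysis section can be sketched as follows. This is a minimal illustration using ordinary least squares as a simplified stand-in for the mixed models reported above; the DataFrame `df` and its column names (gpa, time, paq_a, age, sex, bmi) are hypothetical.

```python
# Minimal sketch of the time*predictor interaction model described above,
# fitted here with OLS via the statsmodels formula API as a stand-in for
# the mixed models used in the paper. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def zscore(s: pd.Series) -> pd.Series:
    return (s - s.mean()) / s.std(ddof=0)

def fit_interaction_model(df: pd.DataFrame, predictor: str):
    d = df.copy()
    d["z_pred"] = zscore(d[predictor])   # e.g., the PAQ-A index
    d["z_age"] = zscore(d["age"])
    d["z_bmi"] = zscore(d["bmi"])
    # time: 0 = before Covid-19 (T1), 1 = one year after (T2)
    model = smf.ols(
        "gpa ~ time * z_pred + z_age + C(sex) + z_bmi", data=d
    ).fit()
    # Slope before Covid-19 = coefficient on z_pred;
    # slope after = z_pred coefficient + time:z_pred coefficient.
    return model

# model = fit_interaction_model(df, "paq_a")
# print(model.summary())  # the time:z_pred row tests the change in slope
```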
Table 2 shows the associations between the 24-Hour Movement Behaviours and academic performance before and 1 year after the onset of the Covid-19 pandemic. In each model, the intercept, the covariates (i.e., sex, age, and BMI), the within factor (time: before vs. 1 year after the Covid-19 pandemic), the predictor (i.e., physical activity, recreational screen time, or sleep duration) and the time*predictor interaction effects were included. Covariates and predictors were included as z-scores. The intercept represents an estimate of academic performance when all predictors are zero before the Covid-19 pandemic. The time rows represent the difference in academic performance when comparing before and 1 year after the onset of the Covid-19 pandemic. The predictor row shows the degree of association between the predictor (i.e., physical activity, recreational screen time, or sleep duration) and academic performance before the Covid-19 pandemic. Finally, the interaction rows represent the difference between the slope after closure and the slope before closure (i.e., the effect of closure on the association between the predictors and academic performance).

Figure 2 shows the independent association of physical activity, recreational screen time, and sleep duration with academic performance before and 1 year after Covid-19. After controlling for the effect of the covariates (age, sex, and BMI) (see Fig. 2, graph a), physical activity was positively and significantly associated with academic performance (β = 0.24; p < 0.001) among adolescents assessed before the Covid-19 pandemic: for each one-unit increase in physical activity, the estimate of academic performance would increase by 0.24. However, the association became negative (β = 0.24 + (−0.40) = −0.16; p < 0.001) among adolescents evaluated 1 year after Covid-19; that is, physical activity was negatively associated with academic performance, and the difference in slopes was statistically significant (p < 0.001).

Discussion

The first aim of this study was to examine possible changes in the 24-Hour Movement Guidelines for adolescents before (T1) and 1 year after (T2) the onset of the Covid-19 pandemic. The second aim was to examine possible changes in the relationship between the 24-Hour Movement Behaviours and academic performance before and 1 year after the onset of the pandemic. The main findings of this study are as follows: (1) the 24-Hour Movement Behaviours appear to have worsened among young people 1 year into the Covid-19 pandemic compared to pre-pandemic; (2) adherence to the three 24-Hour Movement Guidelines seems to be significantly lower among Spanish adolescents 1 year into the pandemic, particularly for the sleep duration recommendations; (3) the positive relationship between physical activity and academic performance seems to have disappeared 1 year after the onset of the Covid-19 pandemic; and (4) the non-significant relationship between recreational screen time and academic performance and the positive relationship between sleep duration and academic performance do not seem to have changed 1 year after Covid-19. With respect to the first study hypothesis, it was postulated that adherence to the 24-Hour Movement Guidelines would be lower in adolescents assessed 1 year into the Covid-19 pandemic. Before the Covid-19 pandemic, adherence to the recommendations was 35.2% for physical activity, 13.2% for recreational screen time, and 50.5% for sleep duration among Spanish adolescents.
Only 3.3% met all three 24-Hour Movement Guidelines, while 28.9% did not meet any of the three recommendations. These results are worrying because of the low compliance of Spanish adolescents with the recommendations prior to the Covid-19 pandemic. However, these values seem to have worsened 1 year into the Covid-19 pandemic among young people, as significant changes were found in physical activity levels (−0.17 in the physical activity index), recreational screen time (+0.93 h/day) and sleep duration (−0.56 h/day). Similarly, fewer adolescents seem to meet the recommendations for physical activity (23.4%), recreational screen time (9.2%), and sleep duration (4.0%). Only 0.2% reported meeting all three 24-Hour Movement Guidelines, while 66.3% did not meet any of the three recommendations. The only study published to date that examined changes in the 24-Hour Movement Guidelines before and 1 year into the Covid-19 pandemic is consistent with our results 20. However, in contrast to Dubuc's study 20, our study also found significantly lower adherence to the screen recommendations among young people. The decrease in physical activity and increase in recreational screen time could partly be explained by the Covid-19-related restrictions in Spain during the second data collection (e.g., social restrictions, "stay-at-home" recommendations, closure of structured activities, etc.). Although these public health restrictions may have reduced the spread of Covid-19, they may also have negatively affected physical activity levels and recreational screen time. In particular, in this study, the sleep recommendations seem to be the most affected 1 year after the onset of the Covid-19 pandemic. This could be explained by the increase in recreational screen time, given the fixed 24-h cycle 39. It is also possible that the anxiety and depression caused by Covid-19 may have affected sleep quality, which may have had a downstream effect on adolescents' sleep duration 40,41. Therefore, this research suggests that the direct and indirect effects of the Covid-19 pandemic negatively affected physical activity, recreational screen time, and sleep duration 1 year into the pandemic. Regarding the second hypothesis, it was postulated that the relationship between the 24-Hour Movement Behaviours and academic performance could have changed 1 year after the onset of the Covid-19 pandemic. The results showed a positive relationship between physical activity and academic performance before the Covid-19 pandemic, whereas this relationship became negative among adolescents evaluated 1 year after Covid-19. The positive effect of physical activity on the brain may be the result of several factors, such as increased cerebral blood flow, oxygen delivery to the brain, synaptic plasticity, and neurotransmitter secretion levels, resulting in increased levels of arousal, attention, and effort, which have a positive impact on cognitive task performance immediately after physical activity 42. However, restrictive measures during Covid-19 have increased stress and anxiety levels 43 and limited structured physical activities, which has led to a reduction in physical activity levels, especially in adolescents 16. The implementation of these restrictive measures, such as safe distancing, use of face masks, and avoidance of sharing materials, has encouraged new forms of and places for participation in physical activity, such as individual physical activity at home 16.
In this sense, it is likely that the lack of social interaction and enjoyment in these activities, or the fact that they were not performed outdoors, could have reduced the benefits of physical activity for academic performance 42,44. However, since the relationship between physical activity and academic performance 1 year into Covid-19 has not been studied in depth, further mixed-methods studies examining this relationship are needed. Although the amount of recreational screen time appears to have increased 1 year into the Covid-19 pandemic, there was no change in the relationship between recreational screen time and academic performance before and 1 year into the pandemic, which remained non-significant. These results suggest that recreational screen time does not seem to have affected young people's academic performance either before or 1 year into Covid-19. In line with our results, a systematic review with meta-analysis 25 found that the amount of time spent using screen-based devices was not associated with academic performance. Specifically, the systematic review with meta-analysis conducted by Adelantado-Renau et al. 25 found that television viewing and video game use were the only two screen-based behaviours negatively associated with academic performance. These authors suggest that the type of screen-based device assessed (e.g., TV, video games, computer, mobile phone, tablets, etc.), the purpose (e.g., social communication, online networking, gaming, etc.), and the context in which screen media are used (e.g., educational: doing homework, studying, etc.; or recreational: playing video games, etc.) may affect the relationship with academic performance 25. The fact that our study also included computer and mobile phone use within recreational screen time may explain the lack of association between these two variables. Finally, a positive relationship between sleep duration and academic performance was also found, both before and 1 year into the Covid-19 pandemic. It is important to note that previous studies have shown that both sleep quantity and quality are positively related to cognitive improvements such as better memory, attention, and executive control 45. In addition, longer sleep duration has been shown to have a positive effect on adolescents' ability to retain learned information and make it accessible in long-term memory 46. Given that most teachers in Spain use theoretical tests for grading students, it is possible that Spanish adolescents who get enough sleep are better able to retain information effectively and, consequently, perform better academically 47. Therefore, although in the present study sleep duration seems to have decreased 1 year into the Covid-19 pandemic, this decrease does not seem to have been sufficient to cause changes in its relationship with academic performance. The present study has some limitations that should be taken into account in future studies. First, this research uses a repeated cross-sectional design in two sub-samples. Although the two sub-samples were similar in terms of age, sex, and socioeconomic status, this design may lead to bias in the results. On the other hand, it does not allow us to examine the directionality of the relationship between the 24-Hour Movement Behaviours and academic performance. Future longitudinal studies are needed to reinforce the findings of this study.
Secondly, although Covid-19 has affected the entire world population, the restrictions have differed between countries. Therefore, the results found here cannot be extrapolated to other countries, and more studies examining changes in the 24-Hour Movement Guidelines 1-2 years into the Covid-19 pandemic are needed. Thirdly, although all the questionnaires used to measure the 24-Hour Movement Behaviours are valid and reliable, young people may have overestimated or underestimated the time spent on them. Future studies should use device-based measures to assess these three movement behaviours throughout the 24-h period. Fourthly, it was not possible to assess weight and height in the second subsample due to Covid-19 restrictions in the schools evaluated. In this regard, future studies should assess weight and height using an electronic scale with a measuring rod. Finally, the use of qualitative methodology could help to explore some of the reasons why adherence to the 24-Hour Movement Guidelines has declined among young people, and its relationship with academic performance. Despite these limitations, this study has some strengths. This is one of the first studies to examine the 24-Hour Movement Guidelines, separately and together, before and 1 year after the onset of the Covid-19 pandemic. In addition, this is the first study to compare the relationship between the 24-Hour Movement Behaviours and academic performance before and after the Covid-19 pandemic in adolescents. Finally, sex, age, and BMI were introduced as covariates in the analyses to avoid possible bias.

Conclusions

The results suggest that physical activity, recreational screen time, and sleep duration appear to have been negatively affected by the Covid-19 pandemic. In particular, the sleep recommendations seem to be the most affected movement behaviour 1 year after the onset of the pandemic. Furthermore, 1 year into the Covid-19 pandemic, the relationship between physical activity and academic performance was negative, while the relationships of recreational screen time and sleep duration with academic performance did not change compared to before Covid-19. The reopening of many schools, playgrounds, parks, and organised sports activities in Spain seems to have been insufficient to mitigate the negative consequences of the Covid-19 lockdown on the 24-Hour Movement Behaviours. Therefore, there is a serious public health problem that requires immediate and coordinated action by schools, policy makers, health practitioners, and researchers to mitigate the adverse effects of the pandemic on movement behaviours. In particular, it seems very important to design strategies to increase the duration and quality of sleep of young people. Similarly, reducing Covid-related restrictions on physical activity could have a positive impact on academic performance.

Data availability

The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
Fabrication of CeCl3/LiCl/CaCl2 Ternary Eutectic Scintillator for Thermal Neutron Detection

To date, 3He gas has been commonly used to detect thermal neutrons because of its high chemical stability and its low sensitivity to γ-rays, owing to its low density, together with its large neutron capture cross-section. However, the depletion of 3He gas prompts the development of a new solid scintillator for thermal neutron detection to replace 3He gas detectors. Solid scintillators containing 6Li are commonly used to detect thermal neutrons. However, they are currently used as single crystals only, and their 6Li concentration is defined by their chemical composition. In this study, 6Li-containing eutectic scintillators were developed. CeCl3 was selected as the scintillator phase because of its low density (3.9 g/cm3); high light yield (30,000 photons/MeV); and fast decay time with four components of 4.4 ns (6.6%), 23.2 ns (69.6%), 70 ns (7.5%) and >10 µs (16.3%), owing to the Ce3+ 5d-4f emission peak at approximately 360 nm. Crystals of the CeCl3, LiCl and CaCl2 ternary eutectic were fabricated by the vertical Bridgman technique. The grown eutectic crystals exhibited Ce3+ 5d-4f emission with a peak at 360 nm. The light yield was 18,000 photons/neutron, and the decay time had components of 10.5 ns (27.7%) and 40.1 ns (72.3%). Therefore, this work demonstrates optimization by combining a scintillator phase and a Li-rich matrix phase for high Li content, fast timing, high light yield and low density.

Introduction

Technologies using neutron sources and their detectors are used in many application fields, such as materials analysis, crystal chemistry, imaging of interior structures, well logging, monitoring of nuclear facilities and basic research in condensed matter physics. Recently, Li-ion batteries have garnered attention with the increasing focus on environmental protection, creating demand for thermal neutron detectors suitable for the internal non-destructive testing of Li-ion batteries [1,2]. To date, thermal neutron detectors using 3He gas have been used for many years due to their low density, low sensitivity to gamma-rays and large neutron capture cross-section. However, the recent increase in the demand for neutron detectors for national security purposes, together with the export restrictions on 3He gas of the USA, which is the largest producer of 3He gas, resulted in a significantly higher demand for 3He gas than its supply [3]. Hence, the development of solid-state scintillator materials for thermal neutron detection to replace 3He gas counters has attracted considerable interest, and research is being conducted on alternative neutron detector technologies [4-6]. Prospective materials for solid-state neutron scintillators contain 6Li because of its high thermal neutron capture cross-section, producing the 6Li(n, α)3H nuclear reaction with a high Q value of 4.8 MeV (Equation (1)) [7]. In addition, this high Q value can provide 6Li-containing scintillators with a high light yield for thermal neutrons.
Therefore, in previous studies, 6Li-based single-crystalline scintillators, such as LiCaAlF6 scintillators doped with Ce3+ and Eu2+, have been developed and used in practice, demonstrating good scintillation properties under neutron excitation from a 252Cf source [8-12], with the reaction:

6Li + n → 3H + α + 4.8 MeV (1)

However, as these solid-state scintillators are single crystals, the 6Li content is defined by their chemical compositions. Therefore, we developed eutectic crystals of fluorides, chlorides, bromides, iodides and other eutectic scintillators, such as 6LiF/CaF2 [13], 6LiF/6LiYF4 [14], 6LiF/6LiGdF4 [15], 6LiF/SrF2 [16], 6LiF/LaF3 [17], 6LiCl/CeCl3 [18], 6LiCl/Li2SrCl4 [19], 6Li2SrCl4/6LiSr2Cl5 [20], 6LiCl/BaCl2 [21], 6LiBr/CeBr3 [22], 6LiBr/ [23] and 6LiSrI3/LiI [24]. The schematic for the use of eutectic materials in neutron scintillator applications is shown in Figure 1. In the Li-containing phase, only the incident neutrons were converted to secondary ionized particles by Equation (1). A certain number of α-rays passed through the small-sized, Li-containing phase to reach other phases. Subsequently, the α-ray is converted to scintillation light in the scintillator phase. Thus, the scintillator phase can be considered as a luminescent material excited by α-rays. Furthermore, the most prominent advantage of eutectic scintillators is the ability to increase their 6Li content, unlike single crystals, thereby enhancing their ability to capture thermal neutrons.

In this study, the ternary eutectic of CeCl3/6LiCl/CaCl2 was developed using the vertical Bridgman (VB) technique as a candidate for new thermal neutron detector materials. CeCl3 single-crystal scintillators have a low density of 3.9 g/cm3, a fast decay time of approximately 25 ns owing to the Ce3+ 5d-4f emission peak at approximately 360 nm and good light yields of 30,000 photons/MeV under the irradiation of gamma rays [25]. Thus, CeCl3 was chosen as the scintillator crystal phase in this study. In a previous study, the growth rate of the eutectic crystal in 6LiCl/CeCl3, which is a eutectic scintillator using CeCl3 as the scintillator phase, was investigated to adjust the grain size of each phase, which is an important factor in eutectic scintillators for thermal neutron detection. In particular, an excessively large grain shape resulted in the loss of the energy released by the nuclear reaction 6Li(n, α)3H before reaching the scintillator phase.
Therefore, in this study, in addition to the scintillator phase selection, we focused on ternary eutectic crystals and attempted to reduce the grain size of each phase by creating scintillator crystals in ternary eutectic systems to efficiently transfer energy to the scintillator phase. The luminescence and radiation responses under the irradiation of thermal neutrons were also evaluated.

Crystal Growth

CeCl3, CaCl2 and 6Li-enriched (95%) LiCl powders (4N purity) were prepared as the initial raw materials. The CeCl3/6LiCl/CaCl2 eutectic was fabricated at the chemical composition ratio of 17.4 mol% CeCl3 : 62.7 mol% 6LiCl : 19.9 mol% CaCl2 [26]. The starting powders were weighed, mixed and placed in a 4-mm inner diameter quartz ampoule in a glove box filled with argon gas. The ampoule was then removed from the glove box and baked at 180 °C under ~10−1 Pa vacuum to eliminate water and air from the ampoule. At the end of the baking process, the ampoule was sealed. The ampoule was heated until the raw material melted, and then the eutectic was fabricated by the VB technique at a pulling rate of 0.2 mm/min. Details of the fabrication method were described in previous reports [18-23].

Structural Analysis of the Eutectic

Eutectic samples were cut and polished into 1- and 3-mm pieces with a wire saw. Moreover, 3-mm crystals were cut horizontally along the growth direction. Samples of 1-mm thickness in the circumferential and perpendicular directions were taken from the fabricated eutectic and mirror-polished. The eutectic structures on the transverse and vertical cross-sections were observed by backscattered electron imaging (BEI) using a Hitachi S3400N scanning electron microscope. X-ray diffraction (XRD) in the 2θ range of 10-90° was performed to identify the crystal phases in the eutectic with a Bruker D8 Discover diffractometer using a CuKα X-ray source at a tube current of 40 mA and an accelerating voltage of 40 kV.

Evaluation of the Luminescence and Radiation Responses

The radioluminescence (RL) spectra were obtained under X-ray excitation using an Andor Technology SR-163 spectrometer equipped with an Andor Technology iDus420-OE charge-coupled device (CCD) detector. In addition, the cathodoluminescence (CL) spectra were measured for each phase of the eutectic crystal using a Horiba JSM-7001F, MP-32M equipped with a photomultiplier tube (PMT) as the CL detector (R943-0, Hamamatsu, Shizuoka, Japan) and a CCD for the CL map (Synapse Plus BIUV, Horiba, Kyoto, Japan). The light yield was estimated from the pulse height spectra of the eutectic sample and a GS20 (Li-glass) standard with a light yield of 7000 photons/neutron under thermal neutron (252Cf) excitation at room temperature using a Hamamatsu R7600U-200 photomultiplier tube (PMT) at an operating voltage of 600 V. The output signal was fed via an ORTEC 572A shaping amplifier and a two-channel USB Wave Catcher module into a personal computer. The decay curve was obtained using the same PMT, with the output signal recorded using a Tektronix TD5032B digital oscilloscope.

Grown Crystal and Phase Identification

A CeCl3/6LiCl/CaCl2 ternary eutectic with a diameter of 4.0 mm and a length of 5.8 mm was fabricated by the VB technique. The fabricated eutectic in the quartz ampoule and a 1-mm thick polished circumferential wafer are shown in Figure 2. The polished wafer sample exhibited visible transparency along the pulling direction, as indicated by the visibility of the black cross placed behind it.
The powder XRD pattern of the fabricated eutectic is shown in Figure 3. Only the CeCl3 (hexagonal, P63/m, 176), LiCl (cubic, Fm-3m, 225) and CaCl2 (bipyramidal, Pnnm, 58) phases were confirmed. The BEI of the wafer samples in the vertical and transverse cross-sections are shown in Figure 4. The BEI results show the fiber-type eutectic structure of CeCl3/LiCl/CaCl2, which has the tendency to extend linearly along the crystal growth direction. The eutectic structure extended several tens to hundreds of micrometers in the growth direction, with large variations in the grain sizes. Although the eutectic structures were obscured by the polishing scratches, their length was recorded to be several hundred micrometers. It is known from the Hunt-Jackson law that grain size in eutectics is inversely proportional to the square root of the solidification speed [27]. The difference in grain size in the eutectic suggests that there were areas where latent heat was efficiently eliminated and the solidification rate was fast, and other areas where it was not. This is presumably due to the fact that unidirectional solidification did not follow the pulling direction and the solid-liquid interface was not flat. The uniformity of the grain size is thought to be improved by optimizing the eutectic growth conditions, such as the temperature gradient and pulling rate. From the powder XRD pattern and BEI results, the LiCl (black), CaCl2 (gray) and CeCl3 (white) phases were determined. In addition, the refractive indices of CeCl3, LiCl and CaCl2 at 380 nm were determined to be 2.20, 1.67 and 1.52, respectively, denoting a large difference [28-30]. The transparency of the sample wafers can be ascribed to the elongation of the eutectic structure, as similarly noted in previous reports [22,24]. Eutectics tend to take on a fiber-type structure when the volume ratio of the eutectic fiber phase is around 30% [22,24]. In this study, the theoretical volume ratio of CeCl3:6LiCl:CaCl2 was 31.9:37.8:30.3, which is considered optimal for the ternary eutectic system. Thus, a fiber-type eutectic was easily obtained.

Luminescence and Radiation Responses

The RL spectra of the CeCl3/6LiCl/CaCl2 eutectic sample under X-ray irradiation are shown in Figure 5. The fabricated eutectic sample showed the expected Ce3+ 5d-4f emission with a peak at approximately 360 nm, similar to that of the CeCl3 single crystal. This result is consistent with that of a previous report [25]. The CL spectra at the corresponding positions of the CeCl3, LiCl and CaCl2 phases in the wafer sample are shown in Figure 6. The electron beam irradiated all crystal phases, and the CL spectra in the wavelength range of 200-600 nm were obtained. The CL intensity on the CeCl3 phase was much higher than that of the other crystal phases. Thus, Ce3+ might have substituted into the CaCl2 phase; however, no Ce3+ 5d-4f emission was observed on the CaCl2 phase. This indicates that only the CeCl3 phase works as the scintillator phase, whereas only the 6LiCl phase works as the neutron reaction phase. The CL measurements showed additional emission peaks at around 455, 680 and 750 nm. This may be due to the influence of hydrates deposited on the sample surface. In the CL measurement, the sample was polished and put into the apparatus in air, and the hydrates were gradually deposited on the sample surface.
The emission is obtained from the sample surface layer, so the influence of hydrates on the surface is inevitable in the CL.

Figure 6. CL spectra on each crystal phase in the eutectic sample.

The pulse-height spectra of the fabricated eutectic and the GS20 standard under thermal neutron (252Cf) irradiation are shown in Figure 7. The light yield of the eutectic was approximately 250% of that of GS20. Considering that the quantum efficiency (QE) at the emission peak of 395 nm for GS20 is 40% and the QE at the emission peak of 360 nm for the eutectic is 41%, the light yield of the eutectic was calculated to be 18,000 photons/neutron. The scintillation decay curve of the eutectic excited by thermal neutrons (252Cf) is shown in Figure 8. The decay time was approximated using Equation (2), which is the sum of two exponents:

I(t) = y0 + A1·exp(−t/τ1) + A2·exp(−t/τ2) (2)

The ratio of each decay component was calculated as

Ii = Ai·τi / (A1·τ1 + A2·τ2) (3)

where y0 is the baseline; τ is the decay time; t is the time; Ai is a coefficient; and I is an intensity. The decay times converged to two components of 10.5 ns (27.7%) and 40.1 ns (72.3%).
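The two-exponential fit of Equation (2) and the component ratios of Equation (3) can be sketched numerically as follows. The waveform data here are synthetic placeholders generated from the reported components, not measured values.

```python
# Minimal sketch of fitting the two-exponential decay of Equation (2)
# and computing the component ratios of Equation (3). The waveform is
# synthetic: the true parameters are the reported 10.5 ns and 40.1 ns.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, y0, a1, tau1, a2, tau2):
    return y0 + a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0, 300, 600)                 # time axis in ns
true = decay(t, 0.01, 1.0, 10.5, 0.7, 40.1)  # illustrative amplitudes
rng = np.random.default_rng(0)
y = true + rng.normal(0, 0.005, t.size)      # add measurement-like noise

p0 = [0.0, 1.0, 5.0, 0.5, 50.0]              # rough initial guesses
popt, _ = curve_fit(decay, t, y, p0=p0)
y0, a1, tau1, a2, tau2 = popt

total = a1 * tau1 + a2 * tau2                # Equation (3) denominator
print(f"tau1 = {tau1:.1f} ns ({100 * a1 * tau1 / total:.1f}%)")
print(f"tau2 = {tau2:.1f} ns ({100 * a2 * tau2 / total:.1f}%)")
```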
The results of the BEI and powder XRD analyses confirmed the existence of LiCl, CaCl2 and CeCl3 phases. In the CeCl3/ 6 LiCl/CaCl2 eutectic, Li has a high content of up to 0.031 mol/cm 3 and small density of 2.98 g/cm 3 . The eutectic has an Ce 3+ 5d-4f emission peaking at 360 nm in the CeCl3 phase, and it is consistent with previous studies on CeCl3 single crystals. The light yield was approximately 18,000 photons/neutron with the two decay components of 10.5 ns (27.7%) and 40.1 ns (72.3%). These results indicate that the scintillator phase with excellent scintillation performance provides high light yield and fast decay time, while the Li-rich phase provides the high Li content thermal neutron scintillator. Although CeCl3/LiCl/CaCl2 has slight hygroscopic properties, it remains to be a promising material in terms of 6 Li concentration, density, light yield and decay time. As a fiber-type structure can be obtained in this eutectic, the transparency in the pulling direction and light yield can be enhanced by optimization of the eutectic fabrication conditions and diameter of fiber phases in future studies. The density of the eutectic crystals was calculated to be as low as 2.98 g/cm 3 . The Li concentrations of the eutectic crystal, Li glass and Ce:LiCAF were 0.031, 0.028 and 0.016 mol/cm 3 , respectively [31]. The highest Li concentration was obtained in CeCl 3 / 6 LiCl/CaCl 2 . Therefore, CeCl 3 / 6 LiCl/CaCl 2 eutectic growth was successfully applied in developing a promising scintillator for thermal neutron detection. Conclusions Ternary eutectic CeCl 3 / 6 LiCl/CaCl 2 with the diameter of 4 mm was prepared by the VB technique in the quartz ampoule. Li-enriched (95%) LiCl powder was used for the eutectic fabrication to evaluate the thermal neutron responses. The results of the BEI and powder XRD analyses confirmed the existence of LiCl, CaCl 2 and CeCl 3 phases. In the CeCl 3 / 6 LiCl/CaCl 2 eutectic, Li has a high content of up to 0.031 mol/cm 3 and small density of 2.98 g/cm 3 . The eutectic has an Ce 3+ 5d-4f emission peaking at 360 nm in the CeCl 3 phase, and it is consistent with previous studies on CeCl 3 single crystals. The light yield was approximately 18,000 photons/neutron with the two decay components of 10.5 ns (27.7%) and 40.1 ns (72.3%). These results indicate that the scintillator phase with excellent scintillation performance provides high light yield and fast decay time, while the Li-rich phase provides the high Li content thermal neutron scintillator. Although CeCl 3 /LiCl/CaCl 2 has slight hygroscopic properties, it remains to be a promising material in terms of 6 Li concentration, density, light yield and decay time. As a fiber-type structure can be obtained in this eutectic, the transparency in the pulling direction and light yield can be enhanced by optimization of the eutectic fabrication conditions and diameter of fiber phases in future studies.
Multi-Objective Optimization of a Two-Stage Helical Gearbox Using the Taguchi Method and Grey Relational Analysis

This paper presents a novel approach to solving the multi-objective optimization problem of designing a two-stage helical gearbox by applying the Taguchi method and grey relational analysis (GRA). The objective of the study is to identify the optimal main design factors that maximize the gearbox efficiency and minimize the gearbox mass. To achieve that, five main design factors were chosen, including the coefficients of wheel face width (CWFW) of the first and the second stages, the allowable contact stresses (ACS) of the first and the second stages, and the gear ratio of the first stage. Additionally, two single objectives, the maximum gearbox efficiency and the minimum gearbox mass, were analyzed. The multi-objective optimization problem is solved through two phases: phase 1 solves the single-objective optimization problems in order to close the gap between variable levels, and phase 2 solves the multi-objective optimization problem to determine the optimal main design factors. From the results of the study, optimum values of the five main design parameters for designing a two-stage helical gearbox were introduced for the first time.

Introduction

Optimizing gearboxes is a critical aspect of mechanical engineering, as it has a direct impact on their efficiency, durability, and reliability, as well as the performance of other machinery and equipment. Multi-objective optimization of gearboxes involves simultaneously optimizing various performance parameters, such as load-carrying capacity, noise, mass, size, and efficiency, making it a complex and challenging task. To address these challenges, various optimization techniques have been developed in recent years. Helical gearboxes, one of many gearbox types, are widely used in industrial applications due to their superior load-carrying capacity, smooth operation [1], simple structure, and low cost. However, designing a helical gearbox involves numerous design parameters, making it difficult to optimize for multiple objectives.

Numerous studies on the optimal design of helical gearboxes have been conducted thus far. This research has looked into both single-objective and multi-objective optimization problems. Many authors have addressed the single-objective optimization problem. I. Römhild and H. Linke [2] presented formulas for calculating gear ratios for two-, three-, and four-stage helical gearboxes in order to obtain the smallest gear mass. Milou et al. [3] presented a practical approach for reducing the mass of a two-stage helical gearbox in their paper. The method entailed analyzing data from gearbox manufacturers. Their findings suggested a center distance ratio (aw2/aw1) between the second and first stages in the range of 1.4 to 1.6 to achieve the minimum mass. Once the optimal center distance ratio has been determined, the corresponding partial gear ratios are obtained using a lookup table. Various objective functions have been utilized to solve the single-objective optimization problem of determining the optimal gear ratio of helical gearboxes. These objective functions include minimizing the gearbox length [4-7], minimizing the gearbox cross-section area [6,8-10], minimizing the gearbox mass [6,11], minimizing the gearbox volume [12], and minimizing the gearbox cost [13-16]. All of these studies share a common feature in that only one design parameter, the partial gear ratio, is determined in the form of an explicit model.
The multi-objective optimization problem for designing helical gearboxes has also been of interest to researchers. M. Patil et al. [1] conducted a study in which a two-stage helical gearbox was subjected to multi-objective optimization with a broad range of constraints using a specially formulated discrete version of the non-dominated sorting genetic algorithm II (NSGA-II). The study formulated two objective functions, namely the minimum gearbox volume and the minimum total gearbox power loss. Moreover, the study considered constraints such as bending stress, pitting stress, and tribological factors. Wu, Y.-R. and V.-T. Tran [17] introduced a new microgeometry modification for helical gear pairs, leading to substantial enhancements in performance with regard to noise and vibration. In their research, C. Gologlu and M. Zeyveli [18] utilized a genetic algorithm (GA) to minimize the volume of a two-stage helical gearbox. To handle the design constraints, such as bending stress, contact stress, number of teeth on pinion and gear, module, and face width of the gear, the objective function was subject to static and dynamic penalty functions. The results from the GA were compared to those of a deterministic design procedure, with the GA being found to be the superior method. D. F. Thompson and colleagues [19] presented a generalized optimization technique for reducing the volume of two-stage and three-stage helical gearboxes while considering tradeoffs with surface fatigue life. In their study, Edmund S. Maputi and Rajesh Arora [20] explored multi-objective optimization by simultaneously considering three objectives: volume, power output, and center distance. They employed the NSGA-II evolutionary algorithm to generate Pareto frontiers in their research. From the results of the study, insights for the design of compact gearboxes can be gained. The NSGA-II method was also applied by C. Sanghvi et al. [21] to solve the multi-objective optimization problem of a two-stage helical gearbox for minimum volume and maximum load. The results of the multi-objective optimization of the tooth surface in helical gears using the response surface method were presented by Park C.I. [22].

Single-objective optimization determines the best or optimal level of the selected criterion; it is an absolute optimization. A multi-objective optimization problem, in contrast, is one with two or more single objectives (or criteria). As a result, the solution to the multi-objective optimization problem cannot best satisfy all criteria simultaneously. For instance, it is not possible to fully meet both the efficiency and the cost requirements of the gearbox at the same time. In simple terms, determining a solution to such a problem that is both "white" and "black" is impossible; only a "gray" solution can be determined. The gray solution is the one that falls between the best and the worst solutions, or between "white" and "black", in the multi-objective optimization problem. As a result, this approach is known as optimization based on grey relational analysis. The original Taguchi method is used to solve the single-objective problem, while the Taguchi method combined with grey relational analysis is required to solve the multi-objective optimization problem.
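To make the "gray solution" idea concrete, the sketch below shows the standard GRA steps (normalization, deviation from the ideal, grey relational coefficient, and grade) for the two responses considered in this paper. This is a generic illustration under common assumptions (distinguishing coefficient ζ = 0.5), not the paper's exact implementation, and the run values are placeholders.

```python
# Generic sketch of grey relational analysis (GRA) for two responses:
# gearbox mass (smaller-the-better) and efficiency (larger-the-better).
# This follows the standard GRA steps, not the paper's exact code.
import numpy as np

def gra_grade(mass: np.ndarray, eff: np.ndarray, zeta: float = 0.5):
    # Normalize each response to [0, 1] with its preferred direction.
    n_mass = (mass.max() - mass) / (mass.max() - mass.min())  # smaller better
    n_eff = (eff - eff.min()) / (eff.max() - eff.min())       # larger better
    x = np.column_stack([n_mass, n_eff])
    delta = 1.0 - x                                   # deviation from ideal
    xi = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return xi.mean(axis=1)                            # grey relational grade

# Illustrative results for nine Taguchi runs (placeholder values):
mass = np.array([52, 50, 49, 54, 51, 48, 55, 53, 50], dtype=float)
eff = np.array([94.1, 94.5, 94.8, 93.9, 94.4, 95.0, 93.7, 94.0, 94.6])
print(gra_grade(mass, eff).round(3))  # pick the run with the highest grade
```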
While numerous studies have focused on multi-objective optimization for helical gearboxes, the identification of the optimal main design factors for such gearboxes has not received adequate attention. Furthermore, previous research on multi-objective optimization for helical gearboxes has not demonstrated the relationship between the optimal input factors and the total gearbox ratio. This is a critical issue to consider when designing a new gearbox. In this paper, we present a multi-objective optimization study for a two-stage helical gearbox, considering two single objectives: minimizing the gearbox mass and maximizing the gearbox efficiency. The proposal of five optimal main design factors for the two-stage helical gearbox is the most significant result of this research. These variables include the CWFW for both stages, the ACS for both stages, and the first stage's gear ratio. Furthermore, by combining the Taguchi method and the GRA in a two-phase process not previously described, we present a novel approach to addressing the multi-objective optimization problem in gearbox design. Additionally, a link between the optimal input factors and the total gearbox ratio is proposed.

Gearbox Mass Calculation

The mass of the gearbox m_gb can be found by the following equation:

m_gb = m_g + m_gh + m_b + m_s (1)

where m_g, m_gh, m_b, and m_s designate the masses of the gears, the gearbox housing, the bearings, and the shafts, respectively. The component masses are specifically calculated below.

Gearbox Housing Mass Calculation

The gearbox housing mass (m_gh) can be found by:

m_gh = ρ_gh·V_gh (2)

in which ρ_gh is the weight density of the gearbox housing material; as the material of the gearbox housing is gray cast iron (the most common material for gearbox housings), ρ_gh = 7300 kg/m3. V_gh is the gearbox housing volume (m3), which is determined by the following equation (see Figure 1):

V_gh = V_A + V_B + V_C (3)

where V_A, V_B, and V_C are the volumes of sides A, B, and C (m3), respectively, given by Equations (4) to (6). Substituting Equations (4) to (6) into (3) yields V_gh. In these equations, L, H, B1, and SG are the housing dimensions, which can be calculated following [2].
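A short numeric sketch of how Equations (1)-(3) compose is given below. The volume arguments stand in for Equations (4)-(6), whose detailed geometric forms are given in the referenced design literature; all numeric inputs are illustrative only.

```python
# Minimal sketch of Equations (1)-(3): total gearbox mass as the sum of
# component masses, with the housing mass as density times volume.
RHO_GH = 7300.0  # kg/m^3, gray cast iron housing

def housing_mass(v_a: float, v_b: float, v_c: float) -> float:
    """Equations (2)-(3): m_gh = rho_gh * (V_A + V_B + V_C), volumes in m^3."""
    return RHO_GH * (v_a + v_b + v_c)

def gearbox_mass(m_gears: float, m_housing: float,
                 m_bearings: float, m_shafts: float) -> float:
    """Equation (1): m_gb = m_g + m_gh + m_b + m_s, all in kg."""
    return m_gears + m_housing + m_bearings + m_shafts

# Example with placeholder volumes and component masses:
m_gh = housing_mass(2.1e-3, 1.4e-3, 0.9e-3)   # ~32 kg
print(gearbox_mass(14.0, m_gh, 2.5, 6.0))
```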
Gear Mass Calculation

The gear mass can be determined by:

m_g = m_g1 + m_g2

in which m_g1 and m_g2 are the gear masses of the first and the second stages (kg), calculated by Equations (13) and (14), where ρ_g is the weight density of the gear material (kg/m³); as the gear material is steel, ρ_g = 7800 kg/m³. e_1 and e_2 are the volume coefficients. Because the pinion has a small diameter, its structural form can be plain, whereas the gear has a large diameter and thus requires a hub. As a result, the volume coefficient of the pinion is e_1 = 1 and the volume coefficient of the gear is e_2 = 0.6 [13]. b_w1 and b_w2 are the gear widths of the first and the second stages (mm); d_w1i and d_w2i are the pinion and gear pitch diameters of the i-th stage (i = 1 and 2). These parameters are determined by:

b_wi = X_bai · a_wi (15), (16)
d_w1i = 2 · a_wi / (u_i + 1) (17)
d_w2i = 2 · a_wi · u_i / (u_i + 1) (18)

In the above equations, u_1 is the gear ratio of the first stage, and the center distance of the i-th stage a_wi is found from the surface fatigue strength (Equation (19)) [23], where k_a is the material coefficient; k_Hβ is the contacting load ratio for pitting resistance; AS_i is the allowable contact stress of the i-th stage (MPa); X_bai is the wheel face width coefficient of the i-th stage; and T_1i is the drive shaft torque of the i-th stage (N·mm).

After calculating the gear parameters, the bending strength of the i-th gear stage must be checked using the formulas in [23], in which m_i is the module of the i-th gear stage (mm); K_Fi is the load factor; Y_εi = 1/ε is the contact factor; ε is the contact ratio; Y_βi = 1 − β/140 is the factor taking into account the helix angle; and Y_F1i and Y_F2i are the geometry factors of the pinion and gear of the i-th gear stage.

Calculation of Shaft Mass

The shaft mass can be found by:

m_s = Σ m_si

where m_si is the mass of the i-th shaft of the gearbox (kg), determined from the shaft geometry, in which d_j and l_j are the diameter and the length of the j-th shaft part (mm), and d_bk and B_k are the diameter and the width of the k-th shaft part on which a bearing is installed. The values of d_j and d_bk are determined by [23], wherein [σ_s] is the allowable shaft stress (MPa), which can be determined from the material and size of the shaft [23], and M_e is the equivalent moment (N·mm) found by [23], where M_x and M_y are the bending moments in the x and y directions (N·mm) and T is the torque (N·mm). These parameters can be defined based on the diagram for finding shaft dimensions; Figure 2 describes this diagram for the calculation of the first shaft of the gearbox.

In Equation (22), B_k is the bearing width (mm). In this work, radial ball bearings with angular contact were used. From the data in [24], a regression was proposed to calculate the width of the bearings (with R² = 0.9951).

Calculation of Bearing Mass

The bearing mass of the gearbox is calculated by:

m_b = Σ m_bi

As mentioned above, radial ball bearings with angular contact were used in this work. From the data about this type of bearing in [24], the mass of the i-th bearing can be found by a regression equation (with R² = 0.9833), in which i is the ordinal number of the bearing (i = 1 ÷ 6) and d_bi is the diameter of the shaft part on which the i-th bearing is mounted.
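Two of the sizing formulas cited from [23] are not reproduced in the text: the center distance found from the surface fatigue strength and the shaft diameter found from the equivalent moment. The sketch below fills them in with the standard textbook forms these symbols suggest; treat the exact expressions as assumptions rather than quotations from [23].

```python
import math

def center_distance(k_a: float, k_h_beta: float, torque_1i: float,
                    as_i: float, u_i: float, x_ba_i: float) -> float:
    """Center distance of stage i; the standard textbook form (assumed) is
    a_wi = k_a * (u_i + 1) * (T_1i * k_Hbeta / (AS_i^2 * u_i * X_bai))^(1/3),
    with T_1i in N.mm and AS_i in MPa, giving a_wi in mm."""
    return k_a * (u_i + 1.0) * (torque_1i * k_h_beta
                                / (as_i**2 * u_i * x_ba_i)) ** (1.0 / 3.0)

def equivalent_moment(m_x: float, m_y: float, torque: float) -> float:
    """Equivalent moment M_e = sqrt(M_x^2 + M_y^2 + 0.75*T^2) in N.mm
    (standard form, assumed; the paper cites [23] without printing it)."""
    return math.sqrt(m_x**2 + m_y**2 + 0.75 * torque**2)

def shaft_diameter(m_e: float, sigma_allow: float) -> float:
    """Minimum shaft diameter d = (M_e / (0.1*[sigma_s]))^(1/3) in mm, with
    M_e in N.mm and the allowable shaft stress [sigma_s] in MPa (assumed)."""
    return (m_e / (0.1 * sigma_allow)) ** (1.0 / 3.0)

# Example: M_x = 5.0e4 N.mm, M_y = 3.0e4 N.mm, T = 8.0e4 N.mm, [sigma_s] = 63 MPa
d = shaft_diameter(equivalent_moment(5.0e4, 3.0e4, 8.0e4), 63.0)
```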
Determination of Gearbox Efficiency

The gearbox efficiency is determined from the total power loss P_l in the gearbox [25]:

P_l = P_lg + P_lb + P_ls

where P_lg is the power loss in the gears, P_lb is the power loss in the bearings, and P_ls is the power loss in the seals. These factors can be determined as follows.

(+) The power losses in the gears:

P_lg = Σ P_lgi

in which P_lgi is the power loss in the gears of the i-th stage, where η_gi is the efficiency of the i-th stage of the gearbox, which can be determined by [26]; u_i is the gear ratio of the i-th stage; f is the friction coefficient; and β_ai and β_ri are the arcs of approach and recess of the i-th stage, which are calculated by [26], in which R_e1i and R_e2i are the outside radii of the pinion and gear, respectively; R_1i and R_2i are the pitch radii of the pinion and gear, respectively; R_01i and R_02i are the base-circle radii of the pinion and gear, respectively; and α is the pressure angle.

From the data in [26], the friction coefficient can be determined by the following regression equations:
- When the sliding velocity is v ≤ 0.424 m/s, the friction coefficient is calculated by (with R² = 0.9958): f = −0.0877·v + 0.0525 (37)
- When the sliding velocity is v > 0.424 m/s, the friction coefficient is calculated by a second regression (with R² = 0.9796).

(+) The power losses in the bearings [25]: the power loss in the rolling bearings can be found from the bearing friction, where f_b is the coefficient of friction of the bearing; as radial ball bearings with angular contact were used, f_b = 0.0011 [25]; F denotes the bearing load in Newtons (N), while v represents the peripheral speed. Additionally, i represents the ordinal number of the bearing, ranging from 1 to 6.

(+) The total power losses in the seals are determined by [25], in which i is the ordinal number of the seal (i = 1 ÷ 2) and P_si represents the power loss caused by the sealing for a single seal (W), where VG 40 is the ISO viscosity grade number.
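The loss model lends itself to a compact sketch. Only the v ≤ 0.424 m/s friction regression (Equation (37)) is printed in the text, and the bearing-loss equation is cited from [25] without being reproduced, so the P = f_b·F·v form used below is the common textbook expression and should be read as an assumption.

```python
def friction_coefficient(v: float) -> float:
    """Sliding-velocity-dependent friction coefficient from the regressions
    on the data in [26]. Only the v <= 0.424 m/s branch (Equation (37)) is
    printed in the text; the second branch's coefficients are not given."""
    if v <= 0.424:
        return -0.0877 * v + 0.0525
    raise NotImplementedError("v > 0.424 m/s branch: coefficients not given")

def bearing_power_loss(load: float, v: float, f_b: float = 0.0011) -> float:
    """Per-bearing loss; a common form is P = f_b * F * v (assumed here),
    with f_b = 0.0011 for radial ball bearings with angular contact [25],
    F the bearing load in N and v the peripheral speed in m/s."""
    return f_b * load * v

def total_power_loss(p_lg: float, p_lb: float, p_ls: float) -> float:
    """P_l = P_lg + P_lb + P_ls: gears + bearings + seals."""
    return p_lg + p_lb + p_ls
```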
Objective Functions

The multi-objective optimization problem in this study comprises two single objectives:

- Minimizing the gearbox mass: min f1(X) = m_gb(X)
- Maximizing the gearbox efficiency: max f2(X) = η_gb(X)

in which X is the design variable vector. In this work, five main design factors, namely u_1, X_ba1, X_ba2, AS_1, and AS_2, were selected as variables, so that X = {u_1, X_ba1, X_ba2, AS_1, AS_2}. For a helical gear set, the maximal gear ratio is 9 [23]. Additionally, the coefficient of the wheel face width of both gear stages of a two-stage helical gearbox ranges from 0.25 to 0.4 [23]. In addition, the gear materials used in this work are steel 40, 45, 40X, and 35XM refining, with a tooth surface hardness of 350 HB (these are the most commonly used gear materials in gearboxes). From the calculated results, the allowable contact stresses of the first and second stages range from 350 to 420 MPa. Therefore, the following constraints were derived from these observations:

1 ≤ u_1 ≤ 9; 0.25 ≤ X_ba1 ≤ 0.4; 0.25 ≤ X_ba2 ≤ 0.4; 350 ≤ AS_1 ≤ 420; 350 ≤ AS_2 ≤ 420.

Methodology

As stated in Section 2.3.1, five main design factors were selected as variables for the multi-objective optimization problem. Table 1 describes these factors and their minimum and maximum values. In this work, the Taguchi method and grey relation analysis were employed to address the multi-objective optimization problem with five variables. To determine the solutions of the optimization problem easily, the larger the number of levels of the variables, the better. To maximize the number of levels for each variable, the L25 (5^5) design was selected. However, among the variables mentioned, u_1 has a very wide range (u_1 ranges from 1 to 9, as stated in Section 2.3.2). As a result, the gap between the levels of this variable remained significant even with five levels (in this case, the gap is (9 − 1)/4 = 2). To reduce this gap, save time, and improve the accuracy of the solutions, a procedure for solving the multi-objective problem was proposed (Figure 3). This procedure consists of two phases: Phase 1 solves the single-objective optimization problem to close the gap between levels, and Phase 2 solves the multi-objective optimization problem to determine the optimal main design factors.

Single-Objective Optimization

In this work, the direct search method is used to solve the single-objective optimization problem. Additionally, a computer program was built in the Matlab language to solve the two single-objective problems, i.e. minimizing the gearbox mass and maximizing the gearbox efficiency. From the results of this program, the relation between the optimal value of the gear ratio of the first stage u_1 and the total gearbox ratio u_t is shown in Figure 4. Additionally, new constraints for the variable u_1 were found, as shown in Table 2.
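The box constraints derived above translate directly into a feasibility check. The sketch below encodes the stated ranges; the dictionary layout and function name are our own.

```python
# Box constraints on the design vector X = {u1, Xba1, Xba2, AS1, AS2},
# taken directly from the ranges stated in the text.
BOUNDS = {
    "u1":   (1.0, 9.0),      # gear ratio of the first stage (max 9 for helical sets)
    "Xba1": (0.25, 0.4),     # CWFW, first stage
    "Xba2": (0.25, 0.4),     # CWFW, second stage
    "AS1":  (350.0, 420.0),  # allowable contact stress, first stage (MPa)
    "AS2":  (350.0, 420.0),  # allowable contact stress, second stage (MPa)
}

def is_feasible(x: dict) -> bool:
    """Check a candidate design vector against the box constraints."""
    return all(lo <= x[name] <= hi for name, (lo, hi) in BOUNDS.items())

print(is_feasible({"u1": 4.5, "Xba1": 0.3, "Xba2": 0.35,
                   "AS1": 400.0, "AS2": 420.0}))  # True
```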
Multi-Objective Optimization

The multi-objective optimization problem in this research aims to identify the optimal main design factors that satisfy two single-objective functions, minimizing the gearbox mass and maximizing the gearbox efficiency, in the design of a two-stage helical gearbox with a specific total gearbox ratio. To address this problem, a simulation experiment was conducted. The experiment was designed using the Taguchi method, and the analysis of the results was performed using Minitab R18 software. In addition, as noted above, the L25 (5^5) design was chosen to obtain the maximal number of levels for the variables. A computer program was developed to perform these experiments. To minimize programming intricacy, the investigation examined the influence of the five key design parameters on gearbox mass. The input speed of the first pinion was set to 1480 rpm, as it is the most common value. Steel 45 was selected as the shaft material, as it is a very common shaft material. The total gearbox ratios considered for analysis were 10, 15, 20, 25, 30, and 35. Employing the five-level Taguchi design (L25), a total of 25 simulation experiments were carried out for each total gearbox ratio mentioned above. Table 3 describes the main design factors and their levels, and Table 4 presents the experimental plan and the corresponding output results, encompassing the gearbox mass and efficiency, specifically for the total gearbox ratio of 15.

The multi-objective optimization problem is solved by applying the Taguchi and GRA methods. The main steps of this process are as follows:

(+) Determining the signal-to-noise ratio (S/N) by the following equations, as the objective of this work is to reduce the gearbox mass and to increase the gearbox efficiency:

- For the gearbox mass objective, the smaller-is-the-better S/N: S/N = −10·log10[(1/m)·Σ y_i²]
- For the gearbox efficiency objective, the larger-is-the-better S/N: S/N = −10·log10[(1/m)·Σ (1/y_i²)]

where y_i is the output response value and m is the number of experimental repetitions. In this case, m = 1 because the experiment is a simulation; no repetition is required.
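The two S/N expressions above are the standard Taguchi forms, and with m = 1 they reduce to −10·log10(y²) and −10·log10(1/y²), respectively. A minimal sketch:

```python
import math

def sn_smaller_is_better(ys):
    """S/N = -10*log10((1/m) * sum(y_i^2)): the standard Taguchi form
    for a response to be minimized (gearbox mass)."""
    m = len(ys)
    return -10.0 * math.log10(sum(y * y for y in ys) / m)

def sn_larger_is_better(ys):
    """S/N = -10*log10((1/m) * sum(1/y_i^2)): the standard Taguchi form
    for a response to be maximized (gearbox efficiency)."""
    m = len(ys)
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in ys) / m)

# With m = 1 (single simulation run per trial, as in this study):
print(sn_smaller_is_better([42.7]))   # mass response, kg
print(sn_larger_is_better([0.957]))   # efficiency response
```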
The calculated S/N indexes for the two mentioned output targets are presented in Table 5. In fact, the data of the two considered single-objective functions have different dimensions. To ensure comparability, it is essential to normalize the data, bringing them to a standardized scale. The data normalization is performed using the normalization value Z_ij, which ranges from 0 to 1 and is obtained by min-max scaling of each S/N column over the n experiments; here, n = 25.

(+) Calculating the grey relational coefficient: the grey relational coefficient is calculated by

γ_j(k) = (∆_min + ζ·∆_max) / (∆_j(k) + ζ·∆_max), with j = 1, 2, ..., n,

in which k represents the number of objective targets, which is 2 in this case; ∆_j(k) is the absolute difference, ∆_j(k) = |Z_0(k) − Z_j(k)|; Z_0(k) and Z_j(k) are the reference and the specific comparison sequences, respectively; ∆_min and ∆_max are the minimum and maximum values of ∆_j(k); and ζ is the characteristic coefficient, 0 ≤ ζ ≤ 1. In this work, ζ = 0.5.

(+) Calculating the mean of the grey relational coefficients: the degree of grey relation is determined by calculating the mean of the grey relational coefficients associated with the output objectives:

ȳ_i = (1/k) · Σ y_ij

where y_ij is the grey relation value of the j-th output target in the i-th experiment. To ensure harmony among the output parameters, a higher average grey relation value is desirable. As a result, the objective function of the multi-objective problem can be transformed into a single-objective optimization problem, with the mean grey relation value serving as the output.

The impact of the main design factors on the average grey relation value (ȳ) was analyzed using the ANOVA method, and the corresponding results are presented in Table 7. From the results in Table 7, AS_2 has the most influence on ȳ (57.35%), followed by u_1 (25.16%), X_ba1 (5.38%), X_ba2 (2.50%), and AS_1 (0.84%). The order of influence of the main design factors on ȳ according to the ANOVA analysis is described in Table 8. Theoretically, the set of main design parameters with the levels that have the highest S/N values would be the rational (or optimal) parameter set. Therefore, the impact of the main design factors on the S/N ratio was determined (Figure 5). From Figure 5, the optimal levels and values of the main design factors for the multi-objective function were found (Table 9). The adequacy of the proposed model is assessed using the Anderson-Darling method, and the results are presented in Figure 6. From the graph, it is evident that the data points corresponding to the experimental observations (represented by blue dots) fall within the region bounded by the upper and lower limits at the 95% confidence level. Furthermore, the p-value of 0.226 significantly exceeds the significance level α = 0.05. These findings indicate that the empirical model employed in this study is appropriate and suitable for the analysis. Continuing from the previous discussion, the optimal values of the main design parameters corresponding to the remaining u_t values of 10, 20, 25, 30, and 35 are presented in Table 10. Similarly, the optimal values of AS_1 and AS_2 are also their maximum values. This is because minimizing the gearbox mass requires maximizing the values of AS_1 and AS_2. Increasing these values minimizes the center distance of gear stage i (as represented by Equation (19)), which in turn reduces the gear widths (as determined by Equations (15) and (16)) and the pinion and gear pitch diameters of the i-th stage (i = 1 and 2) (as calculated by Equations (17) and (18)), and therefore the gear mass (as represented by Equations (13) and (14)) can be minimized.
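The normalization, grey relational coefficient, and grey-grade steps can be sketched as below. The min-max normalization and the per-column ∆_min/∆_max are simplifying assumptions on our part; the text defines ∆_min and ∆_max over the ∆_j(k) values without fixing the exact pooling.

```python
def normalize(column):
    """Min-max normalization of one S/N column to [0, 1] (Z_ij in the text).
    S/N ratios are larger-is-better by construction, so the larger-is-better
    form of the normalization is assumed here."""
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) for v in column]

def grey_coefficients(z_column, zeta=0.5):
    """Per-experiment grey relational coefficient for one objective:
    gamma = (d_min + zeta*d_max) / (delta_j + zeta*d_max), with
    delta_j = |Z_0 - Z_j| and the reference Z_0 = 1 (ideal normalized value).
    Note: d_min/d_max are taken per column here for brevity."""
    deltas = [abs(1.0 - z) for z in z_column]
    d_min, d_max = min(deltas), max(deltas)
    return [(d_min + zeta * d_max) / (d + zeta * d_max) for d in deltas]

def grey_grade(per_objective_coeffs):
    """Average grey relation value per experiment (y_bar_i): the mean of the
    coefficients over the k objectives (k = 2 here)."""
    return [sum(row) / len(row) for row in zip(*per_objective_coeffs)]
```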
Figure 7 depicts an obvious first-order relationship between the optimal values of u_1 and u_t. Additionally, a regression equation (Equation (52), with R² = 0.9992) was found for the optimal values of u_1. After finding u_1, the optimum value of u_2 can be determined by u_2 = u_t/u_1.

To assess the effectiveness of the proposed method, the multi-objective optimization problem was also solved using the constraints on u_1 shown in Table 1 (referred to as the solution by the traditional method). The optimal values of the main design parameters discovered by this method are shown in Table 11. Figure 8 depicts the optimal values of u_1 as determined by the traditional method (data from Table 10) and the new method (data from Table 9). This figure clearly shows that the optimal values of u_1 for the new method are easily determined and obey a very simple first-order function (Equation (52)). Furthermore, when determined by the traditional method, these values are distributed randomly rather than according to a common rule (Figure 8), and they will almost certainly be less accurate than when determined by the new method.
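For reproducing the first-order fit of Equation (52), whose coefficients are not printed here, a least-squares sketch follows; the (u_t, u_1) pairs are placeholders standing in for the Table 9 data.

```python
import numpy as np

# Hypothetical (u_t, optimal u_1) pairs standing in for the Table 9 data,
# which are not reproduced in the text; only the fitting procedure is real.
u_t = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 35.0])
u1_opt = np.array([3.1, 3.9, 4.6, 5.2, 5.9, 6.5])  # placeholder values

a, b = np.polyfit(u_t, u1_opt, 1)   # first-order fit, cf. Equation (52)
print(f"u1_opt = {a:.4f}*u_t + {b:.4f}")

# Once u_1 is known, the second-stage ratio follows from u_2 = u_t / u_1:
u_2 = u_t / (a * u_t + b)
```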
Conclusions

The Taguchi method and the GRA are used in this paper to solve the multi-objective optimization problem in designing a two-stage helical gearbox. The goal of the study is to discover the optimal main design parameters that maximize gearbox efficiency while minimizing gearbox mass. To accomplish this, five major design factors were chosen: the CWFW for the first and second stages, the ACS for the first and second stages, and the first-stage gear ratio. In addition, the multi-objective optimization problem is solved in two phases. Phase 1 is concerned with solving the single-objective optimization problem to close the gap between variable levels, while Phase 2 is concerned with determining the optimal main design factors. The following conclusions are proposed as a result of this work:

- A novel approach to handling the multi-objective optimization problem in gearbox design was presented by combining the Taguchi method and the GRA in a two-stage process. The distance between the lower and upper bounds of the constraints on u_1 is shortened as a result of this approach, which leads to an easier and more accurate determination of optimal values.
- The solution of the single-objective optimization problem bridges the gap between variable levels, making the solution of the multi-objective optimization problem easier and more accurate.
- From the results of the study, optimal values for the five main design factors in the design of a two-stage helical gearbox were proposed
(Equation (52) and Table 10).
- The effect of the main design parameters on ȳ was analyzed using the ANOVA method. The results revealed that AS_2 had the highest influence on ȳ (57.35%), followed by u_1 (25.16%), X_ba1 (5.38%), X_ba2 (2.50%), and AS_1 (0.84%).
- The proposed model of u_1 demonstrates a high level of consistency with the experimental data, validating its reliability. This model can be effectively utilized for the multi-objective optimization of a two-stage helical gearbox, providing a valuable approach for gearbox design.

Figure 2. Diagram for determining shaft dimensions.
Figure 3. The procedure for solving a multi-objective problem.
Figure 4. Gear ratio of the first stage versus total gearbox ratio.
Figure 7. Optimal gear ratio of the first stage versus total gearbox ratio.
Figure 8. Optimal gear ratio of the first stage for the traditional and new methods.
Table 2. New constraints of u_1.
Table 3. Main design factors and their levels for u_t = 15.
Table 4. Experimental plans and output responses for u_t = 15.
Table 5. Values of S/N for each experimental run at u_t = 15.
Table 6 displays the calculated results of the grey relation value (y_i) and the average grey relation value of all experiments.
Table 6. Values of ∆_i(k) and y_i.
Table 7. Factor effect on ȳ.
Table 8. Order of main design factor effect on ȳ.
Table 9. Optimal levels and values of main design factors.
Table 10. Optimum main design factors found by the traditional method.
Table 11. Optimum main design factors found by the new method.
Pest categorisation of Colletotrichum gossypii

Abstract

The Panel on Plant Health performed a pest categorisation of Colletotrichum gossypii, the fungal agent of the anthracnose and ramulosis diseases of cotton, for the EU. The identity of the pest is well established and reliable methods exist for its detection/identification. The pest is present in most of the cotton-growing areas worldwide, including Bulgaria and Romania in the EU. Colletotrichum gossypii is listed as Glomerella gossypii in Annex IIB of Directive 2000/29/EC and is not known to occur in Greece, which is a protected zone (PZ). The only hosts are Gossypium species, with G. hirsutum and G. barbadense being the most susceptible. The pest could potentially enter the PZ on cotton seeds originating in infested third countries or EU infested areas. Entry into the PZ by natural means from EU infested areas is possible, although there is uncertainty on the maximum distance the pest can travel by wind or insects. Bolls and unginned cotton are minor pathways of entry. Pest distribution and climate matching suggest that the pest could establish and spread in the cotton-producing areas of northern Greece. In the infested areas, the pest causes damping-off, leaf/boll spotting, boll rot, witches' broom symptoms and stunting, resulting in yield and quality losses. It also affects the lint and seeds, reducing fibre quality and seed germinability. It is expected that its introduction and spread in the EU PZ would impact cotton yield and quality. The agricultural practices and control methods currently applied in Greece would not prevent pest establishment and spread. Colletotrichum gossypii meets all the criteria assessed by EFSA for consideration as a potential quarantine pest for the EU PZ of Greece. The criteria for considering C. gossypii as a potential Union regulated non-quarantine pest are also met, since cotton seeds are the main means of spread.

Background

Council Directive 2000/29/EC on protective measures against the introduction into the Community of organisms harmful to plants or plant products and against their spread within the Community establishes the present European Union plant health regime. The Directive lays down the phytosanitary provisions and the control checks to be carried out at the place of origin on plants and plant products destined for the Union or to be moved within the Union. The annexes of Directive 2000/29/EC list the harmful organisms (pests) whose introduction into or spread within the Union is prohibited, together with specific requirements for import or internal movement. Following the evaluation of the plant health regime, the new basic plant health law, Regulation (EU) 2016/2031 on protective measures against pests of plants, was adopted on 26 October 2016 and will apply from 14 December 2019 onwards, repealing Directive 2000/29/EC. In line with the principles of the above-mentioned legislation and the follow-up work of the secondary legislation for the listing of EU regulated pests, EFSA is requested to provide pest categorisations of the harmful organisms included in the annexes of Directive 2000/29/EC in the cases where a recent pest risk assessment/pest categorisation is not available.

Terms of Reference

EFSA is requested, pursuant to Article 22(5.b) and Article 29(1) of Regulation (EC) No 178/2002, to provide scientific opinions in the field of plant health.
EFSA is requested to prepare and deliver a pest categorisation (step 1 analysis) for each of the regulated pests included in the appendices of the annex to this mandate. The methodology and template of pest categorisation have already been developed in past mandates for the organisms listed in Annex II Part A Section II of Directive 2000/29/EC. The same methodology and outcome are expected for this work as well. The list of the harmful organisms included in the annex to this mandate comprises 133 harmful organisms or groups. A pest categorisation is expected for these 133 pests or groups, and the delivery of the work would be stepwise, at regular intervals through the year, as detailed below. The first priority covers the harmful organisms included in Appendix 1, comprising pests from Annex II Part A Section I and Annex II Part B of Directive 2000/29/EC. The delivery of all pest categorisations for the pests included in Appendix 1 is June 2018. The second priority is the pests included in Appendix 2, comprising the group of Cicadellidae (non-EU) known to be vectors of Pierce's disease (caused by Xylella fastidiosa), the group of Tephritidae (non-EU), the group of potato viruses and virus-like organisms, the group of viruses and virus-like organisms of Cydonia Mill., Fragaria L., Malus Mill., Prunus L., Pyrus L., Ribes L., Rubus L. and Vitis L., and the group of Margarodes (non-EU species). The delivery of all pest categorisations for the pests included in Appendix 2 is end 2019. The pests included in Appendix 3 cover pests of Annex I Part A Section I, and all pest categorisations should be delivered by end 2020. For the above-mentioned groups, each covering a large number of pests, the pest categorisation will be performed for the group and not for the individual harmful organisms listed under the "such as" notation in the annexes of Directive 2000/29/EC. The criteria to be taken particularly into consideration in these cases are the analysis of host-pest combinations, the investigation of pathways, the damage occurring and the relevant impact. Finally, as indicated in the text above, all references to 'non-European' should be avoided and replaced by 'non-EU', and refer to all territories with the exception of the Union territories as defined in Article 1 point 3 of Regulation (EU) 2016/2031.

Interpretation of the Terms of Reference

Glomerella gossypii is one of a number of pests listed in the appendices to the Terms of Reference (ToR) to be subject to pest categorisation to determine whether it fulfils the criteria of a quarantine pest or those of a regulated non-quarantine pest for the area of the European Union (EU) excluding Ceuta, Melilla and the outermost regions of Member States referred to in Article 355(1) of the Treaty on the Functioning of the European Union (TFEU), other than Madeira and the Azores. Glomerella gossypii has been renamed as Colletotrichum gossypii. Therefore, for the purposes of this pest categorisation, the current scientific name will be used. The pest is regulated in the protected zone of Greece only. Therefore, the scope of this pest categorisation is the EU protected zone (Greece), instead of the whole EU territory.

2. Data and methodologies

2.1. Data

2.1.1. Literature search

A literature search on G. gossypii was conducted at the beginning of the categorisation in the ISI Web of Science bibliographic database. The search focused on Glomerella gossypii (including its synonyms) and its geographic distribution, life cycle, host plants and the damage it causes.
The following search terms (TS) and combinations were used: TS = (("Glomerella gossypii" OR "Colletotrichum gossypii" OR Anthracnose OR Ramulosis) AND (geograph* OR distribution OR "life cycle" OR lifecycle OR host OR hosts OR plant* OR damag*) AND cotton). Relevant papers were reviewed, and further references and information were obtained from experts, as well as from citations within the references and grey literature.

Database search

Pest information on host(s) and distribution was retrieved from the European and Mediterranean Plant Protection Organization (EPPO) Global Database (EPPO, online) and relevant publications. Data about the import of commodity types that could potentially provide a pathway for the pest to enter the EU, and about the area of hosts grown in the EU, were obtained from EUROSTAT (Statistical Office of the European Communities, online). The Europhyt database (online) was consulted for pest-specific notifications on interceptions and outbreaks. Europhyt is a web-based network run by the Directorate General for Health and Food Safety (DG SANTE) of the European Commission, and is a subproject of PHYSAN (Phyto-Sanitary Controls) specifically concerned with plant health information. The Europhyt database manages notifications of interceptions of plants or plant products that do not comply with EU legislation, as well as notifications of plant pests detected in the territory of the Member States (MS) and the phytosanitary measures taken to eradicate or avoid their spread.

Methodologies

The Panel performed the pest categorisation for C. gossypii following the guiding principles and steps presented in the EFSA guidance on the harmonised framework for pest risk assessment (EFSA PLH Panel, 2010) and as defined in the International Standard for Phytosanitary Measures No 11 (FAO, 2013) and No 21 (FAO, 2004). In accordance with the guidance on a harmonised framework for pest risk assessment in the EU (EFSA PLH Panel, 2010), this work was initiated following an evaluation of the EU plant health regime. Therefore, to facilitate the decision-making process, in the conclusions of the pest categorisation, the Panel addresses explicitly each criterion for a Union quarantine pest and for a Union regulated non-quarantine pest in accordance with Regulation (EU) 2016/2031 on protective measures against pests of plants, and includes additional information required in accordance with the specific terms of reference received from the European Commission. In addition, for each conclusion, the Panel provides a short description of its associated uncertainty. Table 1 presents the Regulation (EU) 2016/2031 pest categorisation criteria on which the Panel bases its conclusions. All relevant criteria have to be met for the pest to potentially qualify either as a quarantine pest or as a regulated non-quarantine pest. If one of the criteria is not met, the pest will not qualify. A pest that does not qualify as a quarantine pest may still qualify as a regulated non-quarantine pest, which needs to be addressed in the opinion. For pests regulated in the protected zones only, the scope of the categorisation is the territory of the protected zone; thus, the criteria refer to the protected zone instead of the EU territory.
It should be noted that the Panel's conclusions are formulated respecting its remit and particularly with regard to the principle of separation between risk assessment and risk management (EFSA founding regulation (EU) No 178/2002); therefore, instead of determining whether the pest is likely to have an unacceptable impact, the Panel will present a summary of the observed pest impacts. Economic impacts are expressed in terms of yield and quality losses and not in monetary terms, whereas addressing social impacts is outside the remit of the Panel, in agreement with the EFSA guidance on a harmonised framework for pest risk assessment (EFSA PLH Panel, 2010). The Panel will not indicate in its conclusions of the pest categorisation whether to continue the risk assessment process but, following the agreed two-step approach, will continue only if requested by the risk managers. However, during the categorisation process, experts may identify key elements and knowledge gaps that could contribute significant uncertainty to a future assessment of risk. It would be useful to identify and highlight such gaps so that potential future requests can specifically target the major elements of uncertainty, perhaps suggesting specific scenarios to examine.

- If the pest is present in the EU but not widely distributed in the risk assessment area, it should be under official control or expected to be under official control in the near future.
- The protected zone system aligns with the pest-free area system under the International Plant Protection Convention (IPPC).
- The pest satisfies the IPPC definition of a quarantine pest that is not present in the risk assessment area (i.e. the protected zone).
- Is the pest regulated as a quarantine pest? If currently regulated as a quarantine pest, are there grounds to consider its status could be revoked?

Pest potential for entry, establishment and spread in the EU territory (Section 3.4)
- Is the pest able to enter into, become established in, and spread within, the EU territory? If yes, briefly list the pathways.
- Is the pest able to enter into, become established in, and spread within, the protected zone areas?
- Is entry by natural spread from EU areas where the pest is present possible?
- Are there measures available to prevent the entry into, establishment within or spread of the pest within the EU such that the risk becomes mitigated?
- Are there measures available to prevent the entry into, establishment within or spread of the pest within the protected zone areas such that the risk becomes mitigated?
- Is it possible to eradicate the pest in a restricted area within 24 months (or a period longer than 24 months where the biology of the organism so justifies) after the presence of the pest was confirmed in the protected zone?
- Are there measures available to prevent pest presence on plants for planting such that the risk becomes mitigated?
Conclusion of pest categorisation (Section 4)
- A statement as to whether (1) all criteria assessed by EFSA above for consideration as a potential quarantine pest were met and (2) if not, which one(s) were not met.
- A statement as to whether (1) all criteria assessed by EFSA above for consideration as a potential protected zone quarantine pest were met and (2) if not, which one(s) were not met.
- A statement as to whether (1) all criteria assessed by EFSA above for consideration as a potential regulated non-quarantine pest were met and (2) if not, which one(s) were not met.

Other common names: pink boll rot of cotton; seedling blight of cotton.

Colletotrichum gossypii was originally described from the USA and was reported to cause disease symptoms on all parts of cotton plants, but especially on seedlings and bolls (Southworth, 1891; Edgerton, 1909). Isolates identified as C. gossypii by Shear and Wood (1907) were reported to be associated in culture with a teleomorphic state belonging to the genus Glomerella. Later, Edgerton (1909) described G. gossypii from diseased, mature cotton plants in the USA.

Biology of the pest

Colletotrichum gossypii is carried both on and inside cotton seeds (Arndt, 1953) due to its ability to infect the fruits (bolls) (Hillocks, 1992). The survival potential of the pathogen in cotton seed, as indicated by the percentage of emerged infected seedlings, has been shown to be affected by the moisture content of the seed and the storage temperature (Arndt, 1946). More specifically, when the moisture content of infected cotton seeds ranged between 8% and 16%, the pest survived up to 17 months (the maximum period studied), but only when the seeds were stored at 1°C (Arndt, 1946). The pathogen also survives in infected cotton plant residues (EPPO, online), on which perithecia with ascospores of the teleomorph (G. gossypii) are produced (Watkins, 1981; Hillocks, 1992). Therefore, infected seed and crop residues provide the initial inoculum for infection of cotton crops (Hillocks, 1992). Like other Glomerella species (Kaiser and Lukezic, 1966), in the presence of water (rain, irrigation) or high humidity, ascospores are forcibly ejected from perithecia and are disseminated by air currents to infect susceptible hosts. The optimum conditions for infection are high humidity and 25°C. Infection is greatly reduced at temperatures below 20°C and does not occur at 36°C (Arndt, 1944). Davis et al. (1981) reported that the disease on cotton seedlings is severe at temperatures of 20-26°C. Ling (1944) showed that a prolonged dry period with an average humidity lower than 70% after the emergence of cotton seedlings resulted in a low percentage of infection. In the USA, seed infection rates were high when frequent rainfall occurred after boll-split (Arndt, 1956). Nevertheless, according to Leakey and Perry (1966), in the presence of wounds (mechanical or insect feeding), the fungus causes an extensive rot of the boll wall and lint, irrespective of the humidity level. Usually, only the conidial stage of the pathogen (C. gossypii) is present on cotton plants during the growing period (EPPO, online). Conidia, produced in acervuli in a mucilaginous mass and dispersed mainly by rain, wind-driven rain and insects (e.g. Dysdercus spp.), are responsible for secondary infections of cotton plants (Cauquil, 1960; Davis, 1981). Converse (1919), Edgerton (1912), Weindling et al.
(1941) and Cauquil (1960) showed that the pest is also able to survive a considerable length of time as a saprophyte on dead or apparently healthy stems and leaves of cotton without causing symptoms. During its saprophytic life, the fungus has many opportunities to contaminate the seeds while still in open bolls through rain water, and later during the ginning process. Weindling et al. (1941) further demonstrated that C. gossypii conidia could contaminate healthy seeds during the ginning process when the seeds were mixed with infected plant debris. This was thought to account for the considerable amount of inoculum on seed obtained from fields in the southern USA in which very little or no anthracnose symptoms were apparent on bolls during the growing period. Colletotrichum gossypii var. cephalosporioides differs from C. gossypii in virulence, aggressiveness, morphology, growth on various synthetic media and ability to grow at less than 30°C (Follin and Mangano, 1983). High relative humidity (100%) and temperatures between 21°C and 25°C for at least 8-10 h are required for infection of cotton plants by C. gossypii var. cephalosporioides, and no infection occurs at 32°C. According to Do Nascimento et al. (2006), infection of cotton plants by C. gossypii var. cephalosporioides is favoured by high rainfall and temperatures between 25°C and 30°C. Nevertheless, an rDNA comparison study showed that C. gossypii and C. gossypii var. cephalosporioides are identical, with 99.5% homology, which does not justify treating them as distinct species (Bailey et al., 1996). Both C. gossypii and its variant belong to the C. gloeosporioides species complex (Bailey et al., 1996; Silva-Mann et al., 2005). Based on the above, the Panel decided to perform the pest categorisation at the species level of C. gossypii.

Detection and identification of the pest

Colletotrichum gossypii can be detected and identified based on host association, symptomatology, and the cultural/morphological characteristics of its colonies and fructifications on agar media. Nevertheless, molecular methods are necessary to confirm identifications based on morphology. A rapid and reliable molecular method based on the β-tubulin gene is available for the identification of C. gossypii in culture and its differentiation from other related Colletotrichum species belonging to the C. gloeosporioides species complex (Nawaz et al., 2018). A seed testing method is also available for the detection of C. gossypii in cotton seeds (EPPO, online).

Symptoms

Anthracnose caused by C. gossypii affects all parts of cotton plants at all growth stages, but symptoms are most serious on seedlings and bolls (Davis, 1981; Hillocks, 1992; EPPO, online). In young plants, which are more susceptible than mature plants, the pathogen causes spots on the cotyledons and a reddish-brown cortical rot at the base of the hypocotyl, resulting in girdling, yellowing of the leaves, post-emergence damping-off and soreshin (Arndt, 1944; Cognee, 1960; Davis, 1981; Hillocks, 1992). Lesions may also develop on the stems and leaves of mature plants, sometimes producing a scald-like effect (Cai et al., 2009; EPPO, online). If infection is severe, large areas of leaf tissue around the main veins become necrotic (Hillocks, 1992).
The initial symptoms on bolls usually occur near the tip, often due to infection during flowering, as small, round, water-soaked spots on the capsule, which rapidly enlarge, sometimes covering one-fourth to one-half of the boll surface, become sunken and finally develop reddish borders with pink centres (Davis, 1981). Under dry weather conditions, lesions may appear greyish in colour. If weather conditions favour the development of the pathogen, acervuli are formed on the diseased areas, which later may be covered with a pink, pasty conidial mass (Davis, 1981; Hillocks, 1992). Severely infected bolls become mummified (darkened and hardened) and never open. As soon as C. gossypii enters the boll, it spreads rapidly through the lint and seed (Davis, 1981). Lint from diseased bolls is frequently tinted pink and of inferior quality (EPPO, online). The fungus infects the seeds internally and remains entirely latent until the seeds are planted (Watkins, 1981; Bailey et al., 1992). Studies in Brazil showed that the fungus penetrated the embryo in 0.4-2% of the seeds (Lima et al., 1985). Both lint and seeds are often destroyed, even with little external evidence of the disease (Davis, 1981). If the boll matures before the lint and seed are completely destroyed, it usually opens (Davis, 1981). Seedlings emerging from infected seeds wilt and die (Davis, 1981; EPPO, online). Symptoms caused by anthracnose on cotton seedlings and mature plants resemble those caused by other pathogens, such as Rhizoctonia solani, Xanthomonas axonopodis pv. malvacearum, Fusarium spp., Nematospora spp., Alternaria spp., Nigrospora spp., Ascochyta gossypii, Diplodia gossypina, etc.

The first symptoms of ramulosis caused by C. gossypii var. cephalosporioides appear on leaves, petioles and branches as nearly circular necrotic spots (Paiva et al., 2001; Monteiro et al., 2009), which enlarge with time, resulting in crispy leaves and sporulating star-shaped lesions (Mathieson and Mangano, 1985; Araújo et al., 2003; Saran, 2009). Infected leaf tissue drops from the plant, causing an irregular shot-hole varying from 1 to 10 mm in diameter (Monteiro et al., 2009). During advanced stages of the disease, the fungus infects the apical meristem, causing its necrosis and, subsequently, the extensive sprouting of lateral buds, resulting in witches' broom-type symptoms (Do Nascimento et al., 2006). In young plants (less than 60 days old), the pathogen infects the new branches emerging after the necrosis of the apical meristem (Juliatti and Algodão, 1997). Severely infected plants appear stunted, with numerous branches and short internodes (Mathieson and Mangano, 1985; Araújo et al., 2003; Saran, 2009). Infected bolls remain green for a long time without opening (Watkins, 1981). Seeds also become infected by the pathogen and often germinate abnormally inside the unopened bolls (Watkins, 1981; Lima et al., 1985; Lima and Chaves, 1992). Pre-bloom infection can lead to flower abortion and, in extreme cases, plants become totally unproductive (Juliatti and Algodão, 1997). According to the studies of Monteiro et al. (2009) conducted under controlled environmental conditions, the incubation period of ramulosis, which varied according to temperature and the length of the wetness duration, was approximately 15 days at 15°C, 11 days at 20°C, 10 days at 25°C and 9 days at 30°C.

Colonies formed on agar media are greyish-white to dark brown, usually with reduced aerial mycelium, and often brownish on the reverse (Mordue, 1971; Sutton, 1992).
Official statement that: (a) the seed has been acid-delinted, and (b) no symptoms of Glomerella gossypii Edgerton have been observed at the place of production since the beginning of the last complete cycle of vegetation, and that a representative sample has been tested and found free from Glomerella gossypii Edgerton in those tests.

Official statement that the seed has been acid-delinted. EL, E (Andalucia, Catalonia, Extremadura, Murcia, Valencia)

Annex V. Plants, plant products and other objects which must be subject to a plant health inspection (at the place of production if originating in the Community, before being moved within the Community; in the country of origin or the consignor country, if originating outside the Community) before being permitted to enter the Community. Part A. Plants, plant products and other objects originating in the Community. Section II. Plants, plant products and other objects which are potential carriers of harmful organisms of relevance for certain protected zones, and which must be accompanied by a plant passport valid for the appropriate zone when introduced into or moved within that zone. Without prejudice to the plants, plant products and other objects listed in Part I.

Entry, establishment and spread in the EU

Since the pest is regulated only in the EU protected zone of Greece, the pest's potential for entry, establishment and spread was evaluated for the protected zone instead of the whole EU territory.

Host range

Colletotrichum gossypii affects species of the genus Gossypium (cotton, family Malvaceae) (EPPO, online). The two main species of Gossypium cultivated for cotton production, Gossypium hirsutum and G. barbadense (accounting for about 95% and 3% of world production, respectively), are both susceptible to the pest (Anonymous, 2007a; EPPO, online). Gossypium hirsutum is the only species grown in the protected zone of Greece (Avgoulas et al., 2005). There are no reports of the pest affecting other genera of the family Malvaceae (Bailey et al., 1996).

Entry

In the absence of the current EU legislation, the PLH Panel identified the following pathways for the entry of C. gossypii from infested third countries or EU infested areas into the protected zone of Greece:
• cotton seeds;
• cotton fruits (bolls); and
• unginned cotton.

In addition, the pest could potentially enter the protected zone of Greece by natural means (see Section 3.4.4) from EU infested areas. Of the above-mentioned pathways, cotton seed is a major pathway of entry. The cotton fruits (bolls) and unginned cotton pathways are of minor importance because the end-use of these plant parts (clothing, home furnishings, medical supplies, industrial thread, tarpaulins, oil for human consumption, oilseed cake for animal feed) makes the transfer of the pathogen from the pathway to cotton crops grown in the EU protected zone unlikely. Uncertainty exists on whether the pest could enter the protected zone of Greece by natural means from EU infested areas (i.e. Bulgaria, Romania) because there is a lack of information on the maximum distance the pest can travel by air currents and/or insects. Therefore, the cotton fruits (bolls) and unginned cotton pathways are not considered further in this pest categorisation.
The current EU legislation prohibits the import into the protected zone of Greece of cotton seeds, except for acid-delinted seeds that originate in a pest-free place of production or production site and have been found free of the pathogen in appropriate testing in the country of origin. According to Eurostat (online), during the period 2011-2016, Greece imported 89% of the total volume of cotton seeds imported into the EU28 (Table 5). Of those imports, 1% in 2011 originated from infested third countries. In 2011, 2013 and 2014, Greece imported 201, 96 and 53 tonnes of cotton seeds, respectively, originating in an infested EU MS, i.e. Bulgaria (Table 6).

Part B. Plants, plant products and other objects originating in territories other than those referred to in Part A. Section II. Plants, plant products and other objects which are potential carriers of harmful organisms of relevance for certain protected zones. Without prejudice to the plants, plant products and other objects listed in I.

Is the pest able to enter the EU protected zones? If yes, identify and list the pathways! Yes. Under the current EU legislation, the pest could potentially enter the EU protected zone (Greece) through the seed pathway. The entry of the pest through the cotton fruits (bolls) and unginned cotton pathways is unlikely because of the end-use of these plant parts. The pest could also potentially enter the protected zone of Greece via natural spread from EU infested areas.

There is no record of interception of C. gossypii on cotton in the Europhyt database (online; search performed on 10 March 2018).

EU distribution of main host plants

Cotton is grown in Greece, Spain and, to a lesser extent, Bulgaria (Table 7; source: Eurostat, data extracted on 26/3/2018). Based on FAOstat (data extracted on 20/4/2018), two tonnes of cotton were produced in Romania in 2014. However, no data were found on the area grown with cotton in Romania. According to ISTAT (data extracted on 28/3/2018), an area of between 0 and 2 ha/year has been grown with cotton in Italy during the last 10 years.

Climatic conditions affecting establishment

Colletotrichum gossypii is known to occur in two EU MSs, Bulgaria and Romania (Table 3), which are characterised by a humid continental climate, specifically the Dfa (cold, without dry season, hot summer) and Dfb (cold, without dry season, warm summer) Köppen-Geiger climate types (Peel et al., 2007) (Figure 2). The same climate types occur in the northern part of the protected zone of Greece (Macedonia, Thrace; Figure 2), where cotton is also grown (Tsaliki, 2005; Anonymous, 2007b, 2018). The only area in the rest of the world where these climate types are present in association with C. gossypii is the north of Tennessee (USA), which has a Dfa climate type (Figure 3). In the other cotton-growing areas of Greece (central Greece: Thessaly, Sterea Ellada), cotton is grown under a Mediterranean climate (specifically Csa: temperate, dry summer, hot summer). C. gossypii is not known to occur in areas characterised by the Csa climate type.

*: During the last 10 years, cotton has also been grown in Italy on an area of up to 2 ha/year (ISTAT, online; data extracted on 28/4/2018).

Is the pest able to become established in the EU protected zones? Yes. Colletotrichum gossypii is already established in the EU territory (Bulgaria and Romania), and the biotic (host availability) and abiotic (climate suitability) factors suggest that it could potentially establish in the protected zone of Greece.
Therefore, the abiotic (climate suitability) factors suggest that the pest could potentially establish in the northern part of the EU protected zone (Greece). There is no evidence that the climatic conditions occurring in the other cotton-growing areas of the protected zone of Greece are suitable for the establishment of C. gossypii. However, uncertainty exists on whether irrigation, commonly applied to cotton crops in Greece, would favour the establishment of C. gossypii in those areas, too.

Once established in the EU protected zone, C. gossypii could spread by both natural and human-assisted means.

Spread by natural means. No specific information exists in the available literature on the spread potential of the pathogen by air currents and/or water splash. In general, ascospores are airborne and their discharge from perithecia is triggered by high humidity or rainfall (Kaiser and Lukezic, 1966). Based on the Lagrangian stochastic model of Savage et al. (2012), the majority of fungal spores with the characteristics of Glomerella ascospores can travel up to a distance of 0.5-1 km and only < 1% of them can travel up to 10 km. Ascospores of another Glomerella species affecting apple, i.e. G. cingulata, have been shown to travel distances > 60 m within apple orchards (Sutton and Shane, 1983). Conidia generated in water-soluble mucilage, such as those of Colletotrichum species, are dispersed over short distances by water run-off and splashed droplets (rain, overhead irrigation) (Nicholson and Moraes, 1980; Fitt et al., 1989; Rajasab and Chawda, 2009). It has also been shown that insects can carry spores of C. gossypii passively on their bodies, thus contributing to its spread (Leakey and Perry, 1966). Based on the above, uncertainty exists about the maximum distance the pest can travel by air currents and insects.

Figure 2. Köppen-Geiger climate type world map from Peel et al. (2007).

Spread by human-assisted means. The pathogen can spread over long distances via the movement of contaminated or infected cotton seeds (Monteiro et al., 2009). The transmission rate from seeds has been found to be variable and dependent on several factors, such as environmental temperature, soil moisture, infection level and inoculum location in seeds (Teixeira et al., 1997).

Impacts

Although cotton anthracnose has become less important as a seedling disease since the general practice of seed treatment with fungicides (EPPO, online), it is still prevalent on seedlings and bolls in the more humid parts of the eastern USA (Simpson et al., 1973). In north-west Côte d'Ivoire (Boundiali sector), C. gossypii has been shown, either alone or in combination with insect larvae, to reduce boll production by about 25%, with 15-18% of bolls being mummified (EPPO, online). The disease also causes reduced length and thickness of fibres and abnormal seed weight (Weir et al., 2012), whereas infected seeds show a reduced rate of germination (Leakey, 1962; Tanaka, 1995). In Senegal in the 1970s, rot caused by fungi, including C. gossypii, affected 2.7% of bolls, although, in severe cases, 40-60% losses of bolls have been reported. In India, anthracnose became serious in 1953 and, by 1959, it was the limiting factor in cotton production (EPPO, online). Ramulosis is the most important cotton disease in the Brazilian savanna (Do Nascimento et al., 2006; Moreno-Moran and Burbano-Figueroa, 2017). Without an effective fungicide spray programme, severe yield losses may occur (Cia and Fuzatto, 1999; Paiva et al., 2001; Silva-Mann et al., 2002).
Disease severity is high on plants less than 60 days old, because the new branches emerging after the death of the apical meristem also become infected (Cia, 1977; Kimati, 1980; Juliatti and Algodão, 1997). Depending upon the climatic conditions and cultivar susceptibility, yield losses can reach more than 85%, and individual farmers frequently report total crop losses (Cia, 1977; Carvalho et al., 1994; Do Nascimento et al., 2006). The Sinú Valley, the largest cotton-producing area of Colombia, is the region most severely affected by ramulosis (Oliveira et al., 2010). Without timely fungicide sprays, the disease can provoke total crop loss, especially for smallholders. Based on the above, it is expected that the introduction and spread of the pathogen in the EU protected zone (Greece) would cause yield and quality losses to cotton production.

Availability and limits of mitigation measures. Measures for preventing the entry of the pest into the EU protected zone include:
• sourcing cotton seeds from pest-free areas or pest-free places of production;
• importing only certified cotton seed;
• importing only acid-delinted and fungicide-dressed cotton seeds;
• a phytosanitary certificate for the import into the protected zone of cotton seeds originating in infested third countries;
• a phytosanitary passport for the movement of cotton seeds from infested EU areas to the protected zone of Greece;
• laboratory testing of cotton seeds both at the place of origin and at the entry point of the protected zone.

Measures for preventing the establishment of the pest in the EU protected zone:
• surveillance for the early detection of the pathogen;
• use of sanitation measures (e.g. removal of infected plants);
• application of fungicide sprays to the crops.

Would the pest's introduction have an economic or environmental impact on the EU protected zones?

Yes, the introduction of the pest would potentially cause yield and quality losses to cotton crops grown in the EU protected zone of Greece.

Are there measures available to prevent the entry into, establishment within or spread of the pest within the EU protected zones such that the risk becomes mitigated?

Yes, the likelihood of pest entry into the EU protected zone of Greece can be mitigated if cotton seeds are sourced from pest-free areas or pest-free places of production and are acid-delinted and fungicide-dressed, as well as laboratory tested for the detection of C. gossypii both at the place of origin and at the entry point of the protected zone. In the infested areas, agricultural practices combined with sanitation and chemical control measures are applied for disease management.

Measures for preventing the spread of the pest in the EU protected zone:
• preventing the movement within the EU protected zone of cotton seeds sourced from infested areas/places of production;
• preventing the movement within the EU protected zone of cotton seeds, except for acid-delinted seeds that are fungicide-dressed and laboratory tested;
• a phytosanitary passport for the movement of cotton seeds within the protected zone.

Phytosanitary measures. In the current EU legislation, the following phytosanitary measures are relevant for the EU protected zone of Greece:
• pest-free place of production;
• seed treatment (i.e. acid-delinted seed);
• laboratory testing;
• plant health inspection;
• phytosanitary certificate;
• phytosanitary passport.

These measures can mitigate the risk of entry of C. gossypii into the protected zone of Greece, but they cannot completely exclude the pathogen being present on cotton seeds originating in infested countries because, during its saprophytic phase, the pathogen may contaminate the seed of cotton plants that show little or no disease symptoms during the growing season (latently infected plants) (see Section 3.1.2).
Using acid to delint seeds is also not fully effective as a phytosanitary measure (see Section 3.6.1.1).

3.6.1.1. Biological or technical factors limiting the feasibility and effectiveness of measures to prevent the entry, establishment and spread of the pest

The following biological and technical factors could potentially limit the feasibility and effectiveness of measures to prevent the entry into, establishment in and spread of C. gossypii within the EU protected zone (Greece):
• The similarity of the symptoms caused by C. gossypii on cotton seedlings, leaves, stems, bolls and lint to those caused by other cotton pathogens (e.g. Rhizoctonia solani, Fusarium spp., Ascochyta gossypii, Nematospora spp., Xanthomonas axonopodis pv. malvacearum, etc.), together with the absence of symptoms on infected cotton seeds, makes visual inspection for the detection of the pathogen difficult (see Section 3.1.3).
• The acid-delinting procedure may eliminate the inoculum present as a contaminant on the surface of the seed, but not that located inside the seed.
• The fungicide dressing usually applied to cotton seed for sowing may reduce the effectiveness of laboratory testing for the detection of the pathogen.

Pest control methods. In the infested areas, use of high-quality pest-free seed, treating seed with fungicides or acid, application of fungicide sprays during the growing season and crop rotation are the most important measures for the management of anthracnose (Davis, 1981; Hillocks, 1992). Cultural practices, such as destruction of crop residues and fall ploughing, are also used to reduce inoculum sources in the field (Davis, 1981). Application of pesticides for the control of insects also reduces infection of bolls by microorganisms, including C. gossypii (Pinckard et al., 1981). In Brazil, management of ramulosis is based on crop rotation and sanitation to reduce inoculum sources, use of cultivars with some level of resistance, and fungicide sprays (Miranda and Suassuna, 2004). Fungicide sprays are required for disease management because most producers plant susceptible cultivars due to market demand (Cia and Fuzatto, 1999). Growers start applying fungicides for the control of ramulosis when disease severity reaches 2%. This threshold is usually reached within 3 weeks after plant emergence. After that, a calendar-based schedule is followed, in which fungicides are applied 4-5 times per crop cycle at intervals of 4 weeks. If an increased severity level is detected, the interval between applications is reduced to 3 weeks, and sometimes 2 weeks. In some cases, as many as eight fungicide sprays are applied during the growing season. Cultivars may show some tolerance to C. gossypii infection and are often used against the more aggressive variant of the pest (C. gossypii var. cephalosporioides) (Carvalho et al., 1984). Currently, in the EU protected zone of Greece, the only Gossypium species cultivated for cotton production, i.e. G. hirsutum (Avgoulas et al., 2005), is susceptible to infection by the pest (EPPO, online), and there are no fungicides registered for the control of other diseases on cotton crops (http://wwww.minagric.gr/syspest/SYSPEST_CROPS_skeyasma.aspx). Therefore, it is expected that the agricultural practices and chemical control methods currently applied to cotton crops in the protected zone of Greece would not prevent the establishment of C. gossypii.
Uncertainty

1) Entry. Uncertainty exists on whether the pest could enter the protected zone of Greece by natural means from infested EU areas, because there is a lack of information on the maximum distance the pest can travel by air currents and/or insects (see Section 3.4.4).
2) Establishment. Uncertainty exists on whether the irrigation applied to cotton crops could make the microclimate in the cotton-growing areas of central Greece more favourable for the establishment of the pathogen (see Section 3.4.3.2).
3) Spread. Uncertainty exists on the maximum distance ascospores and conidia of C. gossypii can be disseminated by natural means, because of the lack of information in the available literature (see Section 3.4.4).
4) Eradication. It is unknown whether C. gossypii has ever been eradicated anywhere. Uncertainty exists on whether the spread of the pest by natural means would prevent eradication in case of introduction of C. gossypii in a limited area of the protected zone.

Conclusions

Colletotrichum gossypii meets the criteria assessed by EFSA for consideration as a potential quarantine pest for the EU protected zone of Greece (Table 8). The criteria for considering C. gossypii as a potential regulated non-quarantine pest for the EU are also met, since cotton seeds are the main means of spread.

Table 8. The Panel's conclusions against the pest categorisation criteria, with key uncertainties:
• Identity of the pest (Section 3.1): the identity of the pest (Colletotrichum gossypii) is clearly defined, and there are reliable methods for its detection and identification. Key uncertainties: none.
• Absence/presence of the pest in the EU territory (Section 3.2): the pest is present in Bulgaria and Romania and is not known to occur in the protected zone of Greece. Key uncertainties: none.
• Pest potential for entry, establishment and spread (Section 3.4): the pest could potentially enter into, become established in and spread within the EU protected zone of Greece. Pathways of entry: cotton seeds originating in infested third countries and/or infested EU areas; the pest could also potentially enter the protected zone of Greece by natural spread (wind, insects) from infested EU areas. The pest could potentially spread in the EU protected zone through the movement of cotton seeds and by natural means; cotton seeds are a main means of spread. Key uncertainties: 1) it is not known whether the pest could potentially enter the protected zone by natural means from infested EU areas (Uncertainty 1); 2) there is uncertainty on whether the irrigation applied to cotton crops could make the microclimate in the cotton-growing areas of central Greece more favourable for the establishment of the pathogen (Uncertainty 2); 3) there is no information on the maximum distance ascospores and conidia of the pest can travel by natural means (Uncertainty 3).
• Potential for consequences in the EU territory (Section 3.5): the introduction of the pest into the protected zone of Greece would impact cotton yield and quality; the spread of the pest in the EU protected zone of Greece could potentially cause yield and quality losses as regards the intended use of cotton seeds. Key uncertainties: none.
• Available measures (Section 3.6): there are measures available to prevent the entry into, establishment in and spread of the pest within the EU protected zone. These include pest-free area, pest-free place of production, certified cotton seed for sowing, seed treatment with acid and fungicides, application of fungicide sprays to the crop, management of crop residues, crop rotation, etc. Nevertheless, the currently applied phytosanitary measures are not fully effective in preventing the entry of the pest into the protected zone, and there are no fully effective measures to prevent pest presence on cotton seeds. Eradication after introduction of the pest in a new area is considered difficult because of the existing natural means of spread, and there is no information about successful eradication of the pest anywhere. Key uncertainties: it is not known whether the spread of the pest by natural means would prevent eradication in case of introduction of C. gossypii in a limited area of the protected zone (Uncertainty 4).
• Conclusion on pest categorisation (Section 4): Colletotrichum gossypii meets all the criteria assessed by EFSA for consideration as a potential quarantine pest for the EU protected zone of Greece. The criteria for considering C. gossypii as a potential regulated non-quarantine pest for the EU are also met, since cotton seeds are the main means of spread.
Young children's voices in an unlocked Sweden during the COVID-19 pandemic

Aims: During the COVID-19 pandemic, Sweden was one of the few countries that rejected lockdowns in favour of recommendations for restrictions, including careful hand hygiene and social distancing. Preschools and primary schools remained open. Several studies have shown negative impacts of the pandemic on children, particularly high levels of anxiety. The study aim was to explore how Swedish school-aged children, aged 6-14 years, experienced the COVID-19 pandemic and their perceived anxiety.

Methods: In total, 774 children aged 6-14 years and their guardians answered an online questionnaire containing 24 questions, along with two instruments measuring anxiety: the Children's Anxiety Questionnaire and the Numerical Rating Scale. A convergent parallel mixed-methods design was used for analysing the quantitative and qualitative data. Each data source was first analysed separately, followed by a merged interpretative analysis.

Results: The results showed generally low levels of anxiety, with no significant sex differences. Children who refrained from normal social activities or group activities (n=377) had significantly higher levels of anxiety. Most of the children were able to appreciate the bright side of life, despite the social distancing and refraining from activities, which prevented them from meeting and hugging their loved ones.

Conclusions: These Swedish children generally experienced low levels of anxiety, except those who refrained from social activities. Life was nonetheless mostly experienced as normal, largely because schools remained open. Keeping life as normal as possible could be one important factor in preventing higher anxiety and depression levels in children during a pandemic.

Background

Sweden was one of the few countries that did not have a lockdown during the COVID-19 pandemic; recommendations for thorough hand hygiene and restrictions such as social distancing were initiated instead. This meant limiting close contact with people you do not live with, both indoors and outdoors [1]. Preschools and primary schools also remained open throughout the pandemic, to prevent adverse effects such as loss of learning opportunities and a negative impact on children's mental and physical health. Children in general were not found to become severely ill with a COVID-19 infection [1]. When the World Health Organization (WHO) classified the outbreak of the coronavirus disease COVID-19 as a pandemic on 11 March 2020, more than 200 countries decided to lock down large parts of their society in an attempt to curb the spread of the infection. Many researchers have investigated how the pandemic has affected different adult populations, and an earlier review reported increased levels of post-traumatic stress syndrome and depression following infection with the COVID-19 virus [2]. Historically, children in all societies have been severely affected by epidemic diseases. By the middle of the 20th century, polio epidemics were widespread around the world, causing early death or lifelong paralysis, but today the disease is on the verge of extinction thanks to vaccinations [3]. In 2009, swine flu (the influenza A virus H1N1) spread around the world, particularly affecting children and young people, and schools were then closed in many countries to reduce the spread of infection [4].
Beyond purely medical research, there is a lack of studies highlighting children's perspectives on these past epidemics. Thus, it is important to evaluate how children experience the COVID-19 pandemic. In a recently published study from Brazil, participating children expressed being more worried during the ongoing pandemic than under normal conditions [5]. In a large Swedish study, in which 1700 adolescents aged 15-19 years responded to an online survey, the adolescents reported being compliant with rules and regulations, but at the cost of their psychosocial functioning. They also experienced poorer mental health than before the pandemic [6]. According to the WHO, children living in socioeconomically disadvantaged areas have been reported to be particularly exposed to lockdowns due to the COVID-19 pandemic [5,7-9]. Since there was no lockdown in Sweden but only certain restrictions, it is of interest to see how this has affected children aged 6-14 years.

Aim

The study aim was to explore how a convenience sample of Swedish schoolchildren, aged 6-14 years, experienced the COVID-19 pandemic and their perceived anxiety.

Study design

A convergent parallel mixed-methods design [10] was chosen, in which quantitative and qualitative data were collected at the same time but analysed separately and then merged, leading to a combined result.

The quantitative research questions:
• Has the experience of refraining from social activities during the COVID-19 pandemic affected children's perceived anxiety?
• Have sociodemographic factors influenced the relation between refraining from social activities and children's anxiety during the COVID-19 pandemic?

The qualitative research question:
• What are children's thoughts about their situation during the COVID-19 pandemic?

Participants

In total, 774 children participated in the study (Table I). The inclusion criterion was children aged 6-14 years, and they participated together with their guardians. The survey was sent to the guardians, and both the guardian and the child gave written consent to participate.

Data collection

An online survey was distributed between 7 July and 8 November 2020 using the web platform esMakerNX3, version 3.0 (Entergate AB, Halmstad, Sweden). A convenience sampling method through snowballing was used, in which the web survey was distributed by the research group and their social network contacts and posted on social media, primarily through Facebook and Instagram. The survey was also sent to primary schools within the researchers' network, mainly across three counties of Sweden, for help with the distribution. It took approximately 5-10 min to fill out the survey. The guardians answered questions 1-18, and the children answered questions 19-24 themselves, or with the guardian's help if needed.

The questionnaire

The questionnaire, which consists of 24 items, is based on a questionnaire developed and used in a Brazilian study investigating the prevalence of anxiety among children during the COVID-19 pandemic [5]. The Swedish-adapted version of the questionnaire was first tested in a pilot study with 33 participants, of whom 20 were aged 6-14 years and 13 were aged 15-19 years. After the pilot study, some adjustments were made to further clarify the questions. The pilot test data were not included in the main data collection.
The questionnaire included demographic questions on residency (housing and community size), household size, education level and employment status of the guardian, the age and sex of the child, and whether the child had chronic diseases or disabilities. In addition, there were questions related to the pandemic: whether anyone in the family had had COVID-19, whether the guardian's monthly income had decreased during the pandemic, whether the child had attended school or had distance education, and to what extent the child refrained from social activities. One open question was included, 'Is there something you would like to add?', allowing the children to express their own thoughts in relation to the COVID-19 pandemic.

To measure the level of anxiety, two visual scales were used. The first was the Children's Anxiety Questionnaire (CAQ) [11], consisting of four pictorial items showing facial expressions, each representing a different type of emotion. CAQ scores range from 4 to 12 points, with 4 points signifying low anxiety and 12 points the highest level of anxiety; CAQ scores above 9 points are classified as intense anxiety [5]. The second was the Numerical Rating Scale (NRS), an 11-point scale indicating current anxiety; the NRS score ranges from 0 to 10, where 0='calm' and 10='very anxious' [12,13].

Quantitative analyses

Statistical analyses were conducted using the Statistical Package for the Social Sciences (SPSS), versions 25.0 and 27.0 for Windows (IBM Corp., Armonk, New York, USA). The prevalence of intense anxiety was measured by the CAQ and NRS, and the differences between categories were tested using Pearson's chi-squared test. The differences between the children reporting intense anxiety and those with lower anxiety levels on the pandemic-related variables (contextual and individual factors) were tested with the independent samples t-test. Correlations between the independent variables were tested with Spearman's rank correlation coefficient; no correlations exceeded 0.13, so multicollinearity among the independent variables was ruled out. The residuals of the dependent variable (CAQ) were treated as normally distributed because of the rather large sample size and the levels of skewness (0.99) and kurtosis (0.97) [14]. A general linear model (GLM) was used to analyse the association between refraining from social activities and perceived anxiety. Perceived anxiety (CAQ score) was used as the dependent variable in the GLM analyses. The independent variable Refrains from social activities had three response alternatives: 1 'totally', 2 'partly' and 3 'not at all', which were dichotomised into 'yes' for alternatives 1 and 2 and 'no' for alternative 3. The GLM crude model (Model 1) tested the association between the variable Refrains from social activities and perceived anxiety. Thereafter, in Models 2-4, the covariates sex, age, chronic diseases/disabilities and reduced income were added stepwise to the regression model. The significance level was set at p<0.05.
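To make the regression step above concrete, a minimal Python sketch is given below. It is illustrative only: the file and variable names are hypothetical, the original analysis was performed in SPSS, and, with normally distributed residuals, the GLM is fitted here as an ordinary least squares model.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names (the original analysis used SPSS).
df = pd.read_csv("survey_responses.csv")

# Dichotomise: 1 'totally' and 2 'partly' -> 1 (yes); 3 'not at all' -> 0 (no).
df["refrains_yes"] = (df["refrains_social_activities"] < 3).astype(int)

# Model 1 (crude): refraining from social activities vs CAQ score.
model1 = smf.ols("caq ~ refrains_yes", data=df).fit()

# Model 4 (fully adjusted): covariates added as in the stepwise Models 2-4.
model4 = smf.ols(
    "caq ~ refrains_yes + C(sex) + C(age_group)"
    " + C(chronic_disease) + C(reduced_income)",
    data=df,
).fit()

print(model1.params["refrains_yes"], model1.rsquared)   # crude beta and R^2
print(model4.params["refrains_yes"], model4.rsquared)   # adjusted beta and R^2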
Qualitative analyses

There were 326 answers to the open question 'Is there something you would like to add?'. Of these, 160 children answered 'no' and were not analysed further. Answers considered to originate from the guardian were excluded (n=15). This resulted in 151 answers, ranging from a few words to longer responses with several sentences. The answers were read several times and were subjected to an inductive content analysis according to Elo and Kyngäs [15], in which words or phrases sharing a common meaning (meaning units) are distilled into content-related categories.

Ethical aspects

Ethical approval was obtained from the Swedish Ethical Review Authority (ref. 2020-02547), and the participants received information about the study in a separate part of the online survey. The survey was anonymous.

Results

The total study population consisted of 774 children aged 6-14 years (mean age 9.5 years), with a higher proportion of girls than boys (52.5% vs 47.5%). The participating guardians were mainly mothers (83.4%). Most of the participants lived in cities with fewer than 500,000 inhabitants (88.6%), and a high proportion of the guardians had a university degree (77.9%). Most of the children had not had distance education during the pandemic (89.8%) (Table I).

Quantitative results

All 774 children were included in the quantitative analysis. The level of perceived anxiety was low, both when measured with the NRS spanning 0-10 (mean 2.4, standard deviation (SD) 2.3, median 2) and when measured with the CAQ scale spanning 4-12 (mean 5.8, SD 1.5, median 5). There were no significant differences in anxiety between boys and girls (Table II). The correlation between the NRS and the CAQ was 0.54 (p<0.001). The level of anxiety was significantly higher in the subgroup answering the open question (n=151) than in the 623 who gave no answer: CAQ 6.07 versus 5.68 (p=0.016) (Figure 1) and NRS 2.87 versus 2.3 (p=0.017). Only refraining from social activities and reduced income showed a statistically significant correlation with perceived anxiety measured with the CAQ (Table II). Accordingly, these factors, together with sex and age (6-9, 10-12 and 13-14 years), were included as covariates in the GLM model (Table III). The prevalence of children with intense anxiety (CAQ score >9 or NRS score >7) in the total study population was 2.5% (CAQ) and 2.7% (NRS) (Table II). Considering the factors included in the regression models (the child's sex, age, chronic diseases/disabilities and reduced parental income during the pandemic), the prevalence of intense anxiety measured by the CAQ was significantly higher among children who refrained from social activities than among children who did not (4.5% vs 0.5%, p=0.001). The pattern was almost the same for children reporting intense anxiety on the NRS (Table II). Children whose parents had a reduced income due to the pandemic showed a significantly higher prevalence of intense anxiety on the CAQ than children whose parents' income was not affected by the pandemic (6.6% vs 1.9%, p=0.004).

Association between social distancing and children's perceived anxiety. There was an association between refraining from social activities and children's level of anxiety measured by the CAQ (Model 1, Table III). Refraining from social activities explained 3.5% (R²) of the variance in the dependent variable perceived anxiety. Children who refrained from social activities showed a higher level of perceived anxiety than those who did not refrain from social activities during the pandemic. After adjusting for sex, age, chronic diseases/disabilities and reduced income, children who refrained from social activities still showed a significantly higher level of perceived anxiety (Models 2-4, Table III).
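A minimal sketch of the kind of prevalence comparison reported above. The cell counts are back-calculated from the reported percentages and group sizes (about 4.5% of the 377 children who refrained vs 0.5% of the remaining 397) and are illustrative only, not the study's raw data.

import numpy as np
from scipy.stats import chi2_contingency

# Illustrative 2x2 table: intense anxiety (CAQ > 9) by refraining from
# social activities; counts approximate the reported 4.5% vs 0.5%.
table = np.array([
    [17, 360],   # refrained:       intense / not intense
    [2, 395],    # did not refrain: intense / not intense
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")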
When the association between refraining from social activities and perceived anxiety was adjusted for all four covariates (Model 4), the beta coefficient decreased from -0.57 to -0.52 and the R-square increased from 0.035 to 0.060 (Table III). Thus, the full model, including the four background variables, explained only 6% of the variation in children's perceived anxiety. The association between refraining from social activities and children's perceived anxiety was therefore robust and only marginally influenced by other pandemic-related factors.

Qualitative results

Of the 774 participating children, 151 children (89 girls and 62 boys) answered the open question (Table I). The qualitative analysis resulted in four categories: seeing the bright side of life, worrying about others and themselves, missing their loved ones, and feeling limited in their usual activities. Despite the pandemic situation, the children emphasised the bright side of life. The consequences of the pandemic affected their ordinary life, and the children were unable to attend their usual activities. The restrictions became obvious to them and interfered with their relationships.

Seeing the bright side of life. The children were able to appreciate the bright side of the situation brought about by the COVID-19 pandemic. The fact that school was still open was seen as positive: 'I have only been at school, not staying at home. I think it's good' (girl, 10 years). One aspect of the restrictions was that they meant more time with their parents, which the children enjoyed: 'It's been fun being home a lot with mom and dad. We have built a lot of Lego' (boy, 6 years). Furthermore, although they were unable to continue with their usual leisure activities, they appreciated having some close friends they were allowed to meet and play with: 'Good thing we picked out some friends I can hang out with' (boy, 11 years). The children also emphasised the fact that outdoor activities during school time and breaks increased, which they thought was good: 'Think it is good to be playing outdoors more at school' (girl, 9 years).

Worrying about others and themselves. The children expressed worries about COVID-19 and that it is lethal. However, the children mostly did not indicate that they were worried about their own health, although it was clear that the COVID-19 virus was experienced as frightening: 'I get scared and worried when I think of Corona' (girl, 13 years). Being afraid of the disease also came up, and they were thinking about how they as children could be affected by COVID-19: 'Children can be sick for a reeeeaaallllllyyyyyyy long time . . .' (boy, 11 years). More prominent was their worry for their loved ones, their parents and younger siblings, but especially their grandparents, who were seen as being vulnerable: 'What worries me is that my grandmother or grandfather will get it because they are at a higher risk in more ways than I am' (girl, 7 years). In addition, children expressed altruistic thoughts, feeling responsible for avoiding transmitting the virus to more vulnerable persons or loved ones: 'I'm careful because I don't want to infect someone who can infect someone old, because then the old one dies' (girl, 8 years). Furthermore, they were worried that the pandemic might continue for a long time and expressed concern about the future: 'I wonder when the corona will end. I'm worried it's going to last for years' (boy, 11 years).

Missing their loved ones. The pandemic situation meant the absence of people they cherished.
The children missed loved ones, often grandmothers and grandfathers, and they missed being able to hug them: 'I can't wait to hug my family' (girl, 7 years). They missed their usual social interactions with their loved ones and the things they normally did together, which often made them cry: 'I cry quite often because I miss them. Mom is crying too, and we are hugging. She says no one knows when it will be over. I hate Corona' (boy, 12 years).

Feeling limited in their usual activities. The children expressed not being able to do what they usually do and, because the restrictions limited their activities, they felt disconnected both socially and physically: 'Boring not to be able to do the same things as usual, as mom and dad want to be careful' (boy, 12 years). They highlighted the limits on outdoor activities such as sports, competitions and social life with friends, as well as expressing how they missed even their ordinary activities: 'I miss going to the swimming pool and being at my swimming school. It's sad that you can't play sports the same way' (girl, 12 years).

Integration of qualitative and quantitative results

The integration of the qualitative and quantitative results is shown in Figure 2. Generally, low levels of anxiety were found, and the children showed an ability to appreciate the bright side of life, even though they were worried about others and missed loved ones, especially their grandparents. Only a few of the children experienced high levels of anxiety, mainly children who were refraining from social activities and children whose parents had a reduced income due to the pandemic. The restrictions imposed due to the pandemic limited the children's usual activities and their social contact.

Discussion

The main finding of this study was that most children involved in this Swedish survey experienced low levels of anxiety and an ability to appreciate the bright side of life during the pandemic. At the same time, they felt worried about others and missed loved ones. The year 2020 was a year like no other for many children around the world, and lockdowns and social distancing have been highlighted as important measures to limit the spread of COVID-19 [14]. However, different measures were taken in Sweden compared with many other countries, and the degree of lockdown was lower [16]. For young children, life did not change very much; rather, their everyday contact with their parents increased and, in most cases, they could continue their outdoor activities with their friends. Anxiety is, according to Berde and Wolfe [17], defined as a subjective sense of unease and dread; because of the complexity of measuring anxiety in children, two different visual scales were chosen to capture the children's emotions. The present study showed that children generally reported low levels of anxiety, with no differences between boys and girls. However, the children who refrained from social activities showed a somewhat higher level of anxiety than those who did not refrain. Teens have shown more mental health problems than younger children during the pandemic, and the prevalence of depression and anxiety symptoms has been higher in girls than in boys [18-20]. The effect of school closures as an important measure to minimise the spread of COVID-19 has scarcely been evaluated. However, studies from the severe acute respiratory syndrome (SARS) outbreak in China and Singapore indicate that school closures did not contribute to the control of the epidemic [21].
Despite this, most countries chose to close schools, even though it is known that lockdowns can have a lifelong impact on children's health [22]. Unlike in many other countries, schools for children aged up to 16 years remained open in Sweden [23]. Children were able to see friends outdoors, which made some aspects of life more normal or, for some children, even better than normal. With the exception of primary school, social distancing was generally recommended in Sweden. In our study, refraining from social activities, together with the covariates, explained only 6% of the variance in the level of experienced anxiety, which can be seen in comparison with the study from Brazil [5], where the children experienced more social distancing and eight times the prevalence of anxiety found in our study. Children in Sweden have been able to go to school and to meet small groups of friends outdoors, but there could still be an increased risk that children might experience high rates of depression and anxiety. It is therefore necessary for school health staff to increase awareness about anxiety and depression in school-aged children. It is also important to initiate strategies for the early prevention of mental illness, in order not only to limit but also to highlight the risk, especially among children who are more isolated due to the pandemic [24]. In our study, the children expressed worries arising from the pandemic situation; a few worried about themselves, but mostly they worried about elderly relatives. In order to reduce children's concerns, it is important that adults, perhaps above all schoolteachers, talk to and educate children about COVID-19 in an honest and age-appropriate way, to help them deal with their feelings and fears [25,26]. Furthermore, if parents discuss the pandemic situation with their children, there is less risk that the child will experience depression, anxiety and stress [27]. In general, the children in this study reported low levels of anxiety, and most of them could probably cope with their situation. It has been shown that children's ability to adapt to restrictions and social distancing can affect their well-being. Another study, conducted in Chile during the COVID-19 pandemic and lockdown, found that higher family functioning reduced the likelihood of behavioural and peer problems [28]. The COVID-19 pandemic has had a special impact on the children who have lived through it [9]. To promote good health and well-being for children, the United Nations (UN) Sustainable Development Goals in the 2030 Agenda, together with the WHO and UNICEF, have called on the world's decision-makers to consider the best interests of children [29]. In order to fully understand how children in different countries are affected by the pandemic, there is therefore a need to conduct more long-term studies over the coming years.

Limitations

A few factors might have influenced the results of this study. The sample of respondents may have been skewed, since the survey was initially distributed from the research group's network; the demographics showed that most of the participants lived in a city with 100,000-499,999 inhabitants, and the guardian was in most cases working, had a university degree and had no reduction in income during the pandemic. For the youngest children, the guardian may have influenced the result. The use of convenience sampling meant that we could not systematically ensure that all regions of Sweden were included in the final sample.
These factors could be seen as a limitation, since the data do not reflect the diversity of the general population of Sweden. Another limitation might be that we only present data from the first phase of the pandemic and are unable to present data from the following phases.

Conclusion

Our study showed that most children in our Swedish sample experienced low levels of anxiety, which is in contrast with many other studies. The association between refraining from social activities and level of anxiety was robust and showed that children who refrained from social activities experienced more anxiety. All restrictions were implemented voluntarily in Sweden, where no lockdown was carried out, which might have had an impact on our results. Even though the children experienced the pandemic in some ways as an intrusion on their lives, they also reported positive aspects of it. Keeping life as normal as possible could be one important factor in preventing higher anxiety and depression levels in children during a pandemic.

Acknowledgements

All of the authors give their sincere thanks to the children and their guardians for participating in the online survey.

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship and/or publication of this article.
Surface electromyography in orthodontics – a literature review

Electromyography is the most objective and reliable technique for evaluating muscle function and efficiency by detecting the muscles' electrical potentials. It makes it possible to assess the extent and duration of muscle activity. The main aim of surface electromyography is to detect signals from many muscle fibers in the area of the detecting surface electrodes. These signals consist of a weighted summation of the spatial and temporal activity of many motor units. Hence, the analysis of the recordings is restricted to an assessment of general muscle activity, the cooperation of different muscles, and the variability of their activity over time. This study presents the main assumptions in the assessment of electrical muscle activity through the use of surface electromyography, along with its limitations and possibilities for further use in many areas of orthodontics. The main clinical uses of sEMG include the diagnostics and therapy of temporomandibular joint disorders, an assessment of the extent of stomatognathic system dysfunctions in subjects with malocclusion, and the monitoring of orthodontic therapies.

Background

Electromyography (EMG) is the most objective and reliable technique for evaluating muscle function and efficiency by detecting the muscles' electrical potentials [1]. It makes it possible to assess the extent and duration of muscle activity. One type is intramuscular electromyography, in which needle or fine-wire electrodes are inserted through the skin into the muscle tissue. This technique detects single motor unit potentials (motor unit action potentials, MUAPs). Another type is surface electromyography (sEMG), which uses surface electrodes and detects superimposed motor unit action potentials from many fibers, as opposed to the single ones recorded by the intramuscular type [2]. In recent years, the value of high-density surface electromyography (HD-sEMG) has been extensively demonstrated. In this newer technique, signals are detected by the use of specially designed surface electrodes. The sensitivity and selectivity of HD-sEMG are almost the same as those provided by the intramuscular type. It also allows for single motor unit analysis and gives information about muscle fiber conduction velocity (MFCV) [3,4]. The aim of this study was to present the main assumptions in the assessment of electrical muscle activity through the use of surface electromyography. Moreover, it is hoped that this paper will clarify the limitations and possibilities of the application of sEMG in orthodontics.

Main Assumptions in the Assessment of Electrical Muscle Activity

The main aim of surface EMG is to detect signals from many muscle fibers in the area of the detecting electrode. These signals consist of a weighted summation of the spatial and temporal activity of many MUs. Hence, the analysis of the recordings is restricted to general muscle activity, the cooperation of different muscles, and the variability of their activity over time [1]. This non-invasive and painless way of registering the results through the use of surface electrodes compensates for the aforementioned limitations, and its non-invasiveness is one of the most important advantages of this method [2].
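To make the "weighted summation" picture above concrete, the following minimal Python sketch simulates a surface signal as a sum of several motor unit spike trains, each convolved with a crude MUAP template. All shapes, firing rates and weights are hypothetical and purely illustrative, not a physiological model.

import numpy as np

rng = np.random.default_rng(0)
fs = 2000                                   # sampling rate, Hz
t = np.arange(fs) / fs                      # one second of signal

def muap_template(length=40):
    # A crude biphasic motor unit action potential shape (arbitrary units).
    x = np.linspace(-2.0, 2.0, length)
    return x * np.exp(-x ** 2)

# The surface signal as a weighted sum of several motor unit spike trains,
# each convolved with the MUAP template; the weights stand in for spatial
# attenuation between each unit and the electrode. All values hypothetical.
surface = np.zeros_like(t)
for weight, rate_hz in [(1.0, 12), (0.6, 15), (0.3, 20)]:
    spikes = (rng.random(t.size) < rate_hz / fs).astype(float)
    surface += weight * np.convolve(spikes, muap_template(), mode="same")

print(f"simulated epoch: {surface.size} samples, "
      f"RMS = {np.sqrt(np.mean(surface ** 2)):.4f}")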
Apart from the fact that these electrodes are not very selective, using them is limited to detecting signals only from the muscles located close to the skin; thus, the masseter and anterior temporalis muscles are the most frequently evaluated. To register the activity of the medial and posterior fibers of the temporalis muscles, removal of the hair is necessary, which patients normally dislike, and anatomical difficulties exist in the case of the pterygoid muscles. An additional disadvantage of surface electromyography is its sensitivity to imbalances in impedance [2]. Inconsistency in impedance is the main reason for the low accuracy and precision of EMG measurements, resulting in low reproducibility. This reproducibility is also questioned because of different inter-electrode distances and various electrode locations over the muscles. Thus, the inter-electrode distance should be fixed and templates should be used to eliminate variability in electrode placement [5-7].

The most common solution to the inconsistency in impedance, which affects the reliability of sEMG, is an adequate quantitative electromyographic analysis with normalization procedures. The normalization of sEMG results consists of converting them into quotient indices. Thus, electrical activity is presented as a percentage of another, highly reproducible activity of the same muscle recorded under the same conditions (recordings are performed using the same electrodes). Maximum voluntary contraction (MVC) seems to be a highly reproducible activity. The recorded electrical activities are therefore presented as a percentage of their activity in the MVC (%MVC). The main assumption of the normalization procedure is the constancy and good reproducibility of the forces generated during a maximum voluntary clench [8,9]. Another possibility for quantitative analysis is to relate the electrical activity of the muscles to reference values obtained from recordings performed in submaximal voluntary contraction (subMVC). A high correlation coefficient was found between the electrical activity of the muscles and the forces generated by them in the subMVC [8].

Castroflorio et al. [6] verified these assumptions by analyzing the electrical activity of the masseter and temporalis muscles in 3 experimental sessions separated by 1 week. The subjects performed voluntary contractions at 80% MVC, and occlusal forces were measured by compressive-force sensors. The reproducibility of these measurements was estimated at 71.9%. Moreover, an influence of the inter-electrode distance on the reproducibility of the EMG variables was observed. High data reproducibility of the sEMG indices, computed for 75% of the healthy subjects and estimated at a 6-month interval, was revealed by De Felicio et al. [10]. Visser et al. [11] also did not observe any statistically significant discrepancies between the activity and asymmetry indices of both the masseter and temporalis muscles estimated in healthy patients on 2 consecutive days (for the activity index p>0.20, for the asymmetry index p>0.10). The studies described above confirm that quantitative electromyographic analysis permits a reliable and accurate assessment of the electrical activity of muscles. Many of the most important issues related to the methodology of sEMG recordings have recently been unified by the multi-national consensus initiatives SENIAM (Surface ElectroMyoGraphy for the Non-Invasive Assessment of Muscles) and ISEK (the International Society of Electrophysiology and Kinesiology). Both of these organizations give recommendations for electrode placement, sEMG signal processing, and modelling.
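A minimal sketch of the %MVC normalization described above. The epochs are synthetic stand-ins for real recordings, and amplitude is summarized here with the root-mean-square, one common choice.

import numpy as np

def rms(x):
    # Root-mean-square amplitude of an sEMG epoch.
    x = np.asarray(x, dtype=float)
    return np.sqrt(np.mean(x ** 2))

# Hypothetical epochs recorded with the same electrodes in the same session
# (amplitudes in microvolts; synthetic noise stands in for real signals).
rng = np.random.default_rng(1)
mvc_epoch = rng.normal(0.0, 150.0, 2000)    # maximal voluntary clench
task_epoch = rng.normal(0.0, 60.0, 2000)    # test condition, e.g. chewing

# Normalised activity: task amplitude expressed as a percentage of MVC.
percent_mvc = 100.0 * rms(task_epoch) / rms(mvc_epoch)
print(f"activity = {percent_mvc:.1f} %MVC")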
The electrical activity of the masticatory muscles can be recorded and assessed during static tests (rest, maximal, or submaximal voluntary clenching) or during active tests (opening or closing the mouth, protrusion, retrusion, or lateral deviation of the mandible, mastication, swallowing, or speaking). From a biomechanical point of view, the most important are dynamic activities such as mastication and two contrasting static activities: rest and maximal isometric contraction of the muscles. Rest activity is usually recorded at the clinical rest position, the so-called postural position. No isoelectric line is observed in sEMG recordings in this position, which is determined by the freeway space (2-4 mm); the muscles remain active [12]. These conclusions are supported by Suvinien et al. [13], who observed minimum muscle activity at an average opening of 15.4 mm, while the postural position was determined by a 2-4 mm range of opening. Similar results, based on observations in a group of 40 subjects aged 22-34 years, were reported by Michelotti et al. [14]: the clinical rest position was determined at an average opening of 1.4 mm, and the lowest electrical activity was observed at a 7.7 mm average opening of the mouth.

The analysis of electrical muscle activity during isometric contraction, without any shortening of the muscle fibers, can be performed during 3 to 5 seconds of clenching the teeth with maximum force, usually in the intercuspal position, or during clenching of the teeth with controls (cotton rolls positioned on the mandibular second premolars and molars) (subMVC) [15,16]. Ferrario et al. [15], in a study in which 30 healthy subjects with Angle class I and overbite and overjet ranging from 2 to 5 mm were examined, observed larger standardized potentials in MVC in the temporalis anterior muscle (91.1 µV/µV%) than in the masseter muscle (85.45 µV/µV%). It was noted that the potentials were standardized against the MVC carried out with cotton rolls positioned on the posterior teeth; in this condition the temporalis anterior normally contracts with a lower intensity, which is why the standardized potential of this muscle resulted in a higher value.

It is also very important to assess the electrical activity of the muscles in the fatigue test: a continuous 10-second submaximal or maximal isometric muscle contraction. This makes it possible to evaluate the muscle fatigue that develops as the muscles attempt to sustain the required or exerted forces. The most objective sEMG parameters used to evaluate muscle fatigue are the median power frequency (MPF) and time-frequency distributions. Lodetti et al. [17], in a study of 29 healthy patients aged 20-35 years, observed higher values of the MPF in the masseter, temporalis anterior, and trapezius muscles recorded during maximal clenching of the teeth in the intercuspal position than during clenching of the teeth with controls (cotton rolls). The highest MPF value was achieved by the temporalis muscle (157 Hz), followed by the masseter (140 Hz), with the lowest in the trapezius muscles (68 Hz). This was explained by the greater forces exerted during clenching of the teeth by the masseter and temporalis muscles than by the trapezius muscles.
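A minimal sketch of how the median power frequency can be computed from an sEMG epoch, assuming a Welch estimate of the power spectrum. The synthetic noise epoch is a stand-in for a real recording, so the printed values are illustrative only; in a real fatigue test, the MPF of consecutive epochs typically drifts downwards.

import numpy as np
from scipy.signal import welch

def median_power_frequency(emg, fs):
    # Frequency below which half of the total spectral power lies.
    f, pxx = welch(emg, fs=fs, nperseg=256)
    cumulative = np.cumsum(pxx)
    return f[np.searchsorted(cumulative, cumulative[-1] / 2.0)]

# Hypothetical 10-second recording sampled at 1 kHz; white noise stands in
# for a sustained maximal clench.
fs = 1000
rng = np.random.default_rng(0)
emg = rng.normal(0.0, 50.0, 10 * fs)

# MPF computed over consecutive 1-second epochs of the contraction.
for i in range(10):
    epoch = emg[i * fs:(i + 1) * fs]
    print(f"epoch {i + 1}: MPF = {median_power_frequency(epoch, fs):.1f} Hz")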
Among the dynamic activities, mastication is the most frequently analyzed. It is defined as the most important physiologic activity of the stomatognathic system. Because of the high variability of the movements that contribute to this activity, an assessment of mastication is very difficult and includes parameters such as the duration of the masticatory act, the number of cycles, and its effectiveness, depending on the generated forces and the consistency of the food [18,19]. The mean number of masticatory cycles recorded in healthy subjects over 15 seconds was greater for a hard bolus such as paraffin wax (11.60) or an apple (11.60) than for a soft bolus such as a banana (10.60). The same was true for the mean duration of the masticatory act involving the same substances, estimated for paraffin (498.67 ms), apple (457.33 ms), and banana (436.33 ms) [18].

To widen the quantitative electromyographic analysis to include all static and dynamic activities, electromyographic indices such as the activity index (Ac), the symmetry index (percentage overlapping coefficient, POC), and the torque coefficient (TC) should be evaluated. They make it possible to assess the activity, coordination, and symmetry of homologous, synergistic, and antagonistic muscles. Published estimates of these indices confirm their high clinical value. Ferrario et al. [15] confirmed symmetric muscular patterns in MVC for the masseter and temporalis muscles, using recordings performed in healthy patients with no temporomandibular disorder. The POC in MVC for the masseter and temporalis anterior was estimated at 88.06% and 89.34%, respectively. Moreover, in MVC the TC was low (6.36%), which indicated no laterodeviating effect on the mandible caused by unbalanced right- or left-side temporalis and masseter muscles. This is in accordance with the results presented by De Felicio et al. [10]: in MVC, the POC of the masseter was 87.11% and the POC of the temporalis was 88.11%, while the TC was estimated at 8.79%. The symmetrical muscular pattern for the masseter, temporalis, and sternocleidomastoid muscles in subjects without temporomandibular disorders was also corroborated by EMG recordings performed in a group of 27 males and 35 females: the POC in MVC for all the muscles ranged from 80.7% to 87.9%, and there were no significant differences between the groups. Precise analysis of the electrical activity of the muscles through the use of surface electromyography performed in different tests simplifies the quantitative analysis of the stomatognathic system and permits an objective assessment of the muscles.
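A minimal sketch of the symmetry indices discussed above. The POC formula used here is one plausible formulation (overlapped area over mean area of the two standardized waves); published definitions vary in detail, and all waveforms are hypothetical.

import numpy as np

def asymmetry_index(right, left):
    # Percentage asymmetry between homologous muscles; 0 = perfect symmetry.
    return 100.0 * abs(right - left) / (right + left)

def poc(right_wave, left_wave):
    # One plausible formulation of the percentage overlapping coefficient:
    # overlapped area of the two standardized waves over their mean area,
    # so identical waves give 100% and non-overlapping waves give 0%.
    r = np.asarray(right_wave, dtype=float)
    l = np.asarray(left_wave, dtype=float)
    return 100.0 * np.sum(np.minimum(r, l)) / np.sum((r + l) / 2.0)

# Hypothetical standardized (%MVC) activity waves of the right/left masseter.
t = np.linspace(0.0, 1.0, 200)
right = 80.0 + 5.0 * np.sin(2.0 * np.pi * 3.0 * t)
left = 76.0 + 5.0 * np.sin(2.0 * np.pi * 3.0 * t + 0.3)

print(f"POC = {poc(right, left):.1f}%")
print(f"asymmetry of mean amplitude = "
      f"{asymmetry_index(right.mean(), left.mean()):.1f}%")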
Factors Affecting the Electrical Activity of the Masticatory Muscles

Current knowledge does not permit an explicit evaluation of the influence of sex on masticatory muscle activity. Ferrario et al. [20] did not observe any differences between the rest activity of the masseter and temporalis muscles in males and females, according to the recordings obtained from 92 healthy subjects. However, differences were recorded for maximal voluntary contraction (MVC) in the intercuspal position: the mean MVC potentials for the masseter and temporalis muscles were higher in males (181.9 µV and 216.2 µV) than in females (161.7 µV and 156.8 µV). This does not correspond with the results presented by Pinho et al. [21]. Overall, the resting activities of the masseter and temporalis muscles were higher in women (2.64 µV) than in men (1.37 µV) and, as with the resting activity, higher muscle activity in MVC was also observed in females: 65.17 µV in females and 51.24 µV in males. Rilo et al. [22], on the basis of an assessment of 40 subjects without any signs or symptoms of temporomandibular joint dysfunction, noted similar activities of the masticatory muscles in both sexes during clenching and maximum opening of the mouth.

Age is a very important sociomedical factor, which should a priori be taken into consideration during any assessment of muscle activity. The 24-hour sEMG recordings described by Ueda et al. [23] indicated a longer duration of activity of the temporalis muscles in children and of the masseter muscles in adults. The authors attributed this to the incomplete development of the dentition and temporomandibular joints, as well as the immaturity of the muscles in children. The next factor that modifies the electrical activity of muscles is the difference in the activation of the motor units during the day and night. Tabe et al. [24] and Hiyama et al. [25] confirmed the decrease in the activity of the masticatory muscles at night. This is supported by the results presented by Saifuddin et al. [26], who compared the resting activities of the muscles assessed during the day and night with the activities of the muscles during mastication in 2 registration sessions. The lowest activity for both the masseter and temporalis muscles was recorded at night.

sEMG in the Diagnosis and Treatment of TMD Patients

Temporomandibular dysfunction (TMD) is a broadly understood term that includes disorders of the masticatory muscles and temporomandibular joints. Many theories have been presented relating to the etiology of these disorders: some clinicians indicate occlusal disturbances, while others cite psycho-emotional factors as the main etiological factors for TMD. Li et al. [27] investigated the short-term impact of occlusal disturbances, in the form of a 0.5 mm occlusal high spot placed on the right lower first molar. On the 3rd day following placement of the high spots, all the patients complained of headaches in the right temporal region, and the activity of the right anterior temporalis muscle significantly increased at rest. Moreover, on the 3rd and 6th days with the high spot, the EMG activity of the tested muscles significantly decreased in maximum voluntary contraction (MVC), and the asymmetry index of the bilateral anterior temporalis significantly increased. The high diagnostic value of sEMG recordings in the diagnosis of TMD was also reaffirmed by Pinho et al. [21]. The results of their study indicated a satisfactory sensitivity in discriminating TMD patients on the basis of EMG muscle analysis in both static activities. The overall mean resting activity of the masticatory muscles was lower in the healthy subjects (1.92±1.20 µV) than in the TMD patients (2.52±1.25 µV). Conversely, the recordings obtained in maximal voluntary contraction (MVC) were higher in the healthy subjects: the overall mean activity in MVC was 110.30±82.97 µV in the healthy group and 66.77±35.22 µV in the group consisting of TMD patients. Similar conclusions were presented by Tartaglia et al. [16]. Surface electromyography of the masseter and temporalis muscles was performed during maximum teeth clenching in 103 TMD patients and compared with 32 control subjects. The standardized total muscle activity was significantly higher in the healthy subjects (131.7 µV/µV%) than in the TMD subjects (88.7-117.6 µV/µV%). Moreover, symmetry of the temporalis muscles was larger in the control group (86.3%) than in the TMD patients (80.5-84.9%).
The importance of a parameter such as muscular symmetry was also noted by Liu et al. [28]. EMG recordings were performed in 24 TMD-symptomatic (mean age 26.7 years) and 20 normal (mean age 27.1 years) subjects. The results indicated that asymmetry of the masseter during maximal clenching (MVC) was significantly more pronounced in the TMD patients (30.5%) than in the normal subjects (19.1%). The asymmetry index of the posterior temporalis in MVC was also larger in the symptomatic (30.1%) than in the healthy patients (17.4%). The asymmetry of the anterior temporalis was more pronounced in the TMD patients during 70% MVC, estimated at 28.6% versus 19.6% in the normal subjects. The asymmetry of the anterior digastricus in the mandibular rest position was also higher in the symptomatic (17.2%) than in the asymptomatic (8.8%) group. The validity and objectivity of sEMG studies in distinguishing normal and TMD patients were also confirmed by Woźniak [29]. The most important recordings in this respect were those of the temporalis muscles in MVC (AUC=0.918) and the changes in the median power frequency (MPF%) of the masseter during a 10-second maximal voluntary contraction in the intercuspal position (AUC=0.911).

The results of the sEMG studies presented above help identify TMD patients. Moreover, sEMG analysis can be useful in assessing the effectiveness of treatments for these dysfunctions. Such a study was presented by Ferrario et al. [30]: sEMG recordings, which were performed to assess neuromuscular equilibrium, confirmed the immediate effect of a stabilization splint on muscle activity in TMD patients. The 2-mm-thick splint reduced the electrical activity of the masseter and temporalis muscles at rest and made the muscles more equilibrated, both between the left and right sides (larger symmetry in the masseter muscle, p<0.05) and between the temporalis and masseter muscles (activity index, p<0.01). A similar influence of the splint was reported by Botelho et al. [31]: the sEMG recordings of 15 TMD patients after installing a splint confirmed a higher symmetry between the temporalis and masseter muscles during MVC, with symmetry index values similar to those in the control group.

sEMG is also important as biofeedback, part of a safe therapy free of adverse effects. Such a therapeutic procedure, which uses EMG instruments to measure, process, and give reinforcing information as feedback, helps patients learn how to control muscle tension levels previously under automatic control. By monitoring sEMG recordings, patients attempt to relax the muscles that are tense in TMD subjects. The importance of biofeedback was supported by Turk et al. [32]. Two studies were conducted to assess the differential efficiency of two commonly used TMD therapies, intraoral appliances and biofeedback (BF), separately and in combination. Improvements in the benefits of treatment observed at the follow-up to BF therapy supported the importance of combining dental and psychological treatments to successfully help patients with TMD. EMG should be used for a deeper understanding of the pathologies of dysfunctional patients. It complements standard clinical assessments, providing quantitative data on the function of the stomatognathic system, with minimal discomfort to the patients and without invasive procedures. It is also a useful tool that helps to create algorithms for treatment procedures and makes it possible to monitor them.
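To illustrate how such discriminative ability is quantified, the following minimal sketch computes the area under the ROC curve (AUC) for a hypothetical sEMG-derived index. The group sizes echo Liu et al.'s sample, but all values are synthetic and do not reproduce any of the cited studies.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical sEMG-derived index measured in TMD patients (label 1) and
# healthy controls (label 0); values are synthetic, not any study's data.
tmd_values = rng.normal(60.0, 15.0, 24)
control_values = rng.normal(100.0, 20.0, 20)
labels = np.array([1] * 24 + [0] * 20)
index = np.concatenate([tmd_values, control_values])

# AUC measures how well the index separates the groups (0.5 = chance);
# taking max(auc, 1 - auc) makes the value direction-independent.
auc = roc_auc_score(labels, index)
print(f"AUC = {max(auc, 1.0 - auc):.3f}")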
EMG Recordings in Patients with Malocclusions

It is very difficult to accurately define the relationships between facial morphology and the function of the stomatognathic system because of the many etiological factors for malocclusions, the large inter-individual variability, and the plurality of the predictors that describe dentoalveolar and morphological disorders. Hence, the main aim of studies based on EMG recordings is to find such an association. The influence of vertical malocclusions on the electrical activity of the muscles was described by Yousefzadeh et al. [33]. EMG recordings of the temporalis, masseter, orbicularis oris, and digastric muscles were performed in patients aged 10.1-13.2 years with an anterior open bite. The patients with malocclusions exhibited lower muscle activity during clenching and higher activity in the muscles of the balancing side during chewing compared with healthy subjects. Studies by Ciccone de Faria et al. [34] drew attention to the different activities of the muscles in patients with either a skeletal or a dentoalveolar malocclusion. Healthy patients presented the highest electrical activity in the temporalis and masseter muscles during MVC (85.27%). Significantly lower activity was detected in subjects with a dentoalveolar anterior open bite (61.52%), and the lowest in patients with a skeletal open bite (42.13%). Moreover, patients with a skeletal malocclusion showed the lowest electrical activity in the muscles during chewing. The aim of the study by Moreno et al. [35] was to determine the influence of sagittal malocclusion on the electrical activity of the masticatory muscles. The results indicated that patients with Angle class II showed higher activity of the temporalis muscles in deglutition and chewing than the other classes; subjects with class III achieved the highest activity of the temporalis and masseter muscles during MVC. The values of temporalis activity in MVC for patients with Angle classes I, II, and III were significantly different: 185.40 µV, 123.46 µV, and 226.80 µV, respectively. A very interesting study investigating the electrical activity of the anterior temporalis (TMA) and masseter muscles (MMA) in different facial skeletal types, described by the ANB and SN-GoMe angles, was presented by Cha et al. [36]. There were no significant differences in resting MMA among the groups; resting TMA was significantly higher in patients with class III and SN-GoMe >36°. As at rest, TMA during MVC was also higher in the latter group. Many studies have also determined the influence of transversal malocclusions on the function of the masticatory muscles. Moreno et al. [35] observed that a posterior crossbite resulted in a large decrease of ipsilateral masseter activity during a maximum effort test; thus most of the force was generated by the anterior temporalis muscle. Another study showed that this malocclusion also affected mastication [37]. The percentage of reverse cycles when chewing was 59.0% (soft bolus) and 69.7% (hard bolus) for the affected side, and 16.7% (soft bolus) and 16.7% (hard bolus) for the non-affected one. Moreover, it was again shown that masseter activity was reduced on the crossbite side and unaltered or increased on the non-affected side [37]. Slightly divergent results were presented by Tecco et al. [38].
The sEMG activity of the masseter muscles was similar between patients with crossbite and the control group, suggesting that the occlusal alteration under investigation had no predictable effect on the activity pattern of these muscles. Nevertheless, they observed a significant difference in sEMG activity for the anterior temporalis muscle, which was higher at rest on the crossbite side. They also observed significantly lower activity in the sternocleidomastoid muscles during MVC in the control group compared to the group with transverse malocclusion. Analysis of the studies presented above confirms that craniofacial morphology has a considerable influence on the electrical activity of the masticatory muscles. These studies also clarify the anatomical and physiological coincidence in the stomatognathic system. Therefore, sEMG extends the set of tools that are useful in the clinical diagnosis of sagittal, transversal, and vertical malocclusions.

sEMG in Monitoring of Orthodontic Therapies

Because of the inextricable association between function and morphology, one of the possibilities for orthodontic treatment is functional therapy. The objective of this kind of treatment is to enhance the equilibrium of the muscles and correctly balance the forces inducing the growth and development of craniofacial skeletal features [39,40]. This justifies EMG recordings of the masticatory muscles before, during, and after orthodontic therapies in order to monitor or assess their effectiveness. The main example of a functional removable appliance is the activator, invented by Andresen. Erdem et al. [40] evaluated the activities of the masticatory muscles in children with class II division 1 malocclusion treated with this appliance and compared them with untreated control patients at the start of therapy and 12 months later, to check the effectiveness of this functional appliance. The activity of the temporalis and masseter muscles during clenching, chewing, and swallowing increased in both groups, particularly in the treatment group. The activity of the orbicularis oris during whistling increased significantly only in the treatment group. sEMG recordings performed in a study by Saccucci et al. [39] confirmed that the functional device employed (Occlus-o-Guide; Ortho-Tain Inc., Toa Alta, Puerto Rico) also achieved the aim of this orthodontic functional therapy. The study sample consisted of thirteen 9-year-old children with class II, deep bite, and labial incompetence, and 15 children of the same age with normal occlusion. The electrical potentials of the orbicularis oris (OO) were investigated before therapy, as well as after 3 and 6 months of treatment, during many functional tests. The treatment group showed significantly lower values of muscle tone of the lower OO at rest (1.7 mV·s⁻¹) and during protrusion of the mandible (31.9 mV·s⁻¹) with respect to the control group (at rest 3.1 mV·s⁻¹; protrusion of the mandible 52.1 mV·s⁻¹). In the treated group there was a significant increase in the muscle tone of the lower OO at rest after 3 months of therapy (from 1.7 mV·s⁻¹ to 3.5 mV·s⁻¹). The upper OO showed a significant increase from 9.3 mV·s⁻¹ to 28.5 mV·s⁻¹ during protrusion of the mandible recorded between the 3rd and 6th months of treatment. After treatment, patients reached a muscular activity similar to that of the control group, in which no changes in muscle tone were observed.
EMG studies were also helpful in defining the requirements for the application time of functional appliances. To estimate this, the activities of the muscles at different times of day and night were compared. The results of the study by Tabe et al. [24] confirmed the low effectiveness of functional therapy during the night. The activity of the masseter, temporalis, and digastric muscles with the appliance in the mouth significantly decreased at night compared to daytime. The authors recommended using functional appliances mostly during the day, in combination with voluntary biting, to achieve adaptation by the masticatory muscles, due to the high electrical activity during MVC and the higher activity of the muscles during the day than at night. Similar conclusions were presented by Hiyama et al. [25]. They analyzed the nocturnal activity of the masseter and suprahyoid muscles during therapy with a functional appliance, the bionator. There were no significant changes in the maximal EMG activities of the muscles recorded during the first 3 hours without the appliance inserted and after 3 hours with the bionator in the mouth. This supports the findings of the previous study: it is not advisable to use functional appliances during sleep to obtain the desired treatment effects. EMG studies were also used to monitor therapy with fixed functional appliances, such as the Herbst appliance [41,42] or its modification, the Forsus Fatigue Resistant Device (FFRD) [43]. Studies by Leung and Hägg [41] permitted an analysis of the activity of the masseter and temporalis muscles during treatment with the Herbst appliance and determined that the optimal duration of such therapy was 6 months. Similar changes in the activity of the same masticatory muscles during gradual advancement of the mandible with the Herbst appliance were described by Du and Hägg [42]. The electrical activity increased, especially in the masseter muscles. Moreover, the stability of the treatment effects was assessed by monitoring muscle activities in the follow-up period after treatment. Further studies by Sood et al. [43], describing the muscle response during treatment with the Forsus Fatigue Resistant Device, demonstrated that the appropriate neuromuscular adaptations occurred by the end of the 6th month of therapy with this kind of fixed appliance. After 1 month of treatment there was a decrease in masticatory muscle activity during the swallowing of saliva and maximal voluntary clenching as a result of the instability of the occlusion due to the protrusion of the mandible. Electromyographic recordings also permit an assessment of interdisciplinary orthodontic-surgical treatments, whose aim is to improve not only facial features but also the function of the stomatognathic system. Trawitzki et al. [44] found an increase in the EMG activities of the masseter and temporalis muscles during MVC and mastication in patients who underwent surgical correction of class III. Despite such an improvement, the activities of the muscles were still lower than those recorded in patients without malocclusion. Van den Braber et al. [45], in a sample of retrognathic patients, did not report any changes in either chewing efficiency or MVC activity of the masticatory muscles after surgical correction of this deformity. Moreover, all the recorded values were lower than those obtained from healthy controls.
Conclusions

This systematic review of the above studies confirms the high value of surface electromyography as a non-invasive, objective, and precise tool that expands our knowledge about the anatomy, physiology, and pathology of the stomatognathic system.
A Case Report of Chilaiditi's Syndrome With Sigmoid Volvulus

Chilaiditi's syndrome is the hepatodiaphragmatic interposition of the colon. It can be caused by intestinal, hepatic, or diaphragmatic pathology. Anatomic variations or functional abnormalities can predispose to the development of Chilaiditi's syndrome. It is usually asymptomatic and is found incidentally on radiological studies. It is treated conservatively as long as no complications arise. This case of Chilaiditi's syndrome was associated with sigmoid volvulus and multiple tubercles on the colonic surface. A 35-year-old male patient presented to the outpatient department (OPD) with complaints of weight loss, bilateral flank pain, abdominal distention, decreased appetite, vomiting, and diarrhea. A CT scan showed a grossly distended loop of the colon with sigmoid volvulus and Chilaiditi's sign. A laparotomy was done, the sigmoid volvulus was relieved, a biopsy of the tubercles was taken for histopathology, and a colostomy was performed. The biopsy result showed abdominal tuberculosis. The colostomy was later reversed. Chilaiditi's syndrome is usually treated surgically when it is associated with other complications in the gastrointestinal tract. Previous studies have described the management of such cases by colonic resection with primary anastomosis; however, one case reported mortality due to an anastomotic leak. In this article, we present a case of Chilaiditi's syndrome associated with sigmoid volvulus and abdominal tuberculosis, as seen on biopsy, which was managed surgically by colostomy followed by colostomy reversal on follow-up.

Introduction

Chilaiditi's sign is the hepatodiaphragmatic interposition of the colon, described by Chilaiditi in 1910 [1]. Chilaiditi's sign along with clinical symptoms is called Chilaiditi's syndrome. Chilaiditi's sign is a rare finding and is seen incidentally on abdominal or chest radiographs, with an incidence of 0.025%-0.28% [2]. The pathogenesis of Chilaiditi's syndrome depends on intestinal, hepatic, and diaphragmatic factors. The interposition of the colon between the diaphragm and liver is normally prevented by the colon's fixation and the suspensory ligaments that support it [3]. However, in rare cases there are anatomical variations, including congenital malposition and suspensory ligament pathologies such as elongation, laxity, or complete absence, as well as dolichocolon. Functional disorders can also lead to the development of Chilaiditi's syndrome, including constipation, cirrhosis of the liver, obesity, multiple pregnancies, ascites, aerophagia, diaphragmatic paralysis, and chronic lung disease. Mental abnormalities such as schizophrenia can also result in Chilaiditi's syndrome [4,5]. In the majority of cases, the condition is asymptomatic and is mostly diagnosed as an incidental finding on radiological investigations; when symptomatic, it mostly presents with pathological abdominal signs. Conservative treatment is limited to symptomatic relief only, as it cannot change the course of the disease, its complications, or its future recurrence; for these, invasive surgical techniques are the best modality of choice compared with the available conservative options, even as a preventive measure [6,7].
Case Presentation

A 35-year-old male patient presented to the outpatient department (OPD) with chief complaints of weight loss for the last three months along with bilateral flank pain, abdominal distention, decreased appetite, vomiting, and diarrhea for one and a half months. According to the patient, he was fine three months back, after which he had a progressive weight loss of 50 kg over three months. The patient also complained of bilateral flank pain, gradual in onset, colicky in nature, radiating to the back, aggravated by food intake, and relieved with IV analgesics. It was associated with vomiting after every meal, copious in amount, yellowish in color, and mixed with mucus. His symptoms were relieved after vomiting the food contents. Further complaints by the patient were abdominal distention, decreased appetite, and diarrhea. The frequency of diarrhea was three to five episodes per day, watery in nature. The patient's past medical and surgical history was not significant, with no known allergies or regular medications.

On general physical examination, the patient was well oriented in time and space. There were no peripheral stigmata. His blood pressure was 130/80 mmHg, pulse 85 beats per minute, respiratory rate 20 breaths per minute, and oxygen saturation 98% on room air. Other systemic examinations were unremarkable. On abdominal examination, his abdomen was distended, and there were no visible pulsations, veins, scars, masses, or striations. The abdomen was soft and mildly tender in the left lower quadrant and right upper quadrant. On percussion, the abdomen was dull with a positive fluid thrill. Bowel sounds were absent on auscultation. On digital rectal examination, the rectum was empty. Baseline laboratory investigations, such as complete blood count (CBC), urine routine examination (RE), hepatitis B surface antigen (HBsAg), hepatitis C virus (HCV) Ag, and blood culture, were normal. Liver function tests also showed no significant findings. However, serum electrolytes revealed hyponatremia (122 mEq/L), hypokalemia (2.37 mEq/L), and hypochloremia (90.9 mEq/L). On radiological investigation, an abdominal X-ray was performed, which showed sigmoid volvulus with severe dilation of the sigmoid and transverse colon. The descending colon was also dilated, and the rectum was collapsed, as seen in Figure 1.

FIGURE 1: X-ray showing a grossly distended loop of the colon with sigmoid volvulus, with air under the right side of the diaphragm

On the axial view of the CT liver dynamic study (covering the chest), there was a severely distended sigmoid colon (13.5 cm) and transverse colon (8 cm) with twisting of the mesentery and whirling of the mesenteric vessels, suggesting sigmoid volvulus. The mesenteric vessels were patent. There was also gross ascites with diffuse peritoneal thickening and omental nodularity, as seen in Figure 2.

FIGURE 2: CT scan showing the dilated sigmoid with twisting of the mesentery and gross ascites

Another axial section of the CT scan (Figure 3) showed part of the large bowel in the right subphrenic region. No focal lesion was seen in the liver, spleen, kidneys, pancreas, gall bladder, or adrenals.

FIGURE 3: CT scan showing the interposition of the colon between the liver and diaphragm

There were a few tiny subcentimeter bilateral renal calculi. There were no enlarged para-aortic lymph nodes. No obvious osseous lesions were seen. CT of the chest revealed a few atelectatic bands in both lower lobes. A small pleural effusion was also seen.
A diagnosis of sigmoid volvulus was made, for which a laparotomy was done. On exploration, the sigmoid volvulus was examined, and multiple tubercles were present throughout the abdomen, for which a biopsy was taken. The sigmoid colon was untwisted, and part of it was resected because it was ischemic and had adhesions. A double-barrel colostomy was done. A 6-week follow-up was advised for the patient. Biopsy results revealed abdominal tuberculosis, for which anti-TB drugs were prescribed. The patient came for his follow-up after six weeks for his colostomy reversal. Medications were provided at home, and advice was given on how to properly dress the wound, mobilize the patient, avoid heavy lifting, and consume a healthy diet.

Discussion

The patient in our study had Chilaiditi's syndrome with sigmoid volvulus, but multiple tubercles were also seen on the sigmoid colon. A colostomy was done, and a biopsy was taken. The patient was counseled afterward, and a follow-up was suggested for when the biopsy report was available. As seen in this study, the mainstay treatment of the condition was surgery. Many cases in accordance with the treatment provided in this case have been reported globally [8][9][10]. Previous studies conducted by separate researchers, such as Williams et al., on Chilaiditi's syndrome with colonic volvulus have shown that the treatment repeatedly given was partial colonic resection followed by a primary anastomosis, one of which reported mortality because of an anastomotic leak [11]. A case report by Erdem et al. described a patient with the chief complaint of shortness of breath; the patient in our study did not have this chief complaint [12]. The presence of right subphrenic airspace on a chest X-ray in Chilaiditi's syndrome has many differential diagnoses, such as subdiaphragmatic abscess, pneumoperitoneum, and diaphragmatic hernia [13]. Chilaiditi's syndrome can cause numerous complications, including volvulus of the cecum, splenic flexure, transverse and sigmoid colon, cecal perforation, and subdiaphragmatic appendicitis with perforation. An undiagnosed Chilaiditi's sign can increase the risk of colonic perforation during colonoscopy and liver biopsy [14]. Chilaiditi's syndrome can often be managed conservatively. Therefore, a thorough and complete radiological workup must be done to exclude other differential diagnoses and prevent unnecessary intervention where it is not needed. Kamiyoshihara et al. presented a case of a 75-year-old patient involved in a road traffic accident who was misdiagnosed with a diaphragmatic hernia. When an explorative laparotomy was performed, it turned out to be a case of Chilaiditi's syndrome, which could have been managed conservatively [15].

Conclusions

We presented a rare case of sigmoid volvulus with multiple tubercles present on its surface in an adult with Chilaiditi's syndrome. The case was managed with surgical correction of the sigmoid volvulus along with a biopsy of the tubercles, followed by colostomy and secondary anastomosis after the arrival of the biopsy results. In the absence of volvulus or ischemia of the colon, Chilaiditi's syndrome should be managed conservatively.
Positive outcome of diaphragm covering and total pleural covering techniques for catamenial pneumothorax

Abstract

Catamenial pneumothorax (CP) is reported to be caused by endometriosis of the diaphragm, lung, and parietal pleura. Therefore, resection of the endometriotic lesions in these organs is reported to be an effective surgical treatment. Overlooking endometrial tissue during the operation is believed to be the cause of recurrence after surgical treatment. To address this problem, we performed total diaphragm covering (TDC) and total pleural covering (TPC) with sheets of oxidized regenerated cellulose mesh. This report describes two CP cases that underwent TDC and TPC. Both patients were followed up for 1 year without recurrence.

INTRODUCTION

Catamenial pneumothorax (CP) is reported to be caused by endometriosis of the diaphragm, lung, and parietal pleura. To prevent its recurrence, surgical resection of the endometrial tissue of these organs is performed as an effective treatment [1]. However, the postsurgical recurrence rate is relatively high [1,2] because of microscopic thoracic endometrial tissue overlooked during the operation and the re-dissemination of endometrial tissue from pelvic endometriosis [1]. To prevent possible recurrence, we attempted to cover the entire surface of the diaphragm and lung using the total pleural covering (TPC) technique reported by Kurihara et al. [3] for lymphangioleiomyomatosis (LAM) management. Here we report two patients with CP who were successfully treated with the total diaphragm covering (TDC) and TPC techniques.

Case 1

A 42-year-old woman with a history of endometriosis who was treated with hormonal therapy visited our hospital because of recurring pneumothorax of the right lung. The last pneumothorax had occurred 1 year earlier. The pneumothorax persisted for 20 days with chest tube insertion, and video-assisted thoracoscopic surgery (VATS) was scheduled. During the surgery, endometrial lesions of the diaphragm and parietal pleura were observed (Fig. 1A and B). No lung lesions were observed. The diaphragm endometriosis was resected using an endoscopic stapler, and hand sutures were added (Fig. 1C). The lesions in the parietal pleura were removed by limited parietal pleurectomy with endoscopic scissors (Fig. 1D). The entire diaphragm was covered with oxidized regenerated cellulose (ORC) mesh sheets (TDC) (Fig. 1E). To avoid overlooking microscopic thoracic endometrial tissue of the lung, we covered the entire lung with ORC mesh sheets (TPC) (Fig. 1F). The patient was followed up for 1 year without evidence of recurrence.

Case 2

Case 2 was a 40-year-old woman with a history of endometriosis treated with hormonal therapy who came to our hospital because of recurring pneumothorax. The last pneumothorax had occurred 3 months earlier. The pneumothorax persisted for 7 days with chest tube insertion, and VATS was scheduled. During the surgery, many endometrial lesions of the diaphragm and lung were observed (Fig. 2A and B). We resected as many lesions as possible using an endoscopic stapler, and TDC and TPC were performed to cover the residual lesions (Fig. 2C). The patient was followed up for 1 year without evidence of recurrence.
COMMENTS

The recurrence rate after surgery for CP is relatively high (30-32%) [1,2] compared with that of normal primary spontaneous pneumothorax (3-7%) [4], because of microscopic thoracic endometrial tissue overlooked during the operation and the re-dissemination of endometrial tissue from pelvic endometriosis [1]. To prevent this, we attempted coverage using the TPC technique. TPC is a surgical technique in which the surgeon covers the entire visceral pleura with ORC mesh for reinforcement; it was reported by Kurihara et al. for the treatment of LAM. After TPC, the visceral pleura is approximately five times thicker than untreated portions, with extensive pleural symphysis accompanied by fibroblast proliferation and collagen deposition [3]. We hypothesized that this thickened pleura may cover the overlooked or re-disseminated endometrial tissues, and we extended its application to the surface of the diaphragm. In this report, both patients who were treated with TDC and TPC were followed up for 1 year without evidence of recurrence, suggesting the effectiveness of these treatments. In Case 2, there were more endometrial lesions than we could resect, and the residual lesions were covered with ORC mesh. The positive outcome of Case 2 suggests that the TDC and TPC methods may also be effective in cases where the endometrial lesions cannot be completely resected. In conclusion, the TDC and TPC methods may be effective in reducing CP recurrence. Further cases treated using this strategy should be performed and reported.
The interplay of personality traits, anxiety, and depression in Chinese college students: a network analysis

Background
Anxiety and depression are among the greatest contributors to the global burden of disease. The close associations of personality traits with anxiety and depression have been widely described. However, the common practice of using sum scores in previous studies limits the understanding of the fine-grained connections between different personality traits and anxiety and depression symptoms and cannot explore or compare the risk or protective effects of personality traits on anxiety and depression symptoms.

Objective
We aimed to determine the fine-grained connections between different personality traits and anxiety and depression symptoms and to identify the detrimental or protective effects of different personality traits on anxiety and depression symptoms.

Methods
A total of 536 college students from China were recruited online; the average age was 19.98 ± 1.11 years. The Chinese versions of the Ten-Item Personality Inventory, the Generalized Anxiety Disorder-7, and the Patient Health Questionnaire-9 were used to investigate the personality traits and the symptoms of anxiety and depression of participants after they understood the purpose and completion method of the survey and signed the informed consent. The demographic characteristics were summarized, and the scale scores were calculated. The network model of personality traits and symptoms of anxiety and depression was constructed, and bridge expected influence (BEI) was measured to evaluate the effects of personality traits on anxiety and depression. The edge accuracy and BEI stability were estimated, and the BEI differences and edge weight differences were tested.

Results
In the network, 29 edges (indicating partial correlations between variables) bridged the personality community and the anxiety and depression community, among which the strongest correlations were extraversion-fatigue, agreeableness-suicidal ideation, conscientiousness-uncontrollable worry, neuroticism-excessive worry, neuroticism-irritability, and openness-feelings of worthlessness. Neuroticism had the highest positive BEI value (0.32), agreeableness had the highest negative BEI value (−0.27), and the BEI values of neuroticism and agreeableness were significantly different from those of most other nodes (p < 0.05).

Conclusion
There are intricate correlations between personality traits and the symptoms of anxiety and depression in college students. Neuroticism was identified as the most crucial risk trait for depression and anxiety symptoms, while agreeableness was the most central protective trait.

Introduction
Anxiety and depression are among the greatest contributors to the global burden of disease (1), with detrimental effects on mental and physical health, such as an increased risk of suicidal thoughts and suicide attempts (2,3), insomnia (4), daily maladaptive health behaviors (5), Parkinson's disease, and cardiovascular disease (6,7). In particular, early adulthood, the stage in which most college undergraduates find themselves, is a critical period of heightened susceptibility to anxiety and depressive disorders, with more than 20% of young adults meeting the criteria for anxiety disorders and the prevalence of depression reaching 25% among undergraduate students (8,9). Recent studies have shown that the prevalence of anxiety and depression among Chinese college students is also high (10-13), for example, 22.7 and 46.8%, respectively, in one study (13).
In addition to these common consequences, college students who have developed anxiety or depression may have difficulties in academic functioning and suffer from a low quality of life, heightening the necessity for research into this issue in China. Given that anxiety and depression are prevalent and often accompanied by a decline in quality of life, unpleasant symptoms, and impaired social relationships, it is important to identify the pathogenesis of anxiety and depression. Personality traits have been identified as risk/protective factors for anxiety and depression (9,(14)(15)(16)(17)(18). The Big Five (i.e., five facets of personality traits: neuroticism, extraversion, agreeableness, conscientiousness, and openness) is one of the most relevant frameworks for examining the personality traits related to anxiety and depression (17,19,20). As the Big Five represents personality traits that usually precede the symptoms of anxiety and depression, they are considered risk/protective factors for anxiety and depression (20,21). A cross-sectional study of Taiwanese college students found that neuroticism was significantly positively associated with anxiety and depression scores, while agreeableness was significantly inversely associated with anxiety and depression scores (21). Another study also revealed that agreeableness was inversely associated with depression while neuroticism was significantly associated with depression scores (17). Studies have reported that high levels of conscientiousness and extraversion were protective against the deleterious effects of high levels of neuroticism on depressive mood (22). Consistently, a systematic review showed that neuroticism was considered a risk factor while extraversion and conscientiousness were protective factors for affective disorders such as depression and anxiety (20). As described above, the relationships between the Big Five personality traits and anxiety and depression have been extensively investigated. Many researchers have used total scores to measure anxiety and depression when investigating these relationships, assuming that anxiety or depression is a holistic psychological construct. For example, a previous study used the standardized Depression Anxiety Stress Scale-21 (DASS-21) to assess anxiety and depression and the five subscales of the Big Five Inventory (BFI) to assess the Big Five, then examined the associations between the DASS-21 total score and the Big Five (21). According to the cumulative risk hypothesis, the overall risk of a negative outcome, such as depression and anxiety, is magnified by the interaction of distinct personality traits (23). For instance, openness can moderate the influence of extraversion, and these two personality traits work together to reduce the risk of anxiety (24). In addition to the fact that personality involves the interaction of different traits, depression and anxiety are umbrella constructs comprising different interacting symptoms (25). However, considering that anxiety and depressive disorders consist of distinct symptoms, the common practice of using sum scores in previous studies (26,27) obscures the relative importance of different symptoms and limits the understanding of the fine-grained connections between different personality traits and anxiety and depression symptoms.
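The sum-score problem described above is easy to make concrete; the toy Python example below uses hypothetical item scores (not data from this study) to show two response patterns that a total score cannot distinguish, anticipating the illustration in the next paragraph.

```python
# Toy illustration of the sum-score problem: two hypothetical depression
# item profiles (0-3 per item) with identical totals but different
# dominant symptoms. A sum score treats them as equivalent; a
# symptom-level (network) analysis keeps the item-level structure.
profile_a = {"anhedonia": 3, "fatigue": 0, "sad_mood": 2}
profile_b = {"anhedonia": 0, "fatigue": 3, "sad_mood": 2}
assert sum(profile_a.values()) == sum(profile_b.values()) == 5
print("equal sum scores, different symptom profiles")
```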
Imagine two individuals with the same sum score: they would commonly be considered to have the same degree of depression; however, one may have a high score on anhedonia and a low score on fatigue while the other demonstrates the reverse pattern. Their depression actually differs according to the relative importance of anhedonia (a core symptom) and fatigue (28). Hence, analysis at the symptom level may provide a way forward, which is essential for understanding psychopathological pathways and effective intervention targets that could not be discovered by relying solely on total scores (27,29,30). To the best of our knowledge, there is a paucity of data on symptom-level links between personality traits and symptoms of anxiety and depression, which hampers the identification of more efficacious targets for intervention. This knowledge gap motivated the present study. One approach to achieving the study objectives is network analysis, a data-driven method used to reveal the connections among individual variables, regardless of whether these variables are symptoms (31-35). In network theory, psychopathological constructs are represented and visualized as networks emerging from interactions between distinct variables, which indicates that the variables (symptoms or otherwise) and their active interactions lead to the development and maintenance of the constructs (32,33,36). The network commonly consists of nodes, which represent variables, and edges, which represent correlations between variables (37). Network analysis can yield important findings. It can be used to examine the fine-grained relationships between individual variables (38)(39)(40), shedding light on important psychopathological pathways between constructs via edge weights. The approach also identifies bridge nodes, which connect to nodes of another community (a theory-based group of variables) and are key to the impact of one community on another; the identified bridge variables can be considered promising and effective targets for prevention, intervention, and treatment (40-43). Network analysis has been used to study the relationships between personality traits and psychopathological constructs. For example, a study examined the network structure of substance use disorder, the Big Five personality traits, impulsivity, and psychopathological constructs, including depression and anxiety; however, the presence of anxiety or depression was the sole node in its community, which did not reveal the symptom-level relationships between personality traits and depression and anxiety (44). In another previous study, the network structure of schizotypal personality, autistic traits, obsessive-compulsive traits, depression, and anxiety was investigated; however, this analysis also treated depression or anxiety as merely one node in the network rather than as a symptom-level construct (45). To the best of our knowledge, there is a lack of network analysis studies investigating the symptom-level relationships of the Big Five personality traits with symptoms of anxiety and depression. To fill this knowledge gap, we constructed a network consisting of the Big Five personality traits, anxiety symptoms, and depression symptoms. We aimed to elucidate the important pathways linking personality traits with anxiety and depression and to identify important bridge nodes that maximally link nodes across different communities.
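The bridge metric used for this purpose, bridge expected influence (BEI, formally defined in the Methods below), reduces to summing a node's edge weights to nodes of the other community. A minimal Python sketch on a made-up 4-node network (the study itself computes BEI with the R package networktools):

```python
# BEI per node: sum of edge weights to all nodes outside the node's own
# community. The 4-node weight matrix and community labels are made up.
import numpy as np

w = np.array([[ 0.00,  0.20, 0.14, -0.06],
              [ 0.20,  0.00, 0.05, -0.03],
              [ 0.14,  0.05, 0.00,  0.10],
              [-0.06, -0.03, 0.10,  0.00]])
comm = np.array(["trait", "trait", "symptom", "symptom"])

cross = comm[None, :] != comm[:, None]  # True for cross-community pairs
bei = (w * cross).sum(axis=1)
print(bei)  # approximately [ 0.08  0.02  0.19 -0.09]
```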
Based on the findings, we attempted to provide theoretical insights into the specific pathways between distinct personality traits and individual anxiety or depression symptoms and to provide implications for prevention and intervention in light of the risk and protective roles that personality traits play.

Participants
We designed an online survey powered by www.wjx.cn, and systematically trained investigators sent the quick response code of the survey to the WeChat groups of students from three colleges in Xi'an and Shanghai, China. We explained the purpose and completion method of the survey and obtained the informed consent of the participants. From April to May 2022, 536 students completed the survey. The Chinese versions of the Ten-Item Personality Inventory (TIPI-C), Generalized Anxiety Disorder-7 (GAD-7), and Patient Health Questionnaire-9 (PHQ-9) were used to investigate the personality traits and the symptoms of anxiety and depression of the college students. Questionnaires that were not fully answered were excluded. A total of 507 (94.59%) questionnaires were valid. This study strictly followed the tenets of the Declaration of Helsinki and was approved by the Ethics Committee of the Xijing Hospital of the Air Force Medical University (KY20224106-1).

TIPI-C
The TIPI-C is the Chinese version of a scale developed by Li (46) based on the TIPI (47) to assess personality traits; its reliability and validity meet psychometric requirements for measuring the personality traits of Chinese people (46). It is a short scale with 10 items and five factors, namely, neuroticism, conscientiousness, agreeableness, openness, and extraversion. The scale uses a 7-point Likert scale, ranging from one (completely disagree) to seven (completely agree).

GAD-7
The GAD-7 was compiled by Spitzer et al. (48) as a self-screening tool to assess anxiety symptoms. The Chinese version of the GAD-7 used for this study was revised by He et al. (49), and it has good reliability and validity in the Chinese population (49). It contains seven items scored on a 4-point Likert scale, from zero (not at all) to three (almost every day). The GAD-7 total score ranges from zero to 21, and a higher score represents more severe anxiety symptoms. The GAD-7 had high reliability in this study (Cronbach's α coefficient = 0.90).

PHQ-9
The PHQ-9 was developed by Kroenke et al. (50) to measure the severity of depression in the past 2 weeks. The Chinese version of the PHQ-9 (51) was used in this study, and it has been proven to have good reliability and validity among Chinese adolescents (52). The scale contains nine items scored on a 4-point Likert scale from zero (not at all) to three (almost every day). The total score of the PHQ-9 ranges from zero to 27, and the higher the score, the more severe the depression symptoms. Cronbach's α coefficient of the PHQ-9 in this study was 0.88.

SPSS 22.0 was used to summarize the demographic characteristics of participants and calculate the scale scores. R 4.1.1 software was used to construct the network model, measure bridge centrality, and test the robustness of the network.

Network model construction
The R package qgraph (53) was used for network model construction. In the network, nodes represented dimensions of the TIPI-C and items of the GAD-7 and PHQ-9. The term "community" in network analysis is used to indicate a theory-based group of nodes that corresponds to a psychological structure or psychiatric disorder (40).
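The estimation step itself (regularized partial correlations; the LASSO and EBIC details are given in the next paragraph) can be roughly approximated in Python. This sketch is not the paper's pipeline: scikit-learn's GraphicalLassoCV selects the penalty by cross-validation rather than EBIC, and the data are random placeholders with this study's dimensions.

```python
# Sketch of a regularized partial-correlation network. Assumptions: the
# paper used an EBIC-tuned graphical LASSO in R (qgraph); here the
# penalty is chosen by cross-validation instead, and X is a random
# placeholder (507 respondents x 21 nodes: 5 TIPI-C + 7 GAD-7 + 9 PHQ-9).
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(507, 21))

theta = GraphicalLassoCV().fit(X).precision_  # sparse precision matrix
d = np.sqrt(np.diag(theta))
pcor = -theta / np.outer(d, d)                # partial correlations = edges
np.fill_diagonal(pcor, 0.0)
print("non-zero edges:", int((np.abs(pcor) > 1e-8).sum()) // 2)
```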
The nodes in this study were divided into two communities, namely, the personality community and the anxiety and depression community. An edge in the network represented the partial correlation between two nodes after statistically controlling for all other nodes (54). The least absolute shrinkage and selection operator (LASSO) regularization (55) and the extended Bayesian information criterion (EBIC) (56) were used in combination to obtain a comprehensible network by setting trivial edges to a weight of zero (37). We set the EBIC hyperparameter γ to 0.5 and used the Fruchterman-Reingold (57) algorithm to lay out the network.

Bridge centrality measurement
The R package networktools was used for the bridge centrality measurement (40). The bridge expected influence (BEI) was used in this study given its suitability for networks with positive and negative edges (58). The BEI of a node is defined as the sum of the edge weights between the node and nodes from the other community; the higher the BEI value of a node, the stronger the influence of the node on the other community (58).

Network robustness test
The R package bootnet was used to test the robustness of the network (37). The non-parametric bootstrapping method (1,000 bootstrapped samples) was used to estimate the 95% confidence intervals of edge weights to test the edge accuracy. Case-dropping bootstrapping (1,000 bootstrapped samples) was used to test the BEI stability, and the correlation stability (CS) coefficient was calculated to quantify the stability. Ideal stability is indicated by a CS coefficient higher than 0.5 (37). The bootstrapping method (1,000 bootstrapped samples) was used to test the BEI differences between nodes and the edge weight differences between node pairs in the network (α = 0.05).

Demographic characteristics and descriptive statistics
Demographic characteristics of the college students are shown in Table 1. The means, standard deviations, and BEI values of the nodes in the network are shown in Table 2. Figure 1A displays the structure of the personality trait-anxiety and depression network. The network contained 125 non-zero edges (edge weights ranged from −0.15 to 0.23), among which 29 edges bridged the personality community and the anxiety and depression community (23.20%). Of these cross-community edges, EXT was negatively correlated with A7 "fear that something might happen," D1 "anhedonia," D4 "fatigue," D7 "concentration difficulties," and D9 "suicidal ideation"; the strongest correlation was with D4 "fatigue" (edge weight = −0.06). AGR was positively correlated with D3 "sleep difficulties" (edge weight = 0.02) and negatively correlated with eight nodes of the anxiety and depression community, among which the strongest correlation was with D9 "suicidal ideation" (edge weight = −0.06). CON was negatively correlated with A1 "nervousness or anxiety," A2 "uncontrollable worry," D1 "anhedonia," D4 "fatigue," and D6 "feelings of worthlessness"; the strongest correlation with CON was A2 "uncontrollable worry" (edge weight = −0.05). NEU was positively correlated with six nodes of the anxiety and depression community, among which the strongest correlations were with A3 "excessive worry" (edge weight = 0.14) and A6 "irritability" (edge weight = 0.11).
OPE was negatively correlated with A7 "fear that something might happen," D1 "anhedonia," D2 "depressed or sad mood," and D6 "feelings of worthlessness"; the strongest correlation was with D6 "feelings of worthlessness" (edge weight = −0.06). The correlation matrix of the network is displayed in Supplementary Table S1 of the Supplementary Material.

The personality trait-anxiety and depression network structure
As shown in Supplementary Figure S1 of the Supplementary Material, the relatively narrow 95% CIs of the edge weights indicated that the edge weight estimations were accurate. The results of the difference tests on edge weights are shown in Supplementary Figure S2. Figure 1B shows the BEI values of the nodes in the personality trait-anxiety and depression network. In the personality community, NEU had the highest positive BEI value (0.32), and AGR had the highest negative BEI value (−0.27). In the anxiety and depression community, A3 "excessive worry" had the highest BEI value (0.14).

BEI values of nodes in the network
As shown in Supplementary Figure S3 in the Supplementary Material, the BEI values of NEU, AGR, and A3 "excessive worry" were significantly different from the BEI values of most other nodes (p < 0.05). Supplementary Figure S4 in the Supplementary Material displays the results of the BEI stability test. The CS coefficient of BEI in the network was 0.67, suggesting ideal stability.

Discussion
There are two prevailing views on the relationship between mental disorders and symptoms. The perspective of classification diagnosis considers symptoms to reflect mental disorders, as in the DSM-5 (32). The dimensional diagnosis perspective considers mental disorders to be compounds of different symptom dimensions (59). However, both of these perspectives overlook the interactions between symptoms, a fundamental phenomenon in mental disorders (60). According to network theory, mental disorders are a dynamic system composed of interacting symptoms (61); in the network structure, the edges that bridge communities reveal the psychopathological interactions among different psychological structures and mental disorders (62). As the nodes in the network belong to two different communities, measuring personality traits and symptoms of anxiety and depression respectively, the nodes within a community have high consistency, and the connections within a community are closer than those across communities. However, the understanding of crucial cross-community edges is grounded in the network theory of the relationship between mental disorders and symptoms, which can enhance our knowledge of the fine-grained relationships between personality traits and anxiety and depression symptoms. In view of this, we discuss the strongest cross-community edges in the network. Among the symptoms correlated with extraversion, "fatigue" had the strongest negative edge weight. Fatigue is a general feeling of weariness and a lack of energy and motivation (63). Extraversion is often accompanied by a low allostatic load and great aerobic capacity (64), and high extraversion is correlated with better health outcomes and well-being, such as lower frailty, fewer affective symptoms, and fewer sleeping problems (65)(66)(67)(68); thus, individuals high in extraversion are prone to experience low levels of fatigue. Indeed, the advantages of extraversion may explain the positive effects of cognitive behavioral therapy in chronic fatigue syndrome, alleviating daily fatigue and pain, both mentally and physically (68).

"Suicidal ideation" was negatively correlated with agreeableness. As one of the most empirically tested theories, the interpersonal-psychological theory of suicide provides a theoretical framework that explains this relationship. The theory assumes that there are three factors underpinning suicidal thoughts and behaviors, namely, perceived burdensomeness, thwarted belongingness, and the acquired capability for suicide (the degree to which one is able to enact suicide attempts) (69). Perceived burdensomeness is an individual's belief that they are a burden to their family, friends, or society, and thwarted belongingness is an unmet need for social connection. Suicidal ideation increases when perceived burdensomeness and thwarted belongingness co-occur (70); however, this co-occurrence is less likely in individuals with high agreeableness, who tend to be good-natured, modest, cooperative, and emotionally satisfied by social interaction (71). Conscientiousness was negatively correlated with "uncontrollable worry" in college students. Conscientiousness is characterized by individual differences in the propensity for self-discipline, orderliness, and reliability in the pursuit of work completion (72). According to Gao et al. (73), self-control is a key component in the structure of conscientiousness, and individuals with high conscientiousness are adept at controlling unnecessary worries. Neuroticism was positively correlated with "irritability" and "excessive worry." Neuroticism is defined as emotional negativity and instability; people high in neuroticism are irrational perfectionists and tend to catastrophize difficulties (74), and they are more prone to irritability when encountering setbacks (75). A correlation between neuroticism and worry was previously reported (76), and the neural basis of excessive worry helps explain how neuroticism contributes to psychopathological vulnerability (77). Openness was negatively correlated with "feelings of worthlessness." From a cognitive perspective, individuals who tend to seek, comprehend, and utilize complex patterns of information in the world might be more adaptable and therefore less susceptible to depression (78). However, the results of multiple meta-analyses have revealed no direct association between openness and depression (79,80). These contradictory results may stem from complicated associations between openness and specific symptoms of depression; the non-correlation between openness and depression at the overall level masked the internal fine-grained relationships. BEI reflects the role of a given variable in maintaining the interaction between different psychological structures or mental disorders (81); variables with high BEI values can be regarded as bridges and serve as potential targets for reducing or enhancing this interactive influence (58,81). In the present study, BEI values helped to identify the detrimental or protective effects of different personality traits on anxiety and depression symptoms. Neuroticism was identified as the most crucial risk trait for depression and anxiety symptoms, while agreeableness was the most central protective trait, according to their highest positive and negative BEI values, respectively. Neuroticism reflects emotional instability and is a powerful predictor of negative emotional experiences, including depression and anxiety (82-84). A previous study suggested that neuroticism and depression share a common genetic basis (85).
A plausible neural mechanism underlying the vulnerability of individuals high in neuroticism to depression is sensitivity to stress-related reductions in the response of the ventral striatum to reward (86). Individual differences in neuroticism and trait anxiety were predicted by volume variation in the left amygdala (18). The negative correlations between agreeableness and anxiety and between agreeableness and depression have been confirmed by many studies (87,88). Individuals high in agreeableness are prone to positive emotions, and agreeableness has been found to be a protective factor against anxiety induced by the COVID-19 pandemic (89). From a network perspective, compared with intervening on other personality traits, reducing neuroticism and enhancing agreeableness have more advantages in reducing anxiety and depression. In addition, digital applications have been proven to be an effective method for personality intervention (90). There are many studies on the relationships between the different Big Five personality traits and anxiety and depression (22,91-95). However, these studies treated anxiety or depression as a whole and overlooked the interconnections of different symptoms. In addition, due to the limitations of the statistical methods, the risk or protective effects of different personality traits on anxiety and depression could not be compared. Based on network analysis, this study provides a clear understanding of the risk or protective effects of each personality trait on different anxiety and depression symptoms and helps quantitatively compare the overall risk or protective effects of different personality traits on anxiety and depression, thus providing a reference for potential intervention targets.

FIGURE 1: The structure of the personality trait-anxiety and depression network among college students and the BEI values of nodes. (A) The structure of the personality trait-anxiety and depression network. Blue edges represent positive partial correlations, and red edges represent negative partial correlations. The wider the edge, the stronger the partial correlation. (B) The BEI values of nodes in the network (raw scores). EXT, Extraversion; AGR, Agreeableness; CON, Conscientiousness; NEU, Neuroticism; OPE, Openness. A1, nervousness or anxiety; A2, uncontrollable worry; A3, excessive worry; A4, trouble relaxing; A5, restlessness; A6, irritability; A7, fear that something might happen; D1, anhedonia; D2, depressed or sad mood; D3, sleep difficulties; D4, fatigue; D5, appetite changes; D6, feelings of worthlessness; D7, concentration difficulties; D8, psychomotor agitation/retardation; D9, suicidal ideation.

Several limitations should be noted when interpreting the findings of the current study. First, a cross-sectional design was used; thus, the directionality and causality of the relationships between personality traits and symptoms of anxiety and depression could not be determined (96). Second, the study used a convenience sample of Chinese college students, potentially limiting external validity and generalization to the broader population. Third, the TIPI-C is a brief scale, and the facets of the personality traits it covers may not be as comprehensive as those of other Big Five personality scales.
Last, the selection of psychological intervention targets was based on network analysis theory; the relatively weak effects of the relationships between personality traits and symptoms of anxiety and depression may indicate limited practical application of the results, and further intervention research is needed to verify whether interventions targeting the identified bridge variables will be effective. In summary, the present study represents the first use of network analysis to elucidate the relationship between personality traits and anxiety and depression in college students. The intuitive network models help to develop a comprehensive understanding of the fine-grained correlations of personality traits with anxiety and depression. BEI values facilitated the identification of the key variables bridging personality traits with anxiety and depression and highlighted potential targets for psychological intervention.

Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement
The studies involving human participants were reviewed and approved by the Ethics Committee of the Xijing Hospital of the Air Force Medical University (KY20224106-1). Written informed consent to participate in this study was provided by the participants or their legal guardian/next of kin.

Author contributions
YG, XL, and XZ: concept and design and critical revision of the manuscript. TY and ZG: acquisition of data and drafting of the manuscript. ZG and TY: analysis and interpretation of the data. All authors contributed to the article and approved the submitted version.

Funding
This study was funded by the Air Force Medical University (KJZFJ2020-1, 2021JQ-335).
Implications of Changing from Grazed or Semi-natural Vegetation to Forestry for Carbon Stores and Fluxes in Upland Organo-mineral Soils in the UK

In the UK, as organo-mineral soils are a significant store of soil organic carbon (SOC), they may become increasingly favoured for the expansion of upland forestry. It is important, therefore, to assess the likely impacts on SOC of this potentially major land use change. Currently, these assessments rely on modelling approaches which assume that afforestation of organo-mineral soils is 'carbon neutral'. This review evaluates this assumption in two ways. Firstly, UK information from the direct measurement of SOC change following afforestation is examined in the context of international studies. Secondly, UK data on the magnitude and direction of the major fluxes in the carbon cycle of semi-natural upland ecosystems are assessed to identify the likely responses of the fluxes to afforestation of organo-mineral soils. There are few directly relevant measurements of SOC change following afforestation of organo-mineral soils in the UK uplands, but there are related studies on peatlands and agricultural soils. Overall, information on the magnitude and direction of change in SOC with afforestation is inconclusive. Data on the accumulation of litter beneath conifer stands have been identified, but the extent to which the carbon held in this pool is incorporated into the stable soil carbon reservoir is uncertain. The effect of afforestation on most carbon fluxes is small because the fluxes are either relatively minor or of the same magnitude and direction irrespective of land use. Compared with undisturbed moorland, particulate organic carbon losses increase throughout the forest cycle, but the data are exclusively from plantation conifer forests and in many cases pre-date current industry best practice guidelines which aim to reduce such losses. The biggest uncertainty in flux estimates is the relative magnitude of the sink for atmospheric carbon as trees grow and mature compared with that lost during site preparation and harvesting. Given the size of this flux relative to many of the others, this should be a focus for future carbon research on these systems.

Introduction

As a party to the United Nations Framework Convention on Climate Change, the UK is required to protect and enhance the sinks and reservoirs for greenhouse gases. Within the UK, approximately 30% (1357 Tg to a depth of 1 m) of the soil organic carbon (SOC) stock is held in organic (peat) soils, with a further 22% in organo-mineral soils (Bradley et al., 2005). In relation to land use, only 9% of the UK SOC stock resides in forest and woodland soils, although the carbon density of these soils is relatively high (25 kg m⁻² or 250 tC ha⁻¹) compared with pasture and arable soils (16 and 12 kg m⁻² or 160 and 120 tC ha⁻¹, respectively; Bradley et al., 2005). Organo-mineral soils may well become increasingly important for the future expansion of forestry in the uplands if the extensive grazing agriculture they currently support becomes economically marginal because of CAP reform. An assessment of the likely effects of this potentially major change in land use on SOC stocks is, therefore, required.
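The carbon densities quoted above mix kg m⁻² and tC ha⁻¹; the conversion is simply a factor of 10 (10,000 m² per hectare, 1,000 kg per tonne). A small Python check of the quoted figures:

```python
# Unit check for the quoted soil carbon densities:
# 1 ha = 10,000 m^2 and 1 t = 1,000 kg, so tC ha^-1 = kg C m^-2 * 10.
def kg_m2_to_t_ha(density_kg_m2: float) -> float:
    return density_kg_m2 * 10_000 / 1_000

for land_use, d in [("forest/woodland", 25), ("pasture", 16), ("arable", 12)]:
    print(f"{land_use}: {d} kg m^-2 = {kg_m2_to_t_ha(d):.0f} tC ha^-1")
```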
Assessments of the effects of afforestation on SOC stocks in the UK have relied heavily on modelling approaches (Dewar and Cannell, 1992; Cannell et al., 1993; Cannell and Dewar, 1995), which have subsequently contributed to the national carbon inventories for the UK (Cannell et al., 1999; Milne et al., 2000). In 1998, the uptake of carbon into UK forest biomass, litter, soil and forest products was estimated as 2.8 Tg, of which 67% was in forest biomass and 22% in soils and litter (Cannell et al., 1999). As far as SOC is concerned, Cannell et al. (1999) consider three types of soil: peats, upland organo-mineral soils and mineral soils. For the purposes of the carbon inventory, the last named are assumed to be planted with broadleaved trees and are predicted to gain SOC as the trees add substantial quantities of litter to soils, including former agricultural land with relatively low organic matter content (Cannell and Dewar, 1995). For peats, information from field measurements of net ecosystem carbon exchange suggests that there will be a small net source of carbon resulting from the effects of drainage and soil drying by the trees (Hargreaves et al., 2003). Upland organo-mineral soils under plantation conifer forest are considered to be carbon neutral, having gained as much carbon from forest litter as has been lost through accelerated decomposition during site preparation and drainage. The purpose of this review is to evaluate this assumption.

Planting trees can affect the ecosystem carbon balance in two opposing ways. Firstly, drainage and disturbance during site preparation and planting may lead to higher SOC losses as increased microbial respiration rates, aeration and disruption of soil aggregates lead to higher rates of organic matter decomposition. Secondly and conversely, carbon will be sequestered by tree growth, and needle fall will contribute to the accumulating litter layer at a faster rate than for the pre-existing ground vegetation. Tree roots and root litter will also contribute to the below-ground carbon stock.

This review considers two primary approaches to assessing the likely effects of changing land use from grazed acid grassland or semi-natural vegetation to forestry on SOC in organo-mineral soils of the UK. These are: (1) direct measurement of change in soil organic carbon, and (2) measurement of biogeochemical fluxes to produce a carbon budget.

As there are relatively few directly relevant studies, the review will draw on studies of related systems to draw inferences and conclusions about the likely effects of afforestation on the SOC balance of organo-mineral soils.

Direct measurement of change in soil organic carbon

Relatively few data are directly relevant to describing the effects on SOC accumulation following the conversion of land on organo-mineral soils in the UK from permanent, extensively grazed grassland/semi-natural moorland to forestry. Changes in carbon accumulation in peat following drainage and afforestation have been measured directly at Lochar Moss (Harrison et al., 1997) and Mindork Moss (Jones et al., 2000) using C:Pb ratios to determine the change in the carbon stock to the ploughed depth (Table 1). Although the sites differed considerably in productivity, the changes in SOC were similar and equally variable. At Mindork Moss, the authors concluded that the effect of tree planting on the peat was carbon neutral whereas Harrison et al.
(1997) proposed that Lochar Moss was accumulating carbon because of the greater productivity of Lochar Moss, where the inputs of needle, branch and root litters were much higher than at Mindork Moss.

Changes in SOC storage in the upper organic-rich horizons of mature forest were measured in 1949/50 by FitzPatrick (1951) and re-sampled some 38 years later by Billett et al. (1990). The fifteen soil profiles were located in stands planted in the 1880s and 1930s. The organic carbon concentration in the soil declined by approximately 5%, although carbon stocks had increased due to an increase in the depth of the horizon (Table 2).

Table 2. Changes in SOC in Alltcailleach Forest between 1949/50 and 1987 (Billett et al., 1990; positive notation represents accumulation by the soil).

                                        Old stand               Younger stand
Change in thickness (cm)                +6 (0 to +10)           +3.5 (0 to +8)
Change in carbon concentration (%C)     −6.0 (−11.3 to +0.3)    −4.5 (−18.3 to +17.4)
Change in SOC store (gC m⁻² yr⁻¹)       +56.8 (−2.2 to +115.1)  +21.1 (−34.0 to +60.2)

Numerous soil properties were measured at Gisburn Forest and compared with data collected from grassland control plots (Moffat and Boswell, 1990). There were few significant differences in soil characteristics between grassland control and tree plots after 32 years of forest growth, partly because, across the site, natural variation in many soil properties confounded tests for species effects. There was little difference in soil organic carbon concentration in the A horizon, expressed as % loss-on-ignition (% LOI), between the tree species (pine, oak, alder and spruce) and grazed plots; the ungrazed plots had significantly higher LOI. There were variations in horizon thickness, with conifers having thicker F and H horizons but thinner A horizons compared with the other treatments. Unfortunately, bulk density data were not available so that SOC stocks could not be estimated.

In Wales, the amounts of carbon stored in the L and F layers have been measured at a chronosequence of Sitka spruce stands aged between 14 and 55 years (Stevens et al., 1994). The quantity of carbon held in the undecomposed litter (L) layer increased sharply to a maximum of c. 4 tC ha⁻¹ at age 25 and then, with the exception of one site, decreased steadily with age (Fig. 1). The input of falling litter followed a similar pattern although the decrease over time from a maximum of 5.5 tC ha⁻¹ yr⁻¹ at age 24 was much steeper (Fig. 2). Quantities of litter deposited in older stands (c. 1.4 tC ha⁻¹ yr⁻¹ at 50+ years) were only slightly greater than in young stands (<20 yrs). Overall, the litter carbon pool accounted for approximately 2.6 tC ha⁻¹ (1.2 to 3.8 tC ha⁻¹) or 16% of the L+F layer carbon pool. There was a steep increase in F layer carbon storage with age up to 53 years (Fig. 3), where the maximum amount of carbon was measured (26 tC ha⁻¹). However, at the oldest site (Beddgelert Forest, 55 years), the F layer carbon store amounted to only 8 tC ha⁻¹ with 2 tC ha⁻¹ stored in the L layer. This suggests that carbon may have been lost from surface horizons at this site, which is very nitrogen-rich and has a relatively open canopy.

In an extensive literature survey, Post and Kwon (2000) provide a geographically wide-ranging summary of soil carbon accumulation rates during forest establishment after agricultural use. The systems considered range from cool temperate forest to wet tropical forest and rates of soil carbon accumulation vary widely, from very large losses of 51 gC m⁻² yr⁻¹ following intensive site preparation in subtropical moist forest to very large accumulation rates of 300 mgC m⁻² yr⁻¹ in subtropical wet hardwoods. In cool temperate moist forest systems, accumulation rates ranged from small losses of the order of 34 gC m⁻² yr⁻¹ to accumulations of 66 gC m⁻² yr⁻¹. In general, accumulation rates increased from temperate to subtropical systems.

The data for cool temperate moist forest systems provided by Post and Kwon (2000) are summarised in Table 3, omitting studies of the conversion of mine spoil and constructed systems to forest. Table 3 also shows annual carbon accumulation rates for the studies described above (Billett et al., 1990; Harrison et al., 1997; Jones et al., 2000) and data from the Rothamsted long-term experiments at Geescroft Wilderness and Broadbalk Wilderness (Jenkinson, 1971).

The data in Table 3 suggest that conversion of land to forest results in SOC accumulation, even in systems where drainage and disturbance have preceded forest planting. Furthermore, the data from Alltcailleach Forest indicate that carbon is continuing to accumulate in mature forests. However, important caveats have to be applied to the interpretation of these data.

There are significant potential sources of error in the direct measurement of change in carbon stocks in forest soils. A high degree of spatial variation is often encountered in forest soil carbon measurements (Huntington et al., 1988; Trettin et al., 1999), especially at sites which have been disturbed by drainage and ploughing. Conen et al. (2004) noted that the variance in the estimate of soil carbon stocks in temperate and boreal forests increased with mean carbon content. In the extreme case of a disturbed site with a large soil carbon content, in excess of a thousand samples may be required to detect a soil carbon change of 5 tC ha⁻¹. The depth of soil sampling can also affect the measurement, as soils can change with respect to depth and bulk density in response to forest development (Jenkinson, 1971; Cannell et al., 1993).

The comparison of annual rates of change in soil carbon can be misleading. Soussana et al. (2004) used a database of 19 000 unpublished records of carbon stocks in French soils to model changes in response to various land-use change scenarios. They noted that soil carbon accumulation rates are non-linear; following a change in land use, high initial rates would later slow down as a new equilibrium is reached. The time to reach equilibrium will be determined by factors such as the ability of the soil to stabilise carbon, the prevailing climate, the quality of the added carbon and the balance between carbon inputs and losses through respiration (Freibauer et al., 2004). In the case of plantation forestry on organo-mineral soils, initial losses of carbon due to site preparation may be compensated for later. The implication is that the annual accumulation rate will depend on the time period covered by the measurement.
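One way to see why the apparent annual rate depends on the measurement period is a first-order sketch in which SOC relaxes exponentially towards a new equilibrium after land-use change. The model form and all parameter values below are arbitrary illustrations, not fitted to any of the studies cited here.

```python
import math

def soc_stock(t, c0=120.0, c_eq=150.0, k=0.05):
    """First-order SOC model: the stock (tC/ha) relaxes from an initial
    value c0 towards a new equilibrium c_eq with rate constant k (1/yr).
    All parameter values are illustrative assumptions."""
    return c_eq + (c0 - c_eq) * math.exp(-k * t)

# The mean annual accumulation rate depends on when the re-survey happens:
for years in (5, 20, 50, 100):
    rate = (soc_stock(years) - soc_stock(0)) / years   # tC/ha/yr
    print(f"{years:>3} yr after change: mean rate = {rate:.2f} tC/ha/yr")
# Early decades show high apparent rates; the mean rate falls as the stock
# approaches equilibrium, so rates from short and long studies are not
# directly comparable.
```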
Some studies reported by Post and Kwon (2000) recorded a loss of carbon at depth in the profile, whilst observing accumulation at the surface. In these cases, organic carbon inputs during early growth of the forest were insufficient to replenish decomposition losses lower in the profile. A decrease in the recalcitrant soil organic carbon pool was predicted by the MERLIN model in a simulation of planting and growth of a 30-year-old Sitka spruce plantation which replaced moorland vegetation growing on acid peaty podzols in north Wales (Emmett et al., 1997). The data used to parameterise the model, which came from a chronosequence of Sitka spruce plantations on similar soils, showed, however, a net accumulation of organic carbon by the ecosystem, both as new wood and in the labile soil organic matter pool, mainly in the forest floor. Evidence of a loss of old soil organic carbon during forest development also comes from a study of the ¹⁴C signal in soil solution dissolved organic carbon (DOC) by Karltun et al. (2005). Samples of DOC were collected from the transition zone between the A and B soil horizons of a forest chronosequence comprising twelve sites in southwest Sweden, ranging from agricultural land to 89-year-old, first generation Norway spruce. In order to explain the observed changes in the ¹⁴C signal along the chronosequence, the authors proposed that two processes were occurring simultaneously, namely changes in litter input and increased mobilisation of soil organic carbon formed before afforestation. The implication is that any assessment of SOC stock change in response to afforestation must extend down to the base of the rooting zone in order to account for potential losses of SOC from the subsoil as well as accumulation at the soil surface.

Guo and Gifford (2002) reported a meta-analysis of studies on the effects of land-use change on soil organic carbon stocks. Analysis of 83 observations (dominated by studies from New Zealand) on the conversion of pasture (including semi-natural grassland) to plantation forest showed an overall decline in soil organic carbon stocks of 10%. Planting broadleaved trees had little effect on soil organic carbon stocks but conifers (mainly Pinus radiata) reduced organic carbon stocks by 12%. The effect on organic carbon stocks was related strongly to rainfall amount. Little effect was observed at lower rainfall sites (<1200 mm) but, where annual rainfall exceeded 1500 mm, there was a significant reduction in soil organic carbon (23%). The implication that, in areas of high annual rainfall, carbon leaching is important was not supported by analysis of soil sampling depth. However, carbon may have been lost by percolation and runoff, rather than simply being re-deposited lower in the soil profile, especially on steeply sloping sites.

UK studies of carbon exchange in conversion of semi-natural grassland and moorland ecosystems to plantation forestry

Components of the carbon budget have been studied to evaluate the effects on the SOC balance of establishing forest plantations on semi-natural moorland and grassland on peats and organo-mineral soils. However, none of these has produced a complete biogeochemical budget for this change in land use, making it necessary to piece together evidence from a range of appropriate studies.
ESTIMATES OF CHANGES IN NET CO₂ EXCHANGE FOLLOWING AFFORESTATION OF PEATS

Some of the most detailed field measurements of net exchange of CO₂ (eddy covariance method plus supporting meteorological data) have been undertaken in forests planted on deep peats in Scotland (Hargreaves and Fowler, 2000; Hargreaves et al., 2003). At the Auchencorth Moss site, flux modelling using relationships between daytime negative fluxes and solar radiation, and night-time positive fluxes and temperature, has been used to infill missing field measurements and to estimate annual fluxes (a generic sketch of this kind of gap-filling is given below).

The annual net exchange of CO₂ for undisturbed peatlands estimated from these studies (Table 4) indicates that peatlands act as a net sink for CO₂ from the atmosphere. Following further measurements, the figures for Auchencorth Moss have been revised upwards to a mean annual rate of 27.8 gC m⁻² yr⁻¹ (SE = ±2.5; Billett et al., 2004). Even so, this is much less than the mean range for UK peatlands provided by Immirzi et al. (quoted in Hargreaves et al., 2003) of 0.4 to 0.7 tC ha⁻¹ yr⁻¹ (40 to 70 gC m⁻² yr⁻¹). Net surface exchange measurements are subject to uncertainty caused by variations in the type and age of peat and the prevailing weather (Billett et al., 2004; Hargreaves et al., 2003).

To simulate the effect of drainage and afforestation of peatland on net CO₂ exchange, the flux model developed at Auchencorth Moss was parameterised with values from field campaigns of measurement at four afforested peatland sites representing a chronosequence of development from 1–4, 8, 9 and 26 years. Simulations based on the climate at Mindork Moss (Hargreaves et al., 2003) showed that in the first two years following drainage, the site acted as a net source of CO₂ (2 to 4 tC ha⁻¹ yr⁻¹) and returned to becoming a net sink by year five. If the loss from peat occurred only in the first four years following disturbance, the maximum total loss would be +9.0 tC ha⁻¹ (Table 5). By year 26, the cumulative exchange of CO₂ was 54 tC ha⁻¹, representing an enhancement of 44.2 tC ha⁻¹ over and above unafforested peatland (Tables 4 and 5).

Using the climate data for Auchencorth Moss, the predicted net exchange of CO₂ resulting from simulated drainage and afforestation was similar to that at Mindork Moss (Hargreaves and Fowler, 2000). Following drainage, the site lost less carbon (8.5 tC ha⁻¹) than in the Mindork Moss simulation because of lower mean annual temperatures. The cumulative exchange of CO₂ after 26 years was slightly smaller at 50 tC ha⁻¹ (Table 5).

The net exchange of CO₂ with the peat plus surface vegetation was further investigated at Mindork Moss by subtracting the net carbon exchange of the growing trees, estimated using the CFLOW model (Dewar and Cannell, 1992; Milne et al., 1998), from the overall net CO₂ exchange measured by eddy covariance (Hargreaves et al., 2003). In the first five years, the peat and surface vegetation were a net CO₂ source due to disturbance and drainage, and the time course of CO₂ exchange tracked the overall CO₂ exchange closely, reflecting the minor role of the trees at this time. Between years five and ten, the peat plus vegetation became a net sink, implying that the uptake of CO₂ by the ground vegetation exceeded the loss from peat by decomposition.
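The exact functional forms used in the Auchencorth flux model are not reproduced here, but gap-filling of eddy-covariance records commonly pairs a light-response curve for daytime net exchange with a temperature response for night-time respiration, matching the daytime-radiation and night-time-temperature relationships described above. The sketch below illustrates that general approach only; the rectangular-hyperbola and Q₁₀ forms, all parameter values and the PAR threshold are assumptions for illustration, not values from the cited studies.

```python
def daytime_nee(par, alpha=-0.02, gpp_max=-8.0, r_day=2.5):
    """Rectangular-hyperbola light response for daytime net CO2 exchange
    (umol CO2 m-2 s-1; negative = uptake from the atmosphere).
    alpha, gpp_max and r_day are arbitrary illustrative parameters."""
    return (alpha * par * gpp_max) / (alpha * par + gpp_max) + r_day

def nighttime_respiration(temp_c, r10=2.0, q10=2.5):
    """Q10-type temperature response for night-time respiration
    (positive = loss to the atmosphere); r10 is respiration at 10 degC."""
    return r10 * q10 ** ((temp_c - 10.0) / 10.0)

def fill_gap(par, temp_c, par_threshold=10.0):
    """Infill one missing flux value: light response by day, temperature
    response by night (the PAR threshold is an assumed cut-off)."""
    if par > par_threshold:
        return daytime_nee(par)
    return nighttime_respiration(temp_c)

print(fill_gap(par=800.0, temp_c=14.0))  # daytime: net uptake (negative)
print(fill_gap(par=0.0, temp_c=8.0))     # night: respiratory loss (positive)
```

Summing such infilled half-hourly values over a year is what yields annual net exchange figures of the kind quoted for Auchencorth Moss.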
Beyond canopy closure at about year 15, there was little ground vegetation and the peat continued to be a net source of CO₂ provided that the trees in the CFLOW model were given a yield class of 10 or greater. From measurements at the site by the Forestry Commission and CEH, the average yield class for the site was 10 to 12 over the 26-year period. Subtracting the net cumulative carbon gain in the trees (assuming yield class 10–12) from the overall net CO₂ exchange for the site gave a cumulative carbon loss from the peat plus vegetation of +7.3 to +16 tC ha⁻¹. Hargreaves et al. (2003) argue that approximately 9 tC ha⁻¹ of this loss was accounted for in the first nine years, so that the remaining loss between years 10 and 26 of approximately 5 tC ha⁻¹ was attributable to the peat. This gives an annual CO₂ loss rate of less than 0.5 tC ha⁻¹ yr⁻¹ (50 gC m⁻² yr⁻¹).

The results from the CO₂ exchange experiments can be compared with the measurements of soil carbon stocks from Mindork Moss and Lochar Moss described above. Combining the measured changes in peat carbon storage at Mindork Moss with an estimate of carbon accumulation in biomass predicted by CFLOW, plus the carbon held in the litter, the overall change in carbon stock was between 24 and 81 tC ha⁻¹ (Table 6), so that the site was a sink for atmospheric CO₂ for up to 26 years. The more productive site at Lochar Moss showed much larger carbon accumulations in litter and living biomass (estimated from site measurements; Harrison et al., 1997), but similar and equally variable changes in soil carbon. Again, the site was an atmospheric CO₂ sink over the 28 years of forest growth.

Measurement of the net exchange of CO₂ accounts for only one flux component of the carbon balance in an ecosystem and must be supplemented by estimates of other fluxes if the overall effect on soil carbon sequestration is to be determined. Although the estimate of carbon exchange for afforested peat described by Hargreaves et al. (2003) includes carbon uptake into biomass, it does not account for potential carbon losses in soil water and runoff (dissolved and particulate organic carbon, dissolved inorganic carbon, CO₂ and methane) and in terrestrial emissions of methane. Ignoring these carbon losses may lead to a significant overestimation of the terrestrial carbon accumulation rate (Hope et al., 2001), particularly if the fluxes change significantly during the forest cycle.

Accounting for the other carbon fluxes: how important are they?

PEATLAND CARBON BUDGET

Recent attempts to describe the biogeochemical cycle of carbon in UK upland peat systems can be used to illustrate the potential importance of fluxes, other than net CO₂ exchange, to the soil carbon balance. They also highlight some of the uncertainties in carbon balance studies. Billett et al. (2004) used measurements of net CO₂ exchange, stream water carbon fluxes and stream surface CO₂ evasion, combined with literature values of carbon inputs in precipitation and terrestrial methane emissions, to construct an annual carbon budget at Auchencorth Moss. A similar exercise has been undertaken for Moorhouse in the Pennines (Worrall et al., 2003; Table 7).
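Budgets such as those in Table 7 are assembled by summing the individual fluxes under a single sign convention (here, as in the text, negative values represent depletion of atmospheric carbon). A minimal bookkeeping sketch follows; the flux values are invented for illustration and are only of the rough magnitudes discussed in this section, not data from either site.

```python
# Sign convention as in the text: negative = depletion of atmospheric
# carbon (sink); positive = loss from the ecosystem to atmosphere/water.
# All values below are illustrative (gC m-2 yr-1), not site data.
fluxes = {
    "net CO2 exchange":        -28.0,  # eddy-covariance estimate
    "precipitation C input":    -1.0,
    "methane emission":          5.0,
    "stream DOC export":        10.0,
    "stream POC export":         2.0,
    "stream DIC + CO2 export":   2.0,
    "stream CO2 evasion":        8.0,
}

net = sum(fluxes.values())
for name, value in fluxes.items():
    print(f"{name:<26}{value:>7.1f}")
print(f"{'net ecosystem balance':<26}{net:>7.1f}",
      "(negative = net sink)" if net < 0 else "(positive = net source)")
```

Because the net balance is a small difference between larger opposing terms, omitting or mis-estimating any single flux (as discussed next) can flip the conclusion from sink to source.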
Both studies highlight the importance of carbon losses in stream water to the overall carbon balance of the catchments as this flux is of similar magnitude to the net exchange of CO₂. The stream flux term contains several components. For the Pennine site, particulate organic carbon (POC) accounts for approximately 50% of the stream flux compared to about 5% at Auchencorth Moss. Large POC fluxes have been reported for a number of Pennine streams (Labadz et al., 1991; Hutchinson, 1995) whereas much smaller fluxes have been observed at sites in Scotland and mid-Wales (Dawson et al., 2002), where DOC accounts for the majority (69 to 88%) of the riverine carbon flux. Dissolved inorganic carbon plus dissolved CO₂ fluxes vary in significance, ranging from 2% to 25% of the total stream carbon flux (Dawson et al., 2002; Worrall et al., 2003; Billett et al., 2004).

Table 7. Carbon budgets for UK peatlands (gC m⁻² yr⁻¹); carbon sink is given a negative notation as it represents depletion of atmospheric carbon.

Very different conclusions are drawn from the two studies, with Auchencorth Moss acting as a net carbon source and Moorhouse as a sink for carbon, exemplifying some of the uncertainties inherent in budget studies. Worrall et al. (2003) used the published literature values for net uptake of CO₂ into UK peatlands of 40 to 70 gC m⁻² yr⁻¹ (Immirzi et al., 1992, cited by Hargreaves et al., 2003) to estimate net ecosystem exchange of carbon. This figure is approximately double that measured using eddy covariance methods at Auchencorth Moss. Billett et al. (2004) acknowledge that such measurements are very sensitive to prevailing weather conditions, indicating a range of possible values for the 24-month measurement period of between 7.6 and 43.8 gC m⁻² yr⁻¹. Although these fall within the range of values published from net CO₂ exchange studies in other northern hemisphere peatland environments (10 to 76 gC m⁻² yr⁻¹; Billett et al., 2004), they are at the lower end for UK systems. The other important flux omitted from the budget of Worrall et al. (2003) is the evasion loss of CO₂ from the stream, which can comprise a significant flux of gaseous carbon in catchments dominated by peatlands (Hope et al., 2001).

In relation to the afforestation of organo-mineral soils, these studies confirm the uncertainties attached to net CO₂ exchange measurements, which can make the difference between net carbon accumulation and net loss. Losses of carbon in runoff may be a significant flux out of the system, similar in size to the carbon captured by net CO₂ exchange.

DISSOLVED ORGANIC CARBON FLUXES

There has been considerable recent interest in riverine DOC dynamics with widespread reporting of trends of increasing concentrations in surface waters over the last two decades (Worrall et al., 2004; Evans et al., 2005). Concern has been expressed that these trends might be indicative of destabilisation of soil carbon stores, particularly in catchments dominated by peat soils (Freeman et al., 2001). Several environmental drivers have been proposed to account for the trends, which include a response to increased temperatures (Freeman et al., 2001) and/or increased concentrations of atmospheric CO₂ (Freeman et al., 2004), effects of major drought-rewet cycles (Watts et al., 2001), response to a change in atmospheric acid anion loading following emission reductions (Clark et al., 2005) or an ionic strength effect (Evans et al., 2005).

In terms of land-use change, a review by Hope et al.
(1994) indicated that temperate forests exported slightly less DOC (3.3 gC m⁻² yr⁻¹) compared to moorland and grassland systems (4.3 gC m⁻² yr⁻¹). These figures are broadly confirmed by data from catchments dominated by organo-mineral soils in Wales (Table 8), although peatland catchments have larger DOC exports of between 8 and 17 gC m⁻² yr⁻¹.

Generally, exports of DOC from catchments containing plantation conifer forest are slightly lower than those from moorland/acid grassland systems on equivalent organo-mineral soils. The chronosequence of 20 Welsh plantation forests showed widely ranging DOC fluxes (0.5 to 19 gC m⁻² yr⁻¹), with no consistent relationship between flux and forest age. Many of the catchments contained some non-forest land, although the highest fluxes (13.6 and 19.1 gC m⁻² yr⁻¹) were observed in very small (1 ha) catchments planted entirely with young Sitka spruce (10–16 years). Estimates of DOC soil water fluxes beneath the rooting zone in the podzolic B horizon of these forest stands show a decrease with age (Fig. 4). Fluxes in the oldest stands are smaller than those measured in moorland/acid grassland on equivalent podzolic soils.

Soil water DOC fluxes beneath the surface O horizon are much larger than in the B horizon (mean of 22 gC m⁻² yr⁻¹ for moorland and 18 gC m⁻² yr⁻¹ for forest) but there is no relationship between flux and forest age. These data suggest that the stage of forest development has little effect on DOC production in the surface organic horizons. However, the B horizon of older forest soils may have a greater capacity to adsorb carbon.

PARTICULATE ORGANIC CARBON (POC)

In comparison with DOC, there is much less information about catchment exports of POC, although it generally comprises about 10% of the total organic carbon (TOC) export (Hope et al., 1994). The effects of afforestation on catchment sediment losses have been the subject of debate in the past (Moffat, 1989; Soutar, 1989a). In a review of sediment losses resulting from afforestation of upland catchments, Soutar (1989b) re-assessed evidence presented by Moffat (1988) and concluded that, in the long term, sediment losses are three to four times greater in afforested catchments compared with non-forest controls or pre-afforestation conditions. However, much larger losses, up to 50 times pre-afforestation rates, are possible following ploughing, drainage, road building and harvesting. Steep slopes and storms can, in some cases, exacerbate erosion problems in forest catchments. The source of the material is clearly crucial in determining whether sediment losses will affect soil carbon stores within the catchment. For example, loss of road surface material due to timber lorry traffic during harvesting is of less concern than loss of peat during pre-afforestation drainage and ploughing.
Some of the best long-term information on sediment losses from afforested and moorland upland catchments is available from Plynlimon in mid-Wales, where suspended sediment losses have been measured at irregular intervals over the last two decades or more (Leeks and Marks, 1997). To provide an estimate of catchment POC fluxes, the long-term annual means of these data have been combined with an assumed carbon content of 20% (Walling and Webb, 1981, suggest a range of 10–30%; Table 9). The estimates suggest that POC losses from forest catchments are considerably higher than those from relatively undisturbed grassland and moorland catchments. However, where peat is actively eroding in moorland catchments, as in the Pennine site, POC losses can be much higher still (31 to 39 gC m⁻² yr⁻¹; Labadz et al., 1991; Hutchinson, 1995) and the carbon content of suspended sediments may be 25–50%. The data from the lower part of the Afon Hore at Plynlimon (L. Hore in Tables 8 and 9) include a two-year period during which 50% (c. 130 ha) of the forested part of the catchment was clearfelled. This resulted in a very large increase in suspended sediment load and POC flux (assuming 20% carbon content) from c. 3.5 gC m⁻² yr⁻¹ in the two years prior to felling to a maximum of 28 gC m⁻² yr⁻¹ in the second year of felling. Annual losses were sustained at around 18 to 20 gC m⁻² yr⁻¹ for the subsequent three years.

The 347 ha Afon Hafren catchment, which is adjacent to the Afon Hore at Plynlimon, comprises a peat-dominated, moorland headwater area of approximately 93 ha while the remainder of the catchment contains a substantial area (167 ha) of commercial plantation forestry consisting mainly of first and second rotation Sitka spruce (Picea sitchensis). The catchment is monitored at two points: the U. Hafren, which drains the moorland headwaters, and the L. Hafren, which drains both the moorland and forested parts of the catchment. Although the measurements cover different periods, the data in Table 9 indicate that annual exports of TOC at the two points in the catchment are approximately equal at c. 11 gC m⁻² yr⁻¹. The peat-dominated headwaters export twice as much DOC but 2.5 times less POC compared with the lower part of the catchment, which is dominated by organo-mineral soils and includes the large area of plantation forest. The shift in the flux terms represents the combined effects of a change in land use and dominant soil type between the headwaters and the lower parts of the catchment and is consistent with the data from the other sites.

The study at Coalburn in Northumberland (Robinson et al., 1998) provides a good example of sediment losses associated with the initial stages of afforestation. The 1.5 km² catchment comprised deep peats which were drained and ploughed prior to planting in the summer of 1972. The pre-forestry drainage density was approximately 3.5 km km⁻², which was increased to 200 km km⁻² by forestry operations. Prior to treatment, suspended sediment losses were c. 3 g m⁻² yr⁻¹. An estimated 120 t km⁻² of sediment was lost as a result of drainage (equivalent to 50 years of sediment load at pre-drainage rates). Assuming this material was peat with a carbon content of 50% (Harrison et al., 1997), the amount of carbon lost was approximately 0.6 tC ha⁻¹. The probable long-term effect of drainage was to increase sediment losses to c. 12 g m⁻² yr⁻¹ (Robinson and Blyth, 1982), equivalent to 6 gC m⁻² yr⁻¹ assuming a carbon content of 50% for the sediment.
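The POC estimates above are simple unit conversions from suspended-sediment yields and an assumed carbon fraction. The sketch below reproduces the arithmetic for the Coalburn figures quoted in this paragraph; the function and variable names are ours, not from the cited studies.

```python
def poc_flux(sediment_yield_g_m2_yr, carbon_fraction):
    """Convert a suspended-sediment yield to a POC flux (gC m-2 yr-1)."""
    return sediment_yield_g_m2_yr * carbon_fraction

# Coalburn figures from the text: peat sediment, assumed 50% carbon.
print(poc_flux(12.0, 0.50))   # long-term post-drainage rate: 6 gC m-2 yr-1

# One-off drainage loss: 120 t/km2 of peat; 1 km2 = 100 ha, so 1.2 t/ha.
drainage_loss_tC_per_ha = 120.0 / 100.0 * 0.50
print(drainage_loss_tC_per_ha)  # ~0.6 tC/ha, as quoted in the text
```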
The main loss of material resulting from pre-afforestation drainage at Coalburn was thought to have occurred in the first five years after the operations (Robinson and Blyth, 1982). If this were the case, and if the rate of carbon loss stabilised thereafter at 6 gC m⁻² yr⁻¹, the total carbon loss for a 26-year period (equivalent to the measurements at Lochar Moss and Mindork Moss) would have been about 1.9 tC ha⁻¹. Over the same period, undisturbed peatland would have lost approximately 0.4 tC ha⁻¹. Although afforestation may have resulted in a roughly five-fold increase in carbon losses as suspended sediment, these are still relatively small quantities compared to the other components of the carbon balance measured at Mindork Moss and Lochar Moss.

It is important to note that most, if not all, data describing the effects of afforestation on sediment losses from upland catchments reflect forestry practices of an earlier era. Many data pre-date the Forest and Water Guidelines (HMSO, 2003), which now determine good practice for sustaining water quality in forest catchments throughout the forest cycle. In respect of future afforestation of organo-mineral soils, it is unlikely that the extensive ground preparation classically associated with plantation forestry would be undertaken. Good practice as determined by the UK Forestry Standard (Forestry Commission, 1998) suggests that low impact techniques would be employed which would help safeguard both water quality and soil carbon stocks.

DISSOLVED INORGANIC CARBON (DIC), CO₂ AND CH₄ EXPORTS

Inorganic carbon can be exported from catchments in the form of the bicarbonate anion (HCO₃⁻), dissolved CO₂ and dissolved CH₄. Dissolved CH₄ fluxes, even from peat-dominated catchments, are considered negligible (<0.01 gC m⁻² yr⁻¹; Dawson et al., 2002; Billett et al., 2004) whilst bicarbonate-C plus CO₂-C fluxes comprised less than 10% of the stream water carbon export (Dawson et al., 2002; Billett et al., 2004). Much higher proportional losses (25%) were estimated for the Pennine site described by Worrall et al. (2003). For the Plynlimon catchments, bicarbonate-C export was less than 1 gC m⁻² yr⁻¹ (Reynolds et al., 1989), amounting to between 10 and 15% of the TOC + DIC flux.
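The riverine DOC, POC and DIC exports discussed in the last three subsections all reduce to the same arithmetic: combining concentrations with discharge over time and normalising by catchment area. A common interval-based estimator is sketched below as a generic illustration; the numbers are invented and this is not necessarily the exact procedure used in any of the cited studies.

```python
def annual_carbon_export(samples, catchment_area_m2):
    """Interval-method flux estimate for any dissolved/particulate C species.
    samples: list of (concentration in mgC/L, mean discharge in m3/s,
    interval length in days), one entry per sampling period.
    Returns the export in gC m-2 yr-1."""
    total_g = 0.0
    for conc_mg_per_l, q_m3_per_s, days in samples:
        litres = q_m3_per_s * 1000.0 * 86400.0 * days  # m3/s -> litres over the period
        total_g += conc_mg_per_l * litres / 1000.0     # mg -> g
    return total_g / catchment_area_m2

# Hypothetical year split into two flow seasons for a 3.5 km2 catchment
samples = [(4.0, 0.10, 180), (8.0, 0.25, 185)]         # illustrative values
print(f"{annual_carbon_export(samples, 3.5e6):.1f} gC m-2 yr-1")  # ~10.9
```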
Relatively few studies of carbon budgets have incorporated the gaseous carbon losses from water bodies resulting from evasion of CO₂ and CH₄ (Hope et al., 2001), although more recently data have been published for peatland systems (Dawson et al., 2002; Worrall et al., 2003; Billett et al., 2004). For peatland systems, total gaseous evasion losses expressed per unit area of catchment may amount to 28 to 70% of the net carbon accumulation (20–50 gC m⁻² yr⁻¹; Hope et al., 2001). This pathway may, therefore, represent a significant, although localised, loss of carbon from upland systems with organic-rich soils. The importance of this pathway will depend on the nature of the soils and the land use. In saturated peat soils, vertical diffusion of gases is much slower than horizontal transport by mass flow in water (Clymo and Pearce, 1995). Evasion losses from the soil surface are, therefore, likely to be relatively low, with greater transport of gases via soil water to the stream (Hope et al., 2001). In thinner, more permeable, freely draining soils, evasion from the soil surface may become the more important pathway. In view of the lack of data and the potential interactions with soil type and land use, it will be difficult to generalise on the likely effects on evasion losses of carbon from organo-mineral soils due to forest planting on moorland and acid grassland. It is probable that methane fluxes will decrease significantly because increases in evapotranspiration losses will tend to dry out the soil and lower the water table.

SOIL METHANE EMISSIONS

Peatlands are considered one of the main natural sources of terrestrial methane emissions. In peatland systems typical of the British uplands, annual methane production has been estimated as lying between 1.5 and 11 gC m⁻² yr⁻¹ (Worrall et al., 2003; Billett et al., 2004). The measured methane flux to the atmosphere from soils is the net result of methane production, transport and consumption. Production of methane in soils requires an anaerobic environment with strongly reducing conditions, which may require soils to be flooded or waterlogged for several days (Smith and Conen, 2004). Unless a soil layer such as an iron pan impedes drainage, these conditions are unlikely to persist for extended periods in organo-mineral soils, with the possible exception of gleyed soils. Furthermore, if conditions favourable for methanogenesis exist at depth in the soil, it is highly likely that the methane produced would be oxidised en route to the soil surface. Although data are scarce, it seems likely that methane emissions from upland organo-mineral soils will be very small; the soils are more likely to act as sinks for methane (Smith et al., 2000). Land-use change from moorland/acid grassland to forestry will tend to dry out soils due to increased evaporation from the forest canopy. Thus, as in the case for afforested peatlands (Hargreaves et al., 2003), it would be valid to assume that methane emissions from afforested organo-mineral soils will be negligible.
Forest harvesting

Assessment of the likely changes to SOC stocks due to afforestation of organo-mineral soils has to consider possible effects of forest harvesting in commercially managed plantations. Johnson and Curtis (2001) have recently undertaken a meta-analysis of the effects of forest management on SOC storage. The harvesting database includes 73 entries. No UK data are included but there are results from a few European, mainly Scandinavian, studies. The meta-analysis explicitly excluded effects of harvesting on soil carbon pools in the surface O horizon. The authors argued that available data were inadequate to assess the outcome of the complex set of factors which determine the O horizon response. These include, amongst others: harvesting residue management, site decomposition rate and the nature of the vegetative regrowth.

The meta-analysis concluded that forest harvesting generally had little effect on SOC storage either in the A horizon (uppermost mineral soil) or in the whole mineral soil profile. For the A horizon, stem-only harvesting increased SOC whereas a decrease was found for whole-tree harvesting. Within the stem-only harvesting category, there was a significant species effect, with conifers producing more soil carbon after harvest than hardwoods and mixed stands, which both showed a decrease in SOC. Several studies in the meta-analysis pointed to the importance of harvest residue management for mineral soil carbon; leaving residues on site had a largely positive effect for mineral soil carbon beneath conifers but little or no effect beneath hardwood and mixed stands. Another significant factor in the assessment was time since harvest. A general trend observed in many studies was that, in the short term, soil C increased as high C/N ratio harvest residues were incorporated in the soil. However, over the longer term, soil carbon re-equilibrated to lower levels, with C/N ratios approaching background values. A study across four sites in the United States confirmed these observations, noting that differences in litter carbon content due to harvest residues had neither a long-term nor a lasting effect on SOC stocks (Johnson et al., 2004). Rather, where long-term effects of harvest residues were observed, these were mainly seen as differences between sites in biomass carbon rather than in SOC stocks (Johnson et al., 2002).

Within the UK, most of the biogeochemical studies of forest harvesting have focused on conifer plantations and the likely consequences for water quality in acid-sensitive areas. As a result, measurements relating to carbon have tended to be patchy and relatively short-term. Several catchment studies in the UK provide information on the response of stream water DOC concentrations to clearfelling (Neal et al., 1998, 2003, 2005; Reynolds et al., 2004). In general, there is only a small, relatively short-term response in stream water and groundwater DOC concentrations to stem-only harvesting, which can be hard to separate from background fluctuations (Neal et al., 2005). A larger stream water DOC response has been observed at some sites on wetter, peaty gley and surface-water gley soils (Neal et al., 1998). Over the longer term, increases in DOC concentrations at felled sites are generally indistinguishable from trends observed more widely in UK rivers (Neal et al., 2005).
The stream water response is not consistent with the large short-term increases in the concentrations of DOC observed in surface organic horizon soil waters at felled sites on organo-mineral soils (Hughes et al., 1990). The soil water response can be attributed to site disturbance, an increase in summer soil temperatures due to a change in microclimate following removal of shade, and an increase in nutrient supply from the felling debris stimulating microbial activity in the organic surface horizons (Zech et al., 1994). Field data from Plynlimon indicate that large amounts of DOC are also released from decomposing harvest residues (Table 10), whilst mobilisation of DOC from the forest floor is more significant in the second year following harvesting.

It seems that the DOC in soil waters is not reaching surface or groundwaters, suggesting that it may be mineralised en route or immobilised in the mineral soil horizons (Table 11; Hughes et al., 1990 and Zech et al., 1994). In the latter case, the carbon would be largely retained on site although the efficiency of this process may be impeded by saturation of sorption sites by the increased DOC flux. At sites subject to acid deposition, loss of sorption sites due to acid buffering by iron and aluminium oxides/hydroxides and competition for sorption sites by sulphate may also occur (Zech et al., 1994).

Losses of riverine particulate material have also been studied in catchments subject to harvesting. As noted above, large increases in riverine suspended sediment fluxes can accompany felling operations within a catchment, and one of the main sources of this material is forest road erosion (Leeks and Marks, 1997). However, harvesting practice can also be a significant factor determining soil loss from sites. Whole-tree harvesting can make soils vulnerable to large erosion losses, especially on steep slopes. In a small plot study on a steeply sloping site in Scotland, Lewis and Neustein (1971) reported losses of 13.6 g m⁻² yr⁻¹ of soil organic matter from a felled site, which increased to 20.3 g m⁻² yr⁻¹ when harvesting residues were removed. Whilst this was a short-term, small-scale study, it illustrates the potential problems with this type of harvesting. These are addressed in detail in the current good practice guidelines for whole-tree harvesting, along with proposals for preventative and remedial action (Nisbet et al., 1997). For example, leaving harvest residues in place reduces the risk of soil erosion in high rainfall areas, even on slopes as steep as 35° (Lewis and Neustein, 1971).
Soil methane emissions from established forests are low, reflecting the drier soil conditions beneath the forest canopy compared with open moorland. Harvesting of trees allows more water to enter the site as evapotranspiration losses are reduced. The increase in soil moisture status, which may be accompanied by a raising of the water table, can increase methane emissions from the soil substantially (Smith and Conen, 2004). The effect on net CO₂ exchange is less clear. Initially, the clear-felled sites might become a net source of CO₂ to the atmosphere due to soil disturbance and decomposition of harvest residues. The magnitude of this effect will depend on site factors such as nutrient status, soil type, prevailing climate and residue management (e.g. accumulation of brash into piles). Clear-felled sites, in particular those where harvest residues have been removed, re-vegetate rapidly. Within three years of felling at Beddgelert Forest, there was 84% vegetation cover on whole-tree harvested plots compared with <60% on stem-only harvested plots (Stevens et al., 1995). This will increase the strength of the CO₂-carbon sink and may be sufficient to make the site a net sink for atmospheric CO₂.

Climate change, nitrogen deposition and increased forest productivity

Most of the work on soil carbon and UK forests has been undertaken over the last two decades. More recently, there has been a growing awareness that forest productivity has increased across continental Europe and the UK (Spiecker et al., 1996; Cannell et al., 1998) but there is scientific debate and uncertainty about why forests growing on a given site type are now more productive than earlier in the last century.

The change has been attributed to the effects of increased atmospheric CO₂ concentrations (Melillo et al., 1993), increased atmospheric nitrogen deposition (Nadelhoffer et al., 1999) and increased temperatures (Myneni et al., 1997), either singly or in combination. In the UK, improvements in silvicultural techniques and the use of better genetic material have also been proposed as an explanation (Cannell et al., 1998). Using two process-based forest growth models with very different process representation, Cannell et al. (1998) predicted that up to one half of the increase in General Yield Class in plantation forests observed in the 20th century could be accounted for by the combined effects of nitrogen deposition, CO₂ concentration and temperature. Individually, nitrogen deposition and CO₂ concentration accounted for about 7–14% of the increased productivity but their combined effect was approximately additive. The effect of warming in combination with CO₂ concentration was relatively modest.

The fate of the carbon sequestered as a result of increased forest productivity is also the subject of intense debate. In particular, this debate has focused on the fate of the increased amounts of atmospheric nitrogen deposition to forests (Nadelhoffer et al., 1999; Jenkinson et al., 1999; Sievering, 1999). If the additional N inputs to forests are primarily taken in as woody biomass with high C/N ratios (200–500), then the effect on forest carbon uptake will be large. If, however, the nitrogen is taken up mainly into the soil with low C/N ratios (10–30), the effect on carbon sequestration will be relatively minor. Using data from ¹⁵N isotope studies at nine sites across North America and Europe, Nadelhoffer et al.
(1999) concluded that the latter was the case; this was in direct contrast to mass balance and modelling studies which predicted a larger effect for carbon sequestration by forest biomass (e.g. Holland et al., 1997). An important consequence of the debate is that if the increased amounts of carbon sequestered by forests reside mainly in the above-ground biomass rather than the long-term soil carbon pool, the effect will be transitory because other factors such as water and nutrient availability may ultimately limit productivity. However, the feed-back mechanisms and interactions are complex and more work is required to elucidate fully the nature of the response.

Conclusions: what are the likely effects of land-use change for carbon in upland organo-mineral soils?

EVIDENCE FROM DIRECT MEASUREMENT OF SOC STOCKS

In the context of organo-mineral soils in the UK uplands, the evidence from direct measurements of SOC change, following a change in land use from semi-natural grassland/moorland to forestry, was inconclusive about a major effect on SOC stocks. International studies suggest that planting broadleaved trees will have little effect on SOC stocks, whereas conifers, especially in high rainfall areas, may deplete SOC stocks. Few relevant UK data sets address the issue and, in the light of the inherent problems of soil heterogeneity and vertical gradients in organic matter content and bulk density, these must be considered uncertain. Furthermore, the extent to which results from other systems, for example peatlands and lowland agricultural soils, can be extended to upland organo-mineral soils is uncertain. Rooting patterns and tree growth rates will differ considerably and the effects of tree species must also be taken into account.

There is evidence that organic matter accumulates at the surface of organo-mineral soils in conifer plantations, but the long-term fate of this material depends on the extent to which it is incorporated into the long-term, stable carbon pool within the soil profile (Pataki et al., 2003). This is likely to depend on the tree species, soil type, site nutrient status, site hydrology and climate. Furthermore, any assessment of the change in SOC resulting from afforestation must take account of potential losses of pre-existing soil organic carbon from the sub-soil, which may offset some of the carbon gained from litter inputs at the soil surface.

Leaf and needle litter accumulation is an important process for transferring newly fixed carbon to the surface of forest soils. Another significant but much less well understood and quantified pathway is the exchange of carbon between plant roots and the soil (Jones et al., 2004). This has the potential to transfer newly fixed carbon directly into the soil at depths where, subsequently, it may be incorporated into the stable, long-term carbon pool. Recent work suggests that carbon sequestration via this pathway may decline as atmospheric CO₂ concentrations rise, with obvious implications for the terrestrial carbon sink (Heath et al., 2005).

EVIDENCE FROM FLUX STUDIES

The questions addressed by flux measurements essentially come down to how much of the carbon exchanged with the atmosphere finds its way into the stable, long-term SOC store, and how far the other flux pathways offset the carbon sink. To summarise the previous discussions, a simple matrix has been developed which provides a qualitative indication of the likely effects of forestry development on individual fluxes (Table 12).
The summary suggests that, overall, there may be little net effect of forest development on many of the carbon fluxes, particularly for DOC and DIC, although the latter is uncertain. POC fluxes are likely to be enhanced at all stages of the forest cycle but, as noted above, changes in forest management practice in line with current guidelines should constrain these losses.

The effect of forest harvesting on net CO₂ exchange may be neutral, depending on whether carbon lost to the atmosphere from increased decomposition rates and microbial activity is subsequently balanced by uptake of carbon in re-growing vegetation. This is likely to be very site-specific and will be determined to a great extent by harvest residue management and site environmental conditions. However, some measurement campaigns similar to those conducted during the afforestation of peatland sites would provide useful data.

Overall, changing land use on organo-mineral soils, from semi-natural/grazed grassland and moorland to forest, is likely to have a relatively small effect on SOC storage; this is in agreement with the assumptions made by the UK national carbon inventory. The main uncertainties which deserve further research effort are: (i) the relative magnitude of the sink for atmospheric carbon as trees grow and mature compared with that lost during site preparation and harvesting, given that the other fluxes are relatively small or of the same magnitude and direction irrespective of land use, or can be controlled by site management; and (ii) the extent to which carbon, newly fixed during forest growth and deposited as litter, either at the ground surface or within the soil profile, is transferred to the long-term, stable soil carbon pool.

Figure and table captions:

Fig. 4. DOC fluxes in podzolic B horizon soil waters beneath moorland/acid grassland (age 0) and a chronosequence of Sitka spruce stands in Wales (CEH unpublished data).

Table 3. Summary of annual soil organic carbon accumulation rates. (1) Excludes the data from Rothamsted, which are shown separately.

Table 4. Net ecosystem exchange of CO₂ for undisturbed peatlands (Hargreaves and Fowler, 2000; carbon sink is given a negative notation as it represents depletion of atmospheric carbon).

Table 5. Net ecosystem exchange of CO₂ for drained and afforested peatlands in Scotland (Hargreaves and Fowler, 2000; Hargreaves et al., 2003; carbon sink is given a negative notation as it represents depletion of atmospheric carbon).

Table 6. Carbon balance for drained and afforested peatland (Jones et al., 2000; Harrison et al., 1997; carbon sink is given a negative notation as it represents depletion of atmospheric carbon).

Table 9. Estimated POC and TOC exports (gC m⁻² yr⁻¹) for the Plynlimon experimental catchments and UK upland peatland catchments.

Table 10. DOC fluxes (gC m⁻² yr⁻¹) beneath harvest residues and the forest floor from adjacent clearfelled Sitka spruce stands on peaty podzol and peaty gley soils at Plynlimon (CEH unpublished data).

Table 11. DOC concentrations (mgC l⁻¹) in leachate beneath harvest residues and in soil waters from adjacent clearfelled Sitka spruce stands on peaty podzol and peaty gley soils at Plynlimon (CEH unpublished data).

Table 12. Likely effects of forest development on carbon fluxes in organo-mineral soils. First symbol represents direction of flux, where a negative sign represents depletion of atmospheric carbon. Second symbol represents the change in the magnitude of the flux with respect to semi-natural or extensively grazed moorland/acid grassland: 0 means no change as a result of changing land use; + means an increase in flux with respect to that measured in semi-natural moorland/grassland.
Role of HDL function and LDL atherogenicity on cardiovascular risk: A comprehensive examination

Background

High-density lipoprotein (HDL) functionality and low-density lipoprotein (LDL) atherogenic traits can describe the role of both particles on cardiovascular diseases more accurately than HDL- or LDL-cholesterol levels. However, it is unclear how these lipoprotein properties are particularly affected by different cardiovascular risk factors.

Objective

To determine which lipoprotein properties are associated with greater cardiovascular risk scores and each cardiovascular risk factor.

Methods

In two cross-sectional baseline samples of PREDIMED trial volunteers, we assessed the associations of HDL functionality (N = 296) and LDL atherogenicity traits (N = 210) with: 1) the 10-year predicted coronary risk (according to the Framingham-REGICOR score), and 2) classical cardiovascular risk factors.

Results

Greater cardiovascular risk scores were associated with low cholesterol efflux values; oxidized, triglyceride-rich, small HDL particles; and small LDLs with low resistance against oxidation (P-trend<0.05, all). After adjusting for the rest of risk factors: 1) type-2 diabetic individuals presented smaller and more oxidized LDLs (P<0.026, all); 2) dyslipidemic participants had smaller HDLs with an impaired capacity to metabolize cholesterol (P<0.035, all); 3) high body mass index values were associated with lower HDL and LDL size and a lower HDL capacity to esterify cholesterol (P<0.037, all); 4) men presented a greater HDL oxidation and lower HDL vasodilatory capacity (P<0.046, all); and 5) greater ages were related to small, oxidized, cytotoxic LDL particles (P<0.037, all).

Conclusions

Dysfunctional HDL and atherogenic LDL particles are present in high cardiovascular risk patients. Dyslipidemia and male sex are predominantly linked to HDL dysfunctionality, whilst diabetes and advanced age are associated with LDL atherogenicity.

Introduction

Low levels of high-density lipoprotein (HDL) cholesterol (HDL-C) and high concentrations of low-density lipoprotein (LDL) cholesterol (LDL-C) are traditionally related to a greater risk of suffering a cardiovascular event [1]. However, HDL functions could reflect the protective role of the lipoprotein better than HDL-C levels [2], and LDL characteristics provide further information on the residual atherogenic risk of these particles beyond mere LDL-C concentrations [3,4]. Both lipoprotein traits have been shown to be associated with high cardiovascular risk states in very diverse ways.
On the one hand, regarding HDL functions: 1) cholesterol efflux capacity (HDL capacity to pick up cholesterol from peripheral cells) has been demonstrated to be inversely related with the incidence of cardiovascular events (and shown to predict these outcomes more accurately than HDL-C concentrations) [5]; 2) deficiencies in the biological function of two enzymes related to the metabolism of cholesterol in HDLs, lecithin-cholesterol acyltransferase (LCAT, responsible for the esterification and internalization of free cholesterol after cholesterol efflux) and cholesteryl ester transfer protein (CETP, responsible for the exchange of cholesterol from HDLs to other lipoproteins), have been shown to be linked to modest increments (LCAT) or decrements (CETP) in the incidence of cardiovascular events [6,7], although the effect of modifying these activities in other studies has not been shown to be conclusive [8]; 3) the activity of paraoxonase-1 (PON1, an essential antioxidant HDL enzyme) has been inversely associated with cardiovascular disease incidence in some works [9] but not in others [10]; 4) HDLs are also thought to promote endothelial protection and are linked to a greater release of nitric oxide from endothelial cells [11], this property being transiently impaired in acute coronary events [12]; finally, 5) HDL oxidation and global lipid composition, although related to several aspects of a dysfunctional lipoprotein profile (a decreased capacity to perform HDL biological functions or a decreased HDL stability) [13-16], have not been associated with high cardiovascular risk (CVR) states as clearly as other HDL functional traits.

On the other hand, regarding LDL atherogenic characteristics: 1) circulating levels of oxidized LDLs are directly related with the incidence of coronary diseases and all-cause mortality [4,17], whilst low LDL resistance against oxidative modifications of the particle has been linked with subclinical atherosclerosis and is present in high CVR subjects [18,19]; 2) small LDL particles (a characteristic deeply interrelated with a pro-atherogenic LDL profile, which can be indirectly measured by the ratio between LDL-C and apolipoprotein B (ApoB) levels in circulation [20]) have been associated with a greater incidence of cardiovascular events [21]; and 3) compositional changes of LDL particles, such as increases in their remnant triglyceride content, tend to increase ApoB-100 instability on the LDL surface (which may lead to an inefficient binding to LDL receptors) and have been shown to be increased in coronary artery disease patients [22,23].

The aim of this study was to determine the independent associations of HDL functionality and LDL atherogenic characteristics with: 1) the 10-year predicted risk of suffering a coronary event (the Framingham-REGICOR CVR score), and 2) the most prevalent CVRFs (diabetes, dyslipidemia, excess body weight, hypertension, and smoking habit), age, and sex, in high CVR individuals.

Study population

This study was a cross-sectional analysis in two sub-samples of volunteers from the PREDIMED Study [24,25] at the baseline visit: one sample for the evaluation of HDL-related variables (N = 296) [26] and another for the assessment of LDL atherogenic traits (N = 210) [27]. The sample for the study of HDL-related parameters included the one in which the LDL-related characteristics were assessed.
In these populations, we registered the values of: 1) general clinical variables (age, sex, body weight, height, blood pressure, and biochemical profile); 2) drug use; 3) adherence to a Mediterranean diet, by means of the Mediterranean Diet Score; 4) levels of physical activity, according to the Minnesota Leisure Time Physical Activity questionnaire; and 5) smoking habit [24,28]. In individuals aged 35-74, we calculated the 10-year predicted risk of developing a future coronary event as the CVR score according to the Framingham-REGICOR equation validated for the Spanish population (considering age and sex, presence of diabetes and tobacco habit, total and HDL-C levels, and blood pressure) [29]. Type-II diabetes mellitus was defined as the presence of an abnormal glucose metabolism or use of anti-diabetic drugs. Dyslipidemia was defined as the presence of total cholesterol levels ≥200 mg/dL or use of statins, and triglyceride levels ≥150 mg/dL. Hypertension was defined as the presence of systolic blood pressure levels ≥140 mmHg, diastolic blood pressure levels ≥90 mmHg, or use of anti-hypertensive drugs. Body mass index (BMI) was calculated as the ratio between weight (kg) and height squared (m²) [24]. Volunteers provided written informed consent before entering the trial. The study protocol conforms to the ethical guidelines of the 1975 Declaration of Helsinki, was approved by the local Research and Ethics Committee, and was registered with the International Standard Randomized Controlled Trial Number ISRCTN35739639. Its details have been previously published [24,25].

HDL functionality determinations

We first isolated HDL particles from plasma by density gradient ultracentrifugation (isolated HDL fraction) [26,30] and by polyethylene glycol-induced precipitation of apolipoprotein B (ApoB)-containing lipoproteins (ApoB-depleted plasma samples) [26]. Plasma, serum, isolated HDL, and ApoB-depleted plasma samples were stored at -80°C until use. We analyzed the participants' lipid profile (triglycerides, cholesterol, HDL-C, and apolipoprotein A-I (ApoA-I)) in an ABX-Pentra 400 autoanalyzer (Horiba ABX) [26]. We determined cholesterol efflux capacity (the HDL ability to pick up the cholesterol excess from cells) in a model of human THP-1 monocyte-derived macrophages treated with ApoB-depleted plasma samples [26]. We computed the ability of HDL lipoproteins to esterify cholesterol as the percentage of esterified cholesterol in isolated HDL particles divided by the lecithin-cholesterol acyltransferase quantity in plasma [26]. We determined the activity of cholesteryl ester transfer protein (CETP) in plasma [26,30] and the arylesterase activity of paraoxonase-1 (PON1) in serum [26] using commercial kits. We assessed HDL vasodilatory capacity as the HDL-induced increment in the production of nitric oxide in a human umbilical vein endothelial cell model treated with ApoB-depleted plasma samples [26]. We determined the oxidation of HDL particles as the equivalents of malondialdehyde per mg/dL of cholesterol in ApoB-depleted plasma samples [26]. We examined the lipid composition of the isolated HDL fraction in an ABX-Pentra 400 autoanalyzer (Horiba ABX) and, from these data, we calculated the triglyceride/esterified cholesterol ratio in HDL particles ("triglycerides in HDL core") [26,30]. Finally, we assessed HDL size distribution by LipoPrint technology (Quantimetrix) in plasma [26,30]. With the percentages of large and small HDL particles (HDL2 and HDL3, respectively), we calculated the HDL2/HDL3 ratio.
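For readers who want to mirror the risk factor coding above, the short Python sketch below restates the stated clinical definitions as executable checks. It is a minimal illustration under assumptions: the field names and example values are hypothetical, and the dyslipidemia check reflects one reading of the stated criterion (total cholesterol ≥200 mg/dL or statin use, together with triglycerides ≥150 mg/dL); it is not the authors' analysis code.

```python
# Hypothetical encoding of the Methods' risk factor definitions; not the
# authors' code. Field names and the example subject are invented.
from dataclasses import dataclass

@dataclass
class Subject:
    weight_kg: float
    height_m: float
    sbp: float                      # systolic blood pressure, mmHg
    dbp: float                      # diastolic blood pressure, mmHg
    total_chol: float               # total cholesterol, mg/dL
    triglycerides: float            # mg/dL
    on_statins: bool = False
    on_antihypertensives: bool = False

    @property
    def bmi(self) -> float:
        # BMI = weight (kg) / height squared (m^2), as defined in the Methods
        return self.weight_kg / self.height_m ** 2

    @property
    def hypertensive(self) -> bool:
        return self.sbp >= 140 or self.dbp >= 90 or self.on_antihypertensives

    @property
    def dyslipidemic(self) -> bool:
        # One reading of the stated criterion; the original wording is ambiguous
        return (self.total_chol >= 200 or self.on_statins) and self.triglycerides >= 150

s = Subject(82, 1.65, 145, 88, 215, 160)
print(round(s.bmi, 1), s.hypertensive, s.dyslipidemic)   # -> 30.1 True True
```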
LDL atherogenic traits

We first isolated LDL lipoproteins from plasma samples by density gradient ultracentrifugation [27,31] and stored them at -80°C until use. From the values of the participants' lipid profile, we calculated LDL-C levels according to the Friedewald formula (whenever triglycerides were <300 mg/dL) [27,31]. We quantified ApoB in plasma in an ABX-Pentra 400 autoanalyzer (Horiba ABX) [27,31]. We measured LDL resistance against oxidation (LDL lag time) from the kinetics of formation of conjugated dienes (oxidized lipid forms) in isolated LDL samples in a pro-oxidant environment [27,31]. We assessed the oxidation of LDL lipoproteins as the equivalents of malondialdehyde per mg/dL of cholesterol in isolated LDL samples [27]. From the lipid profile values, we calculated an approximation to LDL average size (the LDL-C/ApoB ratio) [27]. We determined the lipid composition of isolated LDL particles in an ABX-Pentra 400 autoanalyzer (Horiba ABX) and, from these data, we calculated the triglyceride/total cholesterol ratio in isolated LDL samples. Finally, we assessed LDL ex vivo cytotoxicity in a THP-1 monocyte-derived macrophage model as previously described [27].

Sample size

Accepting a type I error of 0.05, a type II error of 0.2, and a 1% loss rate in a two-sided contrast, sample sizes of 196 and 140 participants provide sufficient statistical power to determine that Pearson's correlation coefficients ≥0.2 and ≥0.237 (for HDL- and LDL-related variables, respectively) were significantly different from zero. Sample sizes were increased by 50%, up to 294 and 210 subjects, to allow adjustments for different covariates.

Statistical analyses

We first assessed the distribution of continuous variables using normality plots and histograms. To study the association between lipoprotein traits and CVR, we first compared the means of HDL- and LDL-related variables among the CVR score groups (low risk: CVR score <5; moderate risk: CVR score ≥5 and <10; high risk: CVR score ≥10) using a one-way ANOVA for normally distributed variables and a Kruskal-Wallis test for non-normally distributed ones. To determine possible linear associations between the CVR score group and the means or medians of lipoprotein-related variables, we performed Pearson's or Spearman's tests, respectively, to calculate P-trend values. We assessed the differences in the values of lipoprotein characteristics due to classical CVRFs (presence of diabetes, dyslipidemia, hypertension, and tobacco use, as categorical variables; and greater values of BMI, as a continuous variable), sex, and age (continuous variable) in three multivariate linear regression models. Model 1 was non-adjusted. To determine the independent effect of each of these traits on lipoprotein characteristics, model 2 was adjusted for the rest of the previous factors, study site, adherence to the Mediterranean diet, and levels of physical activity. Finally, model 3 included HDL-C or LDL-C levels as an extra covariate, in order to exclude the effect of lipoprotein cholesterol from the previous associations. We accepted any two-sided P-value <0.05 as significant. We executed the previously described analyses in R Software, version 3.4.1 (R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria).
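The reported sample sizes can be re-derived from the standard Fisher z-transformation formula for testing a Pearson correlation against zero. The Python sketch below is an illustration under the stated assumptions (two-sided α = 0.05, power 0.80, 1% loss rate); it is not the authors' actual calculation script, although it reproduces their figures.

```python
# Sketch of the classical sample size formula for detecting a Pearson
# correlation r != 0, using the parameters stated in the Methods.
from math import ceil, log

def n_for_correlation(r: float, z_alpha: float = 1.959964,
                      z_beta: float = 0.841621, loss: float = 0.01) -> int:
    c = 0.5 * log((1 + r) / (1 - r))                 # Fisher z-transform of r
    base = ceil(((z_alpha + z_beta) / c) ** 2 + 3)   # minimum N at full follow-up
    return ceil(base / (1 - loss))                   # inflate for the 1% loss rate

for r in (0.20, 0.237):
    n = n_for_correlation(r)
    print(f"r >= {r}: N = {n}; +50% for covariates -> {ceil(n * 1.5)}")
# Output: r >= 0.2:   N = 196; +50% -> 294
#         r >= 0.237: N = 140; +50% -> 210
```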
Participants

In accordance with the high CVR profile of the volunteers in the HDL functionality and LDL atherogenicity subsamples, subjects were on average 65.9 and 65.4 years old, 49.0 and 51.4% of the participants were male, 49.0 and 47.1% were diabetic, 33.4 and 33.3% were under glucose-lowering treatment, 77.4 and 79.0% were dyslipidemic, 44.9 and 43.3% were statin users, 78.7 and 82.4% were hypertensive, 65.9 and 70.0% were under antihypertensive treatment, 44.9 and 44.8% were obese, and 12.5 and 13.8% were smokers, respectively (Table 1).

HDL functionality, LDL atherogenicity, and CVR categories

Regarding HDL-related traits, high CVR was associated with low HDL-C and ApoA-I levels, low cholesterol efflux values, high HDL oxidation, a high content of triglycerides in the HDL core (P<0.001 in the five previous cases), and low values of the HDL2/HDL3 ratio (smaller HDL size) (P = 0.002) (Fig 1). Regarding LDL-related variables, high CVR was associated with high ApoB levels (P = 0.025) (but not with significant differences in LDL-C levels, P>0.05), low LDL resistance against oxidation (low LDL lag time values) (P = 0.019), low estimated LDL size (P = 0.001), and a borderline significant trend towards high LDL oxidation (P = 0.085) (Fig 2; error bars in the figures depict standard deviations).

Individual effects of CVRFs on lipoprotein traits in high CVR subjects

As observed in Fig 3, the presence of type-II diabetes was associated with low HDL-C (P<0.001) and ApoA-I levels (P = 0.004) and low cholesterol efflux capacity values (P = 0.001). Regarding LDL properties (see Fig 4), diabetes was related to low LDL-C levels (P<0.001), greater LDL oxidation (P<0.001), lower LDL resistance against oxidation (P = 0.007), lower estimated LDL size (P<0.001), and a greater LDL triglyceride content (P = 0.032). Dyslipidemia was related to greater cholesterol and apolipoprotein levels (P<0.05, all) and high cholesterol efflux capacity values (P<0.001) (Figs 3 and 4). High BMI values were associated with low HDL-C levels (P = 0.006), a low HDL capacity to esterify cholesterol (P = 0.008), smaller HDL size (P<0.001), and a greater triglyceride content in the HDL core (P = 0.014). High BMI values were also linked to a lower estimated LDL size after adjusting for all CVRFs (P = 0.004). Hypertension was not particularly associated with an abnormal lipoprotein profile (data not shown), although it was independently linked to a borderline significant trend towards low cholesterol efflux capacity values when adjusted for all classical CVRFs (P = 0.090). Finally, smoking was also unrelated to differences in any lipoprotein characteristic (data not shown). The exact coefficients of the associations of cardiovascular risk factors with lipoprotein properties for the three models performed are available in S1 Table.

Effects of sex and age on lipoprotein traits in high CVR patients

Male sex was associated with a highly dysfunctional HDL profile (Fig 2 and S2 Table): after adjusting for all classical CVRFs, men had lower HDL-C and ApoA-I levels (P<0.001, both), lower cholesterol efflux capacity values (P = 0.003), lower CETP activities (P = 0.033), greater HDL oxidation (P = 0.018), and smaller HDL particles (P = 0.004). Regarding LDLs, after adjusting for CVRFs, male sex was linked to lower estimated LDL size (P = 0.028), and to greater LDL oxidation (P = 0.044), cholesterol content (P = 0.009), and cytotoxicity (P = 0.040). Finally, regarding HDL functionality, greater ages were related to larger HDL particles (P<0.001).
Despite greater age being linked to low LDL-C and ApoB levels (P<0.001), it was also independently associated with greater LDL oxidation (P = 0.009), a greater LDL triglyceride content (P = 0.037), and a greater LDL cytotoxic potential (P = 0.030) when adjusting for all classical CVRFs.

Discussion

Our data show that dysfunctional HDL and atherogenic LDL particles are associated with greater CVR scores and are particularly impaired in certain high CVR subjects (diabetic, dyslipidemic, with excess weight, male, and older), in the first systematic, comprehensive association analysis performed to date. HDL functions are intimately related to CVR according to previous human studies. In our dataset, we have observed an association between high CVR and low cholesterol efflux, high HDL oxidation, a high triglyceride content in the HDL core, and smaller HDL size. Cholesterol efflux capacity has already been shown to be related to subclinical atherosclerosis and the incidence of cardiovascular diseases [5,32]. Regarding HDL oxidation, it has previously been associated with high CVR states [13] as well as with decreased cholesterol efflux capacity values [14]. A high triglyceride content in the HDL core has been shown to contribute to HDL instability: it leads to an imbalance in the electrostatic relationships of the lipoprotein, promoting the detachment of ApoA-I from the HDL surface [15]. This could be associated with an impaired HDL function. Finally, our data also agree with previous reports of low levels of large HDLs in high CVR states [33].

Regarding LDL atherogenicity properties, LDL particles with smaller estimated size and more prone to become oxidized were associated with greater CVR scores. This concurs with previous evidence: small and oxidized LDL particles have been related to a greater coronary risk [3,4]. A lower LDL resistance against oxidation (present in coronary disease patients [19]) could facilitate LDL oxidation. In addition, ApoB levels, but not LDL-C concentrations, appeared to be significantly increased in high CVR states. This agrees with the hypothesis that alternative measurements of LDL quantity in circulation (such as ApoB levels or LDL particle number) could be more accurate and better reflect the CVR derived from these atherogenic lipoproteins [34].

Diabetes was strongly associated with dysfunctional lipoprotein characteristics in our cohort: it was associated with oxidized, small, triglyceride-rich LDL particles and with impaired cholesterol efflux capacity (although this association was lost when adjusting for HDL-C levels). Diabetes is strongly related to a suboptimal lipid profile [35] and to a pro-oxidant, pro-inflammatory status that could contribute to promoting HDL dysfunctionality [36] and LDL atherogenicity [37]. The fact that there were 5% fewer dyslipidemic patients and 9.3% more individuals treated with statins in the group of diabetic individuals could contribute to explaining their lower cholesterol levels. Once adjusted for the effect of HDL-C concentrations, being dyslipidemic was independently associated with greater CETP activity, a lower HDL capacity to esterify cholesterol, and smaller HDL size, in agreement with previous work [38]. Dyslipidemia was also independently linked to LDL particles richer in triglycerides. Some authors consider that this may be linked to a subtype of markedly pro-atherogenic, triglyceride-rich remnant lipoproteins [39]. Other classical CVRFs were also shown to impair lipoprotein characteristics in our dataset.
On the one hand, increased BMI values were independently associated with lower HDL-C levels, a lower HDL ability to esterify cholesterol, and triglyceride-rich, small HDL particles, as well as with low estimated LDL size. Some of these lipoprotein characteristics had already been associated with excess body weight [40]. In addition, hypertriglyceridemic states in overweight or obesity could facilitate the accumulation of triglycerides in HDL particles [40], possibly leading to the formation of more dysfunctional lipoproteins [15]. On the other hand, although the associations were non-significant, our results also suggest that hypertension could be related to a lower cholesterol efflux capacity and to greater LDL oxidation, potential mechanisms to be addressed in future trials.

Men are known to be more strongly affected by cardiovascular diseases than women [41], hence a potentially deleterious effect of male sex on lipoprotein traits could be expected. In our data, being male was independently associated with low HDL-C levels and, once the confounding effect of HDL-C concentrations was considered, it was also linked to greater HDL oxidation and a reduced HDL capacity to promote the endothelial release of nitric oxide, pointing to two potential novel contributors to the increased CVR in men that should be checked in further studies. In addition, male sex was linked to high concentrations of oxidized, small, cytotoxic LDL lipoproteins, but the significance of these associations was blunted when adjusting for LDL-C levels. These data agree with previous works reporting increased levels of small [42] and oxidized [43] LDL particles in men.

Aging has been traditionally associated with lower cholesterol levels, particularly in LDL, in parallel with a time-dependent increase in CVR [41,44]. Our data suggest that despite this cholesterol decrease, greater age is independently associated with a highly atherogenic LDL profile (with oxidized, small, triglyceride-rich, cytotoxic LDL particles). The possible conversion of LDL into pro-atherogenic particles could explain why CVR keeps increasing throughout life.

The main strength of the present study is that it has comprehensively assessed the associations of HDL functionality and LDL atherogenicity characteristics with HDL-C and LDL-C levels and with the main factors modulating CVR. Moreover, all the relationships described in our regression models have been adjusted for the effect of the rest of the CVRFs and modulators. However, there are also limitations. First, its design was cross-sectional, which did not allow us to infer causality; we could only establish associations between lipoprotein characteristics and CVRFs and modulators that should be addressed in future prospective studies. Second, our study subjects were older and at high CVR; therefore, results cannot be extrapolated to the general population. To partially correct this limitation, we considered these factors as covariates in the linear regression analyses. Third, we could not perform the association analyses between CVR scores and HDL- and LDL-related characteristics in individuals aged ≥75, since the Framingham-REGICOR equation only allows the calculation of CVR scores in subjects 35 to 74 years old. Fourth, due to availability and technical issues, we were unable to analyze the HDL ability to esterify cholesterol and CETP and PON1 activities, HDL size, and HDL vasodilatory capacity in 67, 37, and 60 volunteers, respectively.
Finally, we could not detect powerful associations between hypertension or smoking and lipoprotein properties, since only a small proportion of our volunteers were non-hypertensive (17.6-21.3%) or smokers (12.5-13.8%).

Conclusions

High CVR scores were associated with low cholesterol efflux capacity values, high HDL oxidation, triglyceride-rich HDL cores, small HDL size, small estimated LDL size, and low LDL resistance against oxidation. Among high CVR subjects, being dyslipidemic and male was preferentially associated with a dysfunctional HDL profile, while being diabetic and older was especially related to pro-atherogenic LDL particles. To date, this is the first study to comprehensively analyze the independent associations between CVR and HDL- and LDL-related variables in humans. Our data reflect the pertinence of assessing HDL function and LDL atherogenicity in clinical studies, since lipoproteins can provide much more information beyond HDL-C and LDL-C levels.

Supporting information S1
2019-07-02T13:47:47.127Z
2019-06-27T00:00:00.000
{ "year": 2019, "sha1": "7e1f931ba2f1333efadf908c193f1c68608cdcae", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1371/journal.pone.0218533", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7e1f931ba2f1333efadf908c193f1c68608cdcae", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
5849725
pes2o/s2orc
v3-fos-license
Characterization of the Role of Tumor Necrosis Factor Apoptosis Inducing Ligand (TRAIL) in Spermatogenesis through the Evaluation of Trail Gene-Deficient Mice

TRAIL (TNFSF10/Apo2L) is a member of the tumor necrosis factor (TNF) superfamily of proteins and is expressed in human and rodent testis. Although the functional role of TRAIL in spermatogenesis is not known, TRAIL is recognized to induce apoptosis via binding to its cognate receptors, DR4 (TRAIL-R1/TNFRSF10A) and DR5 (TRAIL-R2/TNFRSF10B). Here, we utilize Trail gene-deficient (Trail−/−) mice to evaluate the role of TRAIL in spermatogenesis by measuring testis weight, germ cell apoptosis, and spermatid head count at postnatal day (PND) 28 (pubertal) and PND 56 (adult). Trail−/− mice have significantly reduced testis to body weight ratios as compared to wild-type C57BL/6J at both ages. Also, Trail−/− mice (PND 28) show a dramatic increase in basal germ cell apoptotic index (AI, 16.77) as compared to C57BL/6J (3.5). In the testis of adult C57BL/6J mice, the AI was lower than in PND 28 C57BL/6J mice (2.2). However, in adult Trail−/− mice, the AI was still higher than that of controls (9.0), indicating a relatively high incidence of germ cell apoptosis. Expression of cleaved caspase-8 (CC8) and cleaved caspase-9 (CC9) (markers of the extrinsic and intrinsic apoptotic pathways, respectively) revealed a two-fold increase in the activity of both pathways in adult Trail−/− mice compared to C57BL/6J. Spermatid head counts in adult Trail−/− mice were dramatically reduced, by 54%, compared to C57BL/6J, indicating these animals suffer a marked decline in the production of mature spermatozoa. Taken together, these findings indicate that TRAIL is an important signaling molecule for maintaining germ cell homeostasis and functional spermatogenesis in the testis.

Introduction

Spermatogenesis is a finely tuned process in which spermatozoa are formed from spermatogonia in the testis. During spermatogenesis, germ cell apoptosis serves two important roles: removing damaged germ cells, and limiting the number of developing germ cells to match the testes' supportive capacity [1,2]. A higher apoptotic rate is also observed in pubertal animals during the initial establishment of spermatogenesis, also called the first wave of spermatogenesis [3], likely as a mechanism to limit this population during this period of rapid germ cell production. Tumor necrosis factor (TNF)-related apoptosis-inducing ligand (TRAIL/Apo2L/TNFSF10) is a member of the TNF superfamily of proteins involved in cancer development and autoimmune diseases [4,5], and is expressed in human and rodent testis, including germ cells, Sertoli cells, and Leydig cells [6,7,8,9]. TRAIL is a Type II transmembrane protein that binds to either of two receptors in humans, DR4 (TRAIL-R1/TNFRSF10A) and DR5 (TRAIL-R2/TNFRSF10B) [10,11]. In mice, there is only one TRAIL death receptor, TRAIL-R (MK/mDR5), and it is homologous to both human death receptors [12]. The manner by which TRAIL initiates the extrinsic apoptotic signaling pathway is well characterized. Briefly, TRAIL binding to DR4/5 promotes the recruitment of FADD (Fas-associated protein with death domain), which in turn allows the recruitment and activation of the initiator caspase-8 in a complex known as the death-inducing signaling complex (DISC). Caspase-8 enzyme activation cleaves other downstream effector caspase family members to ultimately lead to the apoptotic elimination of the cell [13,14].
Another means to induce cell apoptosis is through the intrinsic apoptotic signaling pathway. This pathway is triggered by the release of cytochrome c from mitochondria and its binding to monomeric apoptotic protease activating factor-1 (APAF-1) to form the apoptosome complex. The formation of this complex results in the cleavage and activation of the initiator caspase-9. The cleavage and activation of executioner caspases are similar to that seen with the extrinsic pathway [15,16]. Recent studies have focused on the potential application of TRAIL in autoimmune diseases, anti-cancer therapy [4,5], and Type I diabetes [17], although the role of TRAIL in the immune system and its functional mechanisms are still controversial [18]. Previously, we showed that the combined addition of recombinant TRAIL and an anti-DR5 antibody (MD5) enhanced the ability of TRAIL to cluster and activate DR5, resulting in significant increases in apoptosis in the GC-2spd(ts) germ cell line, thereby implicating a likely role for TRAIL in the modulation of germ cell apoptosis in the testes in vivo [19]. Even though death ligands have been implicated in the control of germ cell apoptosis [20,21,22], few studies have focused on the role of the TRAIL-induced signaling system in regulating apoptosis during spermatogenesis in the testis. The aim of this study was to elucidate the role of TRAIL in the regulation of the first wave of spermatogenesis during puberty (postnatal day 28, PND 28) and of spermatogenesis in mature-aged mice (PND 56). Here, we use TRAIL gene-deficient mice to examine the importance of the TRAIL signaling system in spermatogenesis and its ability to induce apoptosis in the testis during these two developmental periods.

Trail−/− Mice have Significantly Reduced Testis to Body Weight Ratios

The testis and body weights of Trail gene-deficient (Trail−/−) mice and wild-type C57BL/6J mice at both PND 28 and 56 are shown in Table 1. Trail−/− mice had significantly smaller testes than C57BL/6J mice at both PND 28 and PND 56 (Table 1, Fig. 1). Compared to C57BL/6J mice, Trail−/− mice had a 54% decrease in the production of mature spermatids.

Trail−/− Mice have Higher Rates of Germ Cell Apoptosis at Pubertal and Adult Ages

TUNEL analysis was performed to determine the rate of germ cell apoptosis in Trail−/− and C57BL/6J mice at PND 28 and PND 56 (Fig. 2A-D). In C57BL/6J mice, the apoptotic index (AI) was 3.50 ± 0.98 at PND 28 and 2.21 ± 0.07 at PND 56, while the AI of Trail−/− mice was 16.57 ± 0.31 at PND 28 and 9.045 ± 0.06 at PND 56 (Fig. 2E). Comparatively, Trail−/− mice displayed a higher AI than C57BL/6J mice at both PND 28 and PND 56 (4.7- and 4.1-fold, respectively). These results indicate that Trail−/− mice have a higher basal level of germ cell apoptosis at both the pubertal and adult ages. This elevated level of apoptosis likely underlies the reduction of spermatid head counts in the adult testis.

Trail−/− Mice Exhibit a Delay in Spermatogenesis and Alterations in Meiosis at the Pubertal Age

Testicular cross sections were stained using PAS-H for the histological evaluation of pubertal mice (Fig. 3 and 4). At the pubertal age, C57BL/6J mice had an average of 4.58 ± 0.46% of tubules containing meiotic germ cells (Fig. 4C), while in Trail−/− mice these events were significantly increased (15.39 ± 1.41%, Fig. 4C). Trail−/− mice had a 3.4-fold higher incidence of meiotic figures at the pubertal age.
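As a quick arithmetic check, the fold differences quoted above follow directly from the reported group means. A minimal Python sketch (using the means only; the ± SEM values are omitted):

```python
# Recomputing the apoptotic index (AI) fold differences from the reported
# group means; the values are taken from the Results above.
ai = {
    ("C57BL/6J", 28): 3.50, ("Trail-/-", 28): 16.57,
    ("C57BL/6J", 56): 2.21, ("Trail-/-", 56): 9.045,
}
for pnd in (28, 56):
    fold = ai[("Trail-/-", pnd)] / ai[("C57BL/6J", pnd)]
    print(f"PND {pnd}: {fold:.1f}-fold higher AI in Trail-/- mice")
# -> PND 28: 4.7-fold; PND 56: 4.1-fold, matching the reported values
```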
At the adult age, Trail−/− mice had a relatively higher number of tubules containing meiotic figures (8.13 ± 0.32%) compared to C57BL/6J mice (4.59 ± 0.45%). Even when compared to C57BL/6J, Trail−/− mice had a greater incidence of meiotic figures (1.47-fold). These results indicate that an alteration in the meiosis of germ cells occurred in Trail−/− mice. Also, the formation of mature spermatid heads at PND 28 can act as an indicator of normal physiology. Although some tubules in the wild-type mice were near completion of the first wave of spermatogenesis and contained elongated spermatids, the tubules in the Trail−/− mice (Fig. 3E-F) appeared to be developmentally delayed. When comparing tubules at the same stage, mature spermatid subtypes were present in C57BL/6J mice, but lacking in Trail−/− mice (Fig. 3G-H).

Trail−/− Mice Testis Show Genotype Specific Changes in Intrinsic Apoptotic Proteins, but not in Extrinsic Apoptotic Proteins

Since TUNEL analysis revealed that the basal level of germ cell apoptosis is higher in Trail−/− mice than in C57BL/6J wild-type mice, it is possible that either another death receptor/ligand, such as FasL, is induced or that the intrinsic 'mitochondrial' apoptotic signaling system has become activated. To gain insights into the participation of either the extrinsic or intrinsic signaling systems in mediating the high observed basal germ cell apoptosis in the Trail−/− testis, characteristic proteins involved in the extrinsic and intrinsic apoptotic pathways were examined by immunohistochemistry. Western blot analyses of FasL, another death ligand previously shown to cause germ cell apoptosis [20], were performed to assess whether Trail−/− mice have a compensatory increase in this death ligand, which would account for the higher basal levels of apoptosis seen in the testis of PND 28 and 56 mice. In contrast to the induction of TRAIL observed in the testis of FasL gene-deficient mice that we previously described [20], no significant differences in the expression of FasL were observed between the C57BL/6J and Trail−/− mice at either PND 28 or PND 56 (Fig. 5).

Death receptor-mediated signaling activation was also evaluated through the detection of the cleaved form of caspase-8 (CC8, Fig. 6A), the prototypical 'initiator' caspase for the extrinsic signaling pathway. Trail−/− mice were observed to have a greater number of CC8-positive tubules than C57BL/6J mice at both PND 28 and PND 56. At PND 28, 8.04% of the tubules in Trail−/− mice were CC8-positive, whereas C57BL/6J mice had only 2.07% positive tubules. By PND 56, the percentage of positive tubules in Trail−/− mice dropped to 6.08%, while wild-type mice of the same age maintained a similar proportion of CC8-positive tubules (2.83%). The activation of the intrinsic apoptotic signaling pathway was assessed through detection of the cleaved form of caspase-9 (CC9, Fig. 6B). Interestingly, Trail−/− mice also had more CC9-positive tubules than wild-type mice at PND 28 and PND 56. At PND 28, Trail−/− mice had a basal level of CC9-positive tubules (14.70%) that is 7-fold higher than age-matched C57BL/6J mice (2.03%). By PND 56, the number of CC9-positive tubules in the Trail−/− mice decreased slightly (10.45%), while C57BL/6J mice remained at a low level of CC9 activity (3.93%).
Taken together, the observed increases in both CC8 and CC9 in Trail−/− mice indicate that these mice have a more robust basal activity of both the extrinsic and intrinsic apoptotic pathways, which together underlies the mechanism accounting for the increased apoptotic index observed in Trail−/− mice at both ages.

Discussion

The data in this manuscript show an increase in the incidence of germ cell apoptosis in TRAIL gene-deficient mice and that the loss of TRAIL disrupts the first wave of spermatogenesis as well as later cycles. Trail−/− mice in this study carry a partial deletion of exon 2 and a complete deletion of exon 3 in the Trail gene [23]. It has been reported that a HeLa cell line transfected with two alternative splicing transcripts of TRAIL (TRAIL-b and TRAIL-c) has a decrease in DNA fragmentation and cell death. This cell line harbors both TRAIL-b and TRAIL-c, resulting in a truncated TRAIL protein lacking the extracellular domain [24]. Conversely, our results show an increase of germ cell apoptosis in pubertal (PND 28) and adult (PND 56) Trail−/− mice compared to C57BL/6J mice of the same age (Fig. 2E). This increase in germ cell apoptosis may result from compensation by another death receptor signaling family member, such as FasL, which is known to participate in the instigation of germ cell apoptosis in the testis [20], or from a possible compensatory activity of the intrinsic apoptotic pathway in order to maintain homeostasis during testis development (Fig. 6). Studies in hepatocytes (primary hepatocytes and HepG2) show that TRAIL contributes to enhanced intrinsic pathway signaling and subsequent apoptosis [25,26]. When TRAIL fails to trigger apoptosis in these cells, the MAP kinase JNK can still be activated [25]. In this study, our results indicate that the loss of TRAIL signaling leads to the activation of the intrinsic apoptotic pathway in the testis (Fig. 6B).

TRAIL is known to induce apoptosis in transformed cells, yet soluble TRAIL generally has little pro-apoptotic activity in non-transformed cells. However, our laboratory has reported that the addition of TRAIL and an anti-DR5 monoclonal antibody, MD5-1, can induce a synergistic increase of apoptosis in p53-permissive GC-2spd(ts) cells [19].

(Figure 5 caption: Western blot analysis of FasL expression in wild-type C57BL/6J and Trail−/− mice at PND 28 and PND 56. Total protein from two sets of PND 28 and PND 56 whole testis tissue was analyzed using primary antibodies against FasL, with α-tubulin as the loading control. Values represent the mean ± SEM, with an asterisk identifying a significant difference from control (*p < 0.05, Student's t-test). doi:10.1371/journal.pone.0093926.g005)

Although TRAIL is expected to induce apoptosis in a majority of tumor cells, with less of an effect in most normal cells and tissues [7], its function in normal, non-transformed tissues is not clear. Several reports show that TRAIL has the ability to activate mitogen-activated protein (MAP) kinase, nuclear factor (NF)-κB, and protein kinase B (PKB or Akt), which are all involved in pro-survival or non-apoptotic signaling [27,28,29]. Recent studies have suggested that suppressing apoptosis with a caspase inhibitor can enhance TRAIL's stimulation of the NF-κB pathway [29].
Also, it has been observed that TRAIL-R can enhance cholangiocarcinoma metastasis by activation of the NF-κB signaling pathway [30]. It is possible that the loss of TRAIL expression in the testis causes an elevation of apoptosis when TRAIL cannot trigger these non-apoptotic pathways. In addition to examining the function of TRAIL in normal testis development and spermatogenesis, we evaluated the testes of Trail−/− mice during the pubertal period (PND 28) and adulthood (PND 56). C57BL/6J mice had a higher testis to body weight ratio at PND 28 and PND 56 (Table 1). No significant differences were observed in the body weight of C57BL/6J and Trail−/− mice at either age, but Trail−/− mice showed a significant decrease in testis weight at both PND 28 and PND 56 (Table 1). Bclw, a member of the Bcl2 gene family, is an anti-apoptotic protein involved in apoptosis during development of the testis. A similar decrease in testis weight was found in Bclw−/− mice [31]. These findings suggest that the loss of a pro-apoptotic or anti-apoptotic protein may lead to compensation by the extrinsic and intrinsic apoptotic pathways following a disruption of homeostasis during development of the testis.

In order to determine whether the loss of TRAIL can alter reproductive capacity, the number of mature spermatid heads was evaluated in C57BL/6J and Trail−/− mice at PND 56. A reduction in mature spermatid head counts was found in Trail−/− mice compared to C57BL/6J mice (Fig. 1). The low value observed at PND 56 may be due to the continued high germ cell apoptotic frequency seen at PND 28 (Fig. 2). It is possible that the reduced number of spermatid heads results from altered meiosis during spermatogenesis at the pubertal age, causing the lower spermatid head counts observed in adult Trail−/− mice (Fig. 3). The delayed spermatogenesis in Trail−/− mice at PND 56 was not a prolonged effect, yet the Trail−/− mice still have more meiotic germ cells than wild-type mice. Mice deficient in Bclw or Bax often suffer a loss of fertility [31,32], but the decreased number of spermatid heads did not dramatically influence the reproductive capacity of Trail−/− mice. Although the reduction of spermatid heads in Trail−/− mice does not significantly alter the ability of these mice to reproduce, the time to litters is longer than in C57BL/6J mice (data not shown). Itch, a known E3 ligase, targets proteins in the immune system, including the anti-apoptotic protein cellular FLICE-like inhibitory protein (cFLIP). Itch can degrade cFLIP to prevent its inhibition of caspase-8 [33]. Similarly high basal apoptotic rates in the testis were also observed in itchy−/− mice at both pubertal and adult ages [34]. An important difference is that FasL−/− mice only show high basal apoptosis at the pubertal age, not in adulthood [20]. Recent reports indicate that Fas regulates spermatocyte apoptosis in rats during the first wave of spermatogenesis as a mechanism to establish an appropriate population size [21]. This suggests that TRAIL and ITCH may play more dominant roles than FasL in controlling the population of germ cells after the first wave of spermatogenesis, while FasL may be more critical during the pubertal period [20]. Histological analysis of the adult Trail−/− mice revealed some alterations in the seminiferous epithelium.
At PND 28, Trail−/− mice had more tubules with meiotic figures and fewer tubules with late-stage spermatids compared to C57BL/6J mice, suggesting that the loss of TRAIL can influence cell division (Fig. 3-4). A similar phenomenon is also found in itchy−/− mice, though the detailed mechanism is still unclear [34]. If the increase in meiotic spermatocytes is due to a delay or arrest of maturation, the progression of normal spermatogenesis will be altered. At PND 56, the number of meiotic figures decreased to 8.13% in the Trail−/− mice, but this proportion is still greater than that observed in C57BL/6J mice (1.8-fold). Although adult Trail−/− mice can still produce mature and functional spermatozoa, the number of sperm is lower than in C57BL/6J mice. This suggests that the meiotic delay may lengthen spermatogenesis, causing fewer mature spermatids to form. In addition, the high apoptotic rate and low testis weight in PND 56 Trail−/− mice may be influenced by this meiotic delay. Taken together, the results presented in this study provide novel evidence that TRAIL is important in regulating germ cell apoptosis during the first wave of spermatogenesis, and in maintaining testicular germ cell homeostasis at later ages.

Ethics Statement

All procedures involving mice were performed in accordance with the guidelines of the University of Texas at Austin's Institutional Animal Care and Use Committee (IACUC), in compliance with guidelines established by the National Institutes of Health. The animal experiments described in this manuscript were specifically approved by the IACUC (approval # AUP-2012-00046).

Animals

All mice used in the experiments were maintained at The University of Texas at Austin's Animal Resource Center and were housed at a constant temperature (22 ± 0.5°C) and 35-70% humidity with a 12L:12D photoperiod. Mice were given standard lab chow and water ad libitum. Breeding pairs of wild-type C57BL/6J mice were purchased from The Jackson Laboratory (Bar Harbor, ME). Breeding pairs of Trail gene-deficient (Trail−/−) mice were provided by Amgen Inc. (Thousand Oaks, CA) [35]. Two ages of mice were selected for this study: postnatal day (PND) 28, representing the pubertal period, and adult PND 56. The pubertal age was selected since it is known that an increased incidence of germ cell apoptosis occurs during this developmental period [3]. Mice were killed at PND 28 and PND 56 by CO2 inhalation followed by cervical dislocation, and the body and testis weights were recorded. The testes were rapidly removed and either frozen in liquid nitrogen and stored at −80°C for protein analysis, or immersion-fixed overnight in Bouin's solution (Polysciences, Inc., Warrington, PA), washed in 70% ethyl alcohol saturated with Li2CO3, and embedded in paraffin for histology analysis.

Genotyping PCR and Primers

Wild-type C57BL/6J and Trail−/− mice were confirmed using genotyping PCR with primers specific for the Trail gene [35]. The mutant TRAIL allele was detected by PCR analysis of genomic DNA from tail lysates, using primers 5′-AAA GAC GGA TGA GGA TTT CTG GG-3′ and 5′-GAC AGA ACA CCA TAT TGC TGG CG-3′ specific to the TRAIL/Apo2L sequence, and primers 5′-GCC CTG AAT GAA CTG CAG GAC G-3′ and 5′-CAC GGG TAG CCA ACG CTA TGT C-3′ specific to the neomycin sequence. Genotyping PCR was performed for 35 cycles of 94°C for 1 minute, 66°C for 1 minute, and 72°C for 40 seconds, followed by a final extension at 72°C for 5 minutes.
The wild-type primers result in a single 240 bp band, while the mutant primers result in a single 520 bp band (data not shown).

Physiological Characterization

In order to characterize the basic testicular phenotype in Trail−/− mice during development, pubertal (PND 28) and adult (PND 56) ages were selected. The testes weights were calculated for each mouse and represented as the average of the left and right testes. The testis/body weight ratio was computed by dividing the average testes weight by the body weight. A minimum of 18 mice was used for each genotype in each group.

Testicular Spermatid Head Counts

Testicular spermatid head counts in PND 56 mice were performed as previously described, with slight modifications [20,36]. Briefly, testes were homogenized in a solution containing 0.9% (w/v) sodium chloride (NaCl) and 10% (v/v) dimethyl sulfoxide (DMSO). Homogenization-resistant spermatid heads were counted on a hemocytometer using a Nikon E800 microscope (Nikon Instruments Inc., Melville, NY). The average number of spermatid heads was determined from 9 C57BL/6J mouse testes and 8 Trail−/− mouse testes. Each testis sample was counted 3 times. The daily sperm production per testis was calculated by using 4.84 days as the time divisor [37].

Testicular Histology and Meiotic Quantification

Cross sections (5 μm) of paraffin-embedded testes were evaluated for morphological changes using Periodic Acid-Schiff-Hematoxylin (PAS-H) staining. All slides were viewed on a Nikon E800 microscope and images were captured using a Nikon digital DS camera (Nikon Instruments Inc., Melville, NY) or an Aperio ScanScope system (Aperio Technologies, Vista, CA). The method for assessing meiotic figures is based upon criteria detailed in Histological and Histopathological Evaluation of the Testis (LD Russell et al.), as modified by Dr. Yokonoshi [39] and by Dr. Dwyer in one of our previous publications [34]. In short, due to a lack of elongated spermatids in young animals, it is difficult to precisely identify tubule stage. Some tubules appear to be at Stage 12, the only stage at which meiosis would normally be expected, but in fact contain more than three cell types, suggesting these tubules are more likely in Stage 10 to Stage 2/3. Therefore, if a tubule contained cells that were undergoing meiosis, we labeled that tubule as having meiotic figures, rather than stating it was a Stage 12 tubule. The numbers of mice used for the evaluation of meiotic quantification were: PND 28, C57BL/6J mice (n = 12) and Trail−/− mice (n = 14); PND 56, C57BL/6J mice (n = 12) and Trail−/− mice (n = 15).

Total Protein Extraction and Western Blot Analysis

Detailed descriptions of total protein preparation from mouse testis and Western blot analysis have been published previously [38]. Briefly, total protein was collected from 2 sets of whole testes homogenized in RIPA buffer, and the concentration was determined using the Bio-Rad DC Protein Assay (Bio-Rad Life Science, Hercules, CA, #500-0111). For each sample, 30 μg of total protein was separated using a 4-12% NuPAGE gradient gel (Life Technologies, Grand Island, NY), transferred to a PVDF membrane (Life Technologies, Grand Island, NY), and blocked using a 5% milk solution.
Total cellular proteins were detected using primary antibodies against FasL (Santa Cruz Biotechnology, Inc., Dallas, TX, sc-956, 1:2,000) and α-tubulin (Cell Signaling Technology Inc., Danvers, MA, #2144, 1:5,000), coupled with a horseradish peroxidase-conjugated secondary antibody (Cell Signaling Technology Inc., Danvers, MA, #7074, 1:5,000). The ECL chemiluminescent substrate (GE Healthcare Bio-Sciences, Pittsburgh, PA) was used as the detection reagent, with α-tubulin as the internal control for gel loading. All experiments were performed in triplicate and repeated at least three times.

Statistical Analysis

In this study, the minimum number of animals necessary to achieve statistical significance was determined by statistical power analysis (α = 0.05, β = 0.05) [40,41]. Statistical analysis was performed using Prism 5 (GraphPad Software, Inc., La Jolla, CA). Statistical results are presented as individual means ± SEM. The data were subjected to a Student's t-test. Comparisons were considered statistically significant when p < 0.05.
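To make the spermatid-head arithmetic concrete, the sketch below implements the daily sperm production estimate from the Methods (homogenization-resistant spermatid heads per testis divided by the 4.84-day time divisor [37]). The head count used in the example is hypothetical, not a value reported in this study.

```python
# Daily sperm production (DSP) per testis, as described in the Methods:
# DSP = homogenization-resistant spermatid heads / 4.84 days [37].
TIME_DIVISOR_DAYS = 4.84

def daily_sperm_production(spermatid_heads_per_testis: float) -> float:
    return spermatid_heads_per_testis / TIME_DIVISOR_DAYS

# Hypothetical count of 2.0e7 heads per testis, for illustration only:
print(f"{daily_sperm_production(2.0e7):.2e} sperm/testis/day")  # -> 4.13e+06
```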
2017-04-13T01:10:28.349Z
2014-04-15T00:00:00.000
{ "year": 2014, "sha1": "b87e728e7612e6179531f717ab59a16f8c3f4eec", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1371/journal.pone.0093926", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b87e728e7612e6179531f717ab59a16f8c3f4eec", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
16747208
pes2o/s2orc
v3-fos-license
Pro- and Anti-Inflammatory Cytokines Release in Mice Injected with Crotalus durissus terrificus Venom

The effects of Crotalus durissus terrificus venom (Cdt) were analyzed with respect to susceptibility and inflammatory mediators in an experimental model of severe envenomation. BALB/c female mice injected intraperitoneally showed susceptibility to Cdt, with changes in specific signs, blood biochemistry, and inflammatory mediators. The venom induced a reduction of glucose and urea levels and an increment of creatinine levels in serum from mice. Significant differences were observed in the time-course of mediator levels in sera from mice injected with Cdt. The maximum levels of IL-6, NO, IL-5, TNF, and IL-4 and IL-10 were observed at 15 min, 30 min, 1 hour, 2 hours, and 4 hours post-injection, respectively. No difference was observed for levels of IFN-γ. Taken together, these data indicate that envenomation by Cdt regulates both pro- and anti-inflammatory cytokine responses in a time-dependent manner. Serum from mice injected with Cdt revealed a pro-inflammatory dominance during the first two hours. However, with increasing time an increase of anti-inflammatory cytokines was observed and the balance shifted toward anti-inflammatory dominance. In conclusion, the observation that Cdt affects the production of pro- and anti-inflammatory cytokines provides further evidence for the role played by Cdt in modulating the pro-/anti-inflammatory cytokine balance.

Introduction

Snake bites represent a serious public health problem in developing countries due to their high incidence, severity, and sequelae [1]. In Brazil, fatal cases of bites involving Crotalus durissus terrificus (Cdt) are high, corresponding to 72% of cases not submitted to specific serum treatment and to 11% of cases submitted to specific treatment [2]. This venom contains a variety of toxic proteins including crotoxin, crotamine, gyroxin, convulxin, and a thrombin-like enzyme, and produces serious complications, such as neurotoxicity, respiratory paralysis, hypotension, coagulation disorders, myotoxicity, and acute renal failure [3], with possible additional heart and liver damage [4][5][6]. Envenomation involves a serious, abnormal condition, akin to that which occurs when an overwhelming infection leads to low blood pressure and low blood flow. The victims may exhibit serious complications, such as disseminated intravascular coagulation, multiple organ failure, and death. Multiple-organ failure reflects endothelial cell injury, oedema formation, and cell sequestration, together with an excessive systemic host inflammatory response that is largely mediated by complex immunologic processes. A potent, complex immunologic cascade ensures a prompt protective response to venom in humans and experimental animals. Although activation of the immune system during envenomation is generally protective, septic shock develops in a number of patients as a consequence of an excessive or poorly regulated immune response to the injuring agent. This imbalanced reaction harms the host through a maladaptive release of endogenously generated mediators. A successful immune response depends on the activation of an appropriate set of immune effector functions, and the antigenic load may determine the differentiation of precursor T helper (Th0) lymphocytes into Th1 and Th2 cells [7]. These two subsets of Th cells differ in their effector functions and mainly in the repertoire of cytokines that they secrete in response to antigenic stimulation.
Th1 cells promote cell-mediated effector responses, whereas Th2 cells promote B cell-mediated humoral responses. Cytokines produced by Th1 cells include interferon gamma (IFN-γ), interleukin-2 (IL-2), IL-12, and tumour necrosis factor beta (TNF-β), and constitute a proinflammatory cytokine profile. Those produced by Th2 cells, termed anti-inflammatory cytokines, include IL-4, IL-5, IL-6, and IL-10. There are also some cytokines, such as IL-13 and TNF-α, which are common to both subsets [8]. Many mechanisms are involved in the pathogenesis of envenomation, including the release of cytokines. The cytokines are divided into two important groups: the proinflammatory, such as IL-1 [7], IL-6 [8,9], and TNF-α [10,11], and the anti-inflammatory, such as IL-10. Anti-inflammatory cytokines can have a negative impact on resistance to infection, and in septic mice a reduction of IL-10 levels improves survival [12][13][14]. Cytokine production in envenomation has been widely studied, and it seems that both pro- and anti-inflammatory cytokines are overproduced in sepsis syndrome. However, their clinical significance and prognostic value have not been elucidated [15][16][17][18][19]; it seems that a complex network of interactions between different cytokines and possibly other components of the immune response takes place during severe infections. There are accumulating data suggesting that an equilibrium between the pro- and anti-inflammatory responses is important for the final outcome of victims with severe envenomation [18,19]. Another inflammatory mediator is nitric oxide (NO), an important free radical serving as a second messenger in processes including neurotransmission and the maintenance of vasodilator tone and arterial pressure; it has been suggested that cytokine-mediated circulatory shock is caused by activation of the inducible isoform (type II) of NOS [20]. In biological systems, nitric oxide decomposes to nitrite and nitrate, and cytokine-mediated NO release increases the concentrations of nitrite/nitrate. Modifications in nitrite and nitrate production have been associated with several conditions: severe envenomation [19], septic shock, hypertension, and atherosclerosis [21].

The purposes of the present study are: (a) to evaluate the susceptibility to the toxic effects of Cdt; (b) to determine glucose, creatinine, and urea levels following injection with Cdt; (c) to investigate the changes in serum levels of proinflammatory and anti-inflammatory cytokines in an experimental model of severe envenomation induced in mice by Cdt; and (d) to determine the ratios of pro-/anti-inflammatory cytokines in sera from mice injected with Cdt.

Venom

Lyophilized venom of Crotalus durissus terrificus (Cdt) was obtained from the Laboratory of Herpetology, Instituto Butantan, São Paulo, Brazil, and stored at −20°C. The venom was dissolved in sterile physiological saline [0.85% (w/v) NaCl solution] immediately before use.

Animals

Female BALB/c mice of different body weights were obtained from an established colony maintained by the Bioterio of the Instituto de Biotecnología (UNAM, Mexico). The animals were maintained and used under strict ethical conditions according to international recommendations for animal welfare (International Society on Toxicology, 1992) [22]. Groups of mice were injected via the intraperitoneal (i.p.) route with different amounts of Cdt, and after different intervals of time blood was collected from the retro-orbital plexus.
For the assays determining the kinetics of cytokines, since only a fraction of the injected animals died, the number of mice per experimental group ranged between 5 and 15 so as to obtain blood samples from at least five mice for each time interval. Mice were bled at 0, 1/4, 1/2, 1, 2, 4, 8, and 24 hours, and sera were separated and stored at −20°C until use.

Lethality

The probit method was used to calculate the median lethal dose (LD50) of Cdt. Four groups of female BALB/c mice with different body weights (10-12 g, 13-15 g, 16-20 g, and 21-25 g) were injected intraperitoneally (i.p.) with increasing doses of venom, and the number of mice that died was counted after 24 hours. Ten mice were used at each dose.

Measurement of rectal temperature

Groups of female BALB/c mice of 13-15 g were used to study the effects of Cdt on body temperature in an experimental room for animal behavior, which was maintained at 23-25°C. Each mouse was placed individually in a cage (19 × 12 × 11 cm), then removed every 15 minutes and held loosely in a small cloth bag, and the core body temperature was measured using a digital thermometer. After each measurement, the mouse was returned to its cage. Mice whose rectal temperature before Cdt administration was below 37°C were not used for experiments. Cdt was administered after the temperature became stable.

Blood biochemistry

Groups of female BALB/c mice of 13-15 g were injected i.p. with 0.5, 1, or 2 LD50 of Cdt dissolved in 0.1 mL of saline solution. Control mice received 0.1 mL of saline solution. Two hours after injection with Cdt, the animals were bled. The blood samples were allowed to stand until they formed a clot, and the sera were used in biochemical analysis. The glucose, urea, and creatinine levels present in sera from control mice or mice injected with Cdt were measured using specific kits (SPINREACT Diagnostics, Sant Esteve de Bas, Spain) according to the manufacturer's protocol.

Nitrite assay

The nitrite levels in sera from mice were determined as previously described by Schmidt et al., 1989 [23]. Briefly, 40 μL of each mouse serum sample was transferred to a 96-well plate and mixed with 40 μL of the reduction solution (NADPH 1.25 ng/mL; FAD 10.4 ng/mL; KH2PO4 0.125 M) containing 0.5 U of nitrate reductase for 2 hours at 37°C. After this time, 80 μL of Griess reagent (1 part 0.1% N-1-naphthyl-ethylenediamine dihydrochloride in water and 1 part 1% sulfanilamide in 3% concentrated H3PO4) was added to each well. The mixture was incubated for 5 minutes at room temperature and read at 540 nm in a microplate reader. Concentrations were determined by comparison with a standard curve of sodium nitrite. The detection limit of the assay was 1 μM nitrite.

Cytokines

The levels of the cytokines IL-4, IL-5, IL-6, IL-10, and IFN-γ in the serum from BALB/c mice were assayed by a two-site sandwich enzyme-linked immunosorbent assay (ELISA) [24]. In brief, ELISA plates were coated with 100 μL (1 μg/mL) of the monoclonal antibodies anti-IL-4, anti-IL-5, anti-IL-6, anti-IL-10, or anti-IFN-γ in 0.1 M sodium carbonate buffer (pH 8.2) and incubated for 6 hours at room temperature. The wells were then washed with phosphate-buffered saline containing 0.1% Tween-20 (PBS/Tween-20) and blocked with 100 μL of 10% fetal calf serum (FCS) in PBS for 2 hours at room temperature. After washing, duplicate serum samples of 50 μL were added to each well.
After 18 hours of incubation at 4°C, the wells were washed and incubated with 100 μL (2 μg/mL) of the biotinylated monoclonal antibodies anti-IL-4, anti-IL-5, anti-IL-6, anti-IL-10, or anti-IFN-γ as second antibodies for 45 minutes at room temperature. After a final wash, the reaction was developed by the addition of ortho-phenylenediamine (OPD) to each well. Optical densities were measured at 405 nm in a microplate reader. The cytokine content of each sample was read from a standard curve established with the appropriate recombinant cytokines (expressed in picograms per millilitre). The minimum levels of each cytokine detectable under the conditions of the assays were 10 pg/mL for IL-4, IL-5, IL-6, and IL-10, and 300 pg/mL for IFN-γ. To measure the cytotoxicity of TNF present in the serum from BALB/c mice, a standard assay with L-929 cells, a continuous fibroblast cell line, was used as described previously by Ruff and Gifford (1988) [25]. The percentage cytotoxicity was calculated as ((A(control) − A(sample))/A(control)) × 100. Titres were calculated as the reciprocal of the dilution of the sample at which 50% of the cells in the monolayer were lysed. TNF activity is expressed as pg/mL, estimated from the ratio of the 50% cytotoxic dose of the test sample to that of standard mouse recombinant TNF.

Statistical analysis

Data are expressed as the mean ± standard deviation. Statistical analyses were performed by Student's t-test, and the level of significance was set at P < .05.

Determination of LD50 and symptoms

To verify whether the venom has an effect on body weight, and also to determine the LD50, groups of female BALB/c mice with different body weights were injected intraperitoneally with distinct doses of Cdt. The LD50 value was calculated by probit analysis at 95% confidence. These animals were distributed in four groups with different body weights. As shown in Figure 1, BALB/c female mice presented different susceptibilities to Cdt (for the 10-12 g group, LD50 = 7.5 μg). After intraperitoneal injection of 1 LD50 of Cdt, the time course of mortality did not differ between the groups studied. In all groups, the majority of deaths occurred within the first 6 hours. No deaths were observed in mice injected with saline solution (results not shown). Thus, in subsequent experiments mice weighing 13-15 g were used. Death was usually preceded by certain signs or symptoms, such as hypothermia. Groups of BALB/c female mice of 13-15 g body weight were injected i.p. with 0.5, 1, or 2 LD50 of Cdt, and at different intervals of time specific signs were observed (data not shown). To determine the glucose, urea, and creatinine levels, groups of BALB/c female mice of 13-15 g body weight were injected i.p. with 0.5, 1, or 2 LD50 of Cdt for 2 hours. As shown in Figure 2, as the administered dose (in LD50 units) increased, a decrease in glucose levels was observed. The glucose levels were significantly lower for animal groups that received Cdt when compared with those obtained from the control groups (P < .01). Figure 3 shows that in all mice that received Cdt, regardless of the amount, the levels of urea in sera were significantly lower (P < .01) when compared with those obtained for the control group. The levels of creatinine in sera from groups of mice injected with Cdt are shown in Figure 4. The levels of creatinine increased in a dose-dependent manner. The maximum levels of creatinine were observed in sera from groups of mice injected with 2 LD50 (Figure 4).
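To illustrate the probit analysis referenced above, the Python sketch below fits a line to probit-transformed mortality fractions against log10 dose and reads the LD50 where the fitted probit crosses zero (i.e., 50% mortality). The doses and mortality fractions are synthetic, chosen only to bracket a value near the reported 7.5 μg; they are not data from this study, and the authors' exact computation is not described beyond "probit analysis".

```python
# Minimal probit-style LD50 estimate on synthetic data (not study data).
import numpy as np
from scipy.stats import norm

dose_ug = np.array([4.0, 6.0, 8.0, 10.0, 12.0])   # hypothetical doses
p_dead  = np.array([0.1, 0.3, 0.6, 0.8, 0.9])     # hypothetical mortality fractions

x = np.log10(dose_ug)
y = norm.ppf(p_dead)                # probit transform; norm.ppf(0.5) == 0
slope, intercept = np.polyfit(x, y, 1)
ld50 = 10 ** (-intercept / slope)   # dose where the fitted probit crosses 0
print(f"estimated LD50 ~ {ld50:.1f} ug")   # ~7.1 ug for these synthetic data
```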
Comparative in vivo mediator release upon Cdt venom injection To compare mediator release, such as cytokine secretion and nitric oxide production, female BALB/c mice of 13-15 g body weight were injected i.p. with 0.5, 1, or 2 LD50 of Cdt and bled after 2 hours. At this time, the levels of IL-5 and IL-6 were undetectable in serum from mice injected with the different LD50 doses. As shown in Figure 5, the levels of TNF, NO, IL-4, and IL-10 were significantly higher (P < .001) in sera from mice injected for 2 hours with different amounts of Cdt when compared with those obtained in sera from the control group. Interestingly, the results also showed that the levels of these mediators were consistently and significantly lower (P < .001) in sera from mice injected with 2 LD50 when compared with those obtained in sera from groups of mice that received 0.5 or 1 LD50 (Figure 5). In contrast, no significant difference was observed in the levels of IFN-γ present in sera from mice injected with different amounts of Cdt (Figure 5). Kinetics of mediator release upon Cdt venom injection To determine the kinetics of cytokine secretion and NO production, groups of female BALB/c mice of 13-15 g body weight were injected i.p. with 1 LD50 of Cdt and bled after different time intervals. The highest levels of NO2− after Cdt injection were observed at 30 minutes postinjection, decaying thereafter (Figure 6). Cdt induced a discrete increment of IL-6 levels at 15 minutes postinjection (Figure 6). The TNF and IFN-γ levels increased gradually, reaching their highest values at 2 hours postinjection and decaying thereafter (Figure 6). The highest levels of IL-5 were observed at 1 hour postinjection (Figure 6). Cdt was also capable of inducing an increase in the serum levels of IL-4 and IL-10, with the highest values occurring 4 hours postinjection and decaying thereafter (Figure 6). DISCUSSION Various factors can contribute to the specific signs and symptoms that follow stings or bites, including variations in venom toxicity [26]. However, it has been demonstrated that other factors may also contribute to clinical signs, such as the age or size of the victim, the site of the injection, and the vulnerability of the victim to the venom [15,26,27]. The present study was designed to simulate accidental envenomation in humans; the route of Cdt administration, the time elapsed between the injection and specific signs, the dose administered, and mediator production were studied. The experimental model should involve different susceptibilities to the toxic effects of the venom. This was achieved in the present study, in which susceptibility differed among female groups of different body weights: among the groups analyzed, female BALB/c mice of 10-12 g were significantly more susceptible to the lethal effects of Cdt than the other groups. In the present study, we observed that mice presented respiratory abnormalities following Cdt injection. These observations agree with previous studies showing that Cdt produces respiratory abnormalities in mice [28]. Various studies have shown that Crotalus venom induces generalized rhabdomyolysis in animals, causing myalgias, through a massive rise in serum myoglobin and creatine kinase levels accompanied by myoglobinuria [29].
Acute renal failure is the main cause of death observed in humans after envenomation by Cdt, with possible additional heart and liver damage [2][3][4][5][6]. In this study, we measured changes in several blood biochemical parameters in mice after Cdt injection. The serum glucose, urea, and creatinine measurements are described in detail in Figures 2, 3, and 4. These results agree with previous studies that showed clinical and laboratory alterations in animals immunized with snake venoms [30]. Cdt envenomation also presents an elevation of catecholamines, angiotensin II, glucagon, and cortisol accompanied by changes in insulin secretion [31]. In Crotalus envenomation, alterations of insulin and glucose metabolism could be responsible for the pathogenesis of a variety of clinical manifestations. The present study showed that the glucose levels were decreased in the blood of groups of mice injected with Cdt. Urea is formed in the liver and circulates in the blood in the form of urea nitrogen. In healthy humans, most urea nitrogen is filtered out by the kidneys and leaves the body in urine. If the kidneys are not functioning properly, or if the body is using large amounts of protein, the blood urea nitrogen level rises. In severe liver disease, the blood urea nitrogen level falls. In the present study, we observed decreased levels of urea in blood from mice injected with Cdt, which suggests liver failure. These results are in line with previous reports showing that human patients bitten by Cdt showed hydropic degeneration and mitochondrial injury in the liver [4]. The changes in blood parameters typical of Cdt were decreased glucose and urea levels and increased creatinine. The present study also shows that these alterations in serum were observed only when large amounts of Cdt were injected into the animals. Envenomation is characterized by a generalized inflammatory state. The normal reaction to envenomation involves a series of complex immunologic cascades that ensure a prompt protective response to venom in humans [32] and experimental animals [15][16][17][18][19]27]. Although activation of the immune system during envenomation is generally protective, shock develops in a number of patients as a consequence of an excessive or poorly regulated immune response to the injurious agent [19,32]. This imbalanced reaction may harm the host through a maladaptive release of endogenous mediators that include cytokines and nitric oxide. Cytokines are soluble protein mediators important for the orchestration of inflammatory responses of the human body [33]. The production of proinflammatory and anti-inflammatory cytokines is strictly controlled by complex feedback mechanisms [14,34,35]. [Figure 5: Mediator secretion. Groups of female BALB/c mice of 13-15 g were injected i.p. with 0.5, 1, or 2 LD50 of Cdt; after 2 hours, the animals were bled and the levels of cytokines and NO in serum were determined as described in materials and methods (see Section 2). Each point represents the mean of two independent experiments conducted with five to fifteen animals each. Statistically significant differences between injections are marked with an asterisk (P < .001).] Cytokines may be divided into proinflammatory and anti-inflammatory.
Proinflammatory cytokines, such as TNF-α, IL-1, and IL-8, mobilize immune system cells to proliferate and produce more cytokines, creating an inflammatory cascade, whereas anti-inflammatory cytokines, such as IL-10, function to dampen or control the inflammatory response. Proinflammatory cytokines are primarily responsible for initiating a potent defence against exogenous pathogens. In contrast, anti-inflammatory cytokines are crucial for downregulating the elevated inflammatory process and maintaining the homeostasis required for the correct functioning of vital organs [36]. However, excessive production of these mediators may significantly contribute to shock, multiple organ failure, and death [14,34,35]. Envenomation produces a constellation of clinical signs and symptoms resulting from an excessive systemic host inflammatory response that is largely mediated by cytokines released into the systemic circulation. Serum concentrations of specific cytokines such as TNF-α and IL-6 are frequently elevated in envenomated mice, and their concentrations correlate with the severity and outcome of envenomation. In addition, proinflammatory cytokines are produced in large quantities in envenomated mice; however, the specific role of these molecules in sepsis remains undefined. A balanced ratio of pro- and anti-inflammatory cytokines is important for an appropriate immune response; excessive inflammation or hyporesponsiveness can lead to envenoming complications. To determine the magnitude of the cytokine response caused by Cdt venom injection and to evaluate the balance of pro- and anti-inflammatory cytokines released during envenomation, we measured the levels of cytokines in serum from mice. TNF-α is a proinflammatory cytokine that plays an important role in the immune response to infections and cancer and in the regulation of inflammation [37]. The present study shows an elevation of serum concentrations of TNF-α, which occurs 2 hours after Cdt administration. IL-6 is produced by a variety of cell types during infection, trauma, and immunological challenge. The functional properties of IL-6 are extremely varied, and this is reflected by the terminology originally used to describe the activities of this cytokine. It has been described as having both pro- and anti-inflammatory effects, as well as being involved in a variety of immune responses. The results obtained in this study showed that the levels of IL-6 increased until 15 minutes after injection of 1 LD50 of Cdt, decaying thereafter. IFN-γ is produced by a variety of cell types and probably plays a role in the early stages of the host response to venoms. In the present study, the serum concentrations of IFN-γ were similar for all groups of mice injected with different amounts of Cdt. In groups of mice injected with 1 LD50 of Cdt, a modest increase in the levels of IFN-γ was observed up to 4 hours, decaying thereafter. IL-10 is a pluripotent immunoregulatory cytokine first characterized in T-cell clones from humans and mice. IL-10 is an anti-inflammatory cytokine that potently inhibits the secretion of proinflammatory cytokines such as TNF and IL-1 [38] and regulates the differentiation and proliferation of several immune cells [39]. The present study also shows that the levels of IL-10 increased until 4 hours in groups of mice injected with 1 LD50 of Cdt.
IL-4 has a wide range of functions; in vivo, this cytokine is principally responsible for the production of IgE in mice in response to a variety of stimuli that elicit Ig class switching to this Ig class [40]. The present study showed that Cdt has the ability to stimulate IL-4 production, which certainly exerts a modulatory effect on the host inflammatory response. In this study, we observed that the levels of all mediators, with the exception of IFN-γ, were consistently and significantly lower (P < .001) in sera from mice injected with 2 LD50 when compared with those obtained in sera from groups of mice that received 0.5 or 1 LD50 (Figure 5). These results agree with previous studies of crotoxin, the major neurotoxin present in Crotalus venom, which demonstrated immunosuppressive and immunomodulatory activities in experimental animals [41,42]. NO is known to be involved in multiple biologically important reactions [43,44]. This chemical compound is a gas that easily diffuses from the endothelial cells to the smooth muscle cells of the vascular wall. The present study showed that Cdt has the ability to stimulate NO production, which certainly exerts a modulatory effect on the host inflammatory response. The production of NO is one of the main mechanisms involved in endothelial function. When NO is synthesized from arginine by the NO synthase (NOS) reaction, citrulline, an intermediate product of the urea cycle, is formed. Thus, the urea cycle is bypassed by the NOS reaction. With respect to the levels of mediators, similar results were obtained for mouse groups with different body weights (data not shown). In this study, we showed that the levels of IFN-γ and TNF-α were higher in mice injected with 1 LD50 than in the control group and in the group injected with 2 LD50. Nevertheless, a direct correlation between the IFN-γ and TNF-α and the IL-4 and IL-10 cytokines was observed in mice injected with Cdt, indicating a mutual pro-/anti-inflammatory participation. In conclusion, the response to Cdt is regulated by both pro- and anti-inflammatory cytokines. In groups of mice examined shortly after injection, the pro-/anti-inflammatory balance deviated toward a proinflammatory-predominant type. In contrast, with increasing time after injection, the balance deviated toward anti-inflammatory dominance.
The Use of Sour and Sweet Whey in Producing Compositions with Pleasant Aromas Using the Mold Galactomyces geotrichum: Identification of Key Odorants Fermented products with a pleasant aroma and strong honey, rose, and fruit odor notes were developed through the biotransformation of a medium containing sour or sweet whey, with the addition of L-phenylalanine, by the Galactomyces geotrichum mold. In order to obtain the strong honey-rose aroma, G. geotrichum strains were screened and fermentation conditions were optimized to achieve a preferable ratio (>1) of phenylacetaldehyde to 2-phenylethanol formed by the Ehrlich pathway. This yielded post-fermentation products with a phenylacetaldehyde to 2-phenylethanol concentration ratio of 1.7:1. Additionally, the use of gas chromatography–olfactometry (GC–O) analysis and the calculation of odor activity values (OAVs) allowed 10 key odorants to be identified in the post-fermentation products. The highest OAVs were found for phenylacetaldehyde, with its honey odor, in both the sour and sweet whey cultures (3010 and 1776, respectively). In the variant with sour whey, the compounds with the next highest OAVs were 3-methyl-1-butanol (131), 3-(methylthio)-propanal (119), 3-methylbutanal (90), dimethyl trisulfide (71), 2,3-butanedione (37), and 2-phenylethanol (29). In the post-fermentation product with sweet whey, the compounds with the next highest OAVs were 3-(methylthio)-propanal (112), dimethyl trisulfide (69), and 2,3-butanedione (41). ■ INTRODUCTION There has, in recent years, been a growing demand for flavorings in industry, especially in food production. 1−3 These compounds enrich the aroma of food products, which is especially important given that modern food production technologies often simplify and shorten fermentation processes. Chemical synthesis of aroma compounds is highly efficient and results in relatively inexpensive products, but these have low acceptability among consumers, who prefer foods containing additives of natural origin. This means that there is a need for relatively efficient and straightforward technologies that can produce volatile compounds of natural origin. Particular attention should be paid to biotechnological processes, which do not suffer from the difficulties attendant on the extraction of aroma compounds from natural materials (including seasonality and the effect of climate change on the acquisition of the raw materials) and which can make use of industrial byproducts. 1,3−6 Galactomyces geotrichum is a mold that occurs naturally in dairy products and that, according to our recent studies, is responsible for the formation of rose-like aroma compounds, such as phenylacetaldehyde, 2-phenylethanol, and phenylacetic acid. 7,8 We have, moreover, noted that the concentration ratio of phenylacetaldehyde to 2-phenylethanol is unusually close to 1:1 in the fried cottage cheese we tested. Phenylacetaldehyde has a lower odor threshold (OT) than 2-phenylethanol in a range of food matrices (60 times lower in water, 10 times lower in oil, and 4 times lower in starch), 7,9−11 which is why a high content of it in the produced aroma is desirable. 6,7 Whey is a yellowish liquid obtained during cheese production by separating the curd. About 160 million tons of whey is produced globally each year. The whey produced during cheese production is the most contaminated waste in this sector, which is why it is so important to develop new methods for managing it.
During dairy production, sweet whey is obtained after enzymatic coagulation, while sour whey comes from cottage cheese production. Depending on the type, whey contains 93−94% water, 4.5−6.0% lactose, 0.6−1.1% protein, 0.8−1.0% minerals, 0.05−0.9% lactic acid, and 0.06−0.5% fat. 12,13 It should be noted that whey contains L-phenylalanine, which is a precursor of the rose-like aroma compounds formed in the Ehrlich pathway. 14 In this study, both types of whey were analyzed, but sour whey is preferred to sweet on account of how it is produced: sour whey is formed during the production of cottage cheese, when coagulation occurs by acidifying the milk to a pH of 5.1 or less. Fermentation by lactic acid bacteria results in more lactic acid than is produced in sweet whey, which affects the growth of G. geotrichum. 12,13,15 In addition, sour whey appears to be a more natural environment for the development of G. geotrichum from fried cottage cheese. During the production of this cheese, the environment is acidified by lactic acid bacteria, as with any kind of cottage cheese. Sour whey is formed in these types of processes, which is why it may also contain low-molecular-weight metabolites of lactic acid bacteria that can affect the metabolism of G. geotrichum. Furthermore, sour whey is more difficult to utilize than sweet whey, which is why we give special attention to new means of managing this byproduct. 16 At present, there is increasing interest in the biotechnological production of aroma mixtures, rather than of individual aroma compounds, 17 as mixtures can more closely resemble the products of natural processes that occur during food fermentation. 3,17,18 In the present study, the final fermentation product has thus been treated as an aroma composition of several compounds, and all its key odorants have been identified, in order to find the compounds responsible not only for the honey-like and rose-like aroma notes but also for the fruit-like, caramel-like, and butter-like notes. Potential applications of this type of aroma are food products made using fermentation processes, such as those of the baking, brewing, and dairy industries. In this type of product, it is possible to utilize whey, and it is also desirable to enrich the aroma with honey-like and rose-like odor notes. 19 The aim of this study was to determine the optimal conditions for the biotransformation by G. geotrichum of sour and sweet whey into post-fermentation products with an intense, pleasant honey-rose aroma. This was achieved by obtaining a preferable ratio (>1) of phenylacetaldehyde to 2-phenylethanol through the Ehrlich pathway, by screening G. geotrichum strains and adjusting culture conditions. The post-fermentation product with the most intense, pleasant honey-rose-fruit aroma then underwent sensomics analysis, 17 which included identification of the key aroma compounds by gas chromatography−olfactometry (GC−O) and gas chromatography−mass spectrometry (GC−MS) and quantitation of the most important aroma compounds by the stable isotope dilution assay (SIDA) and by calculation of odor activity values (OAVs). Screening of 39 G. geotrichum Strains. Shake flask cultivations were carried out in 300 mL Erlenmeyer flasks with 100 mL of medium containing, per liter (modified, based on Grygier et al. 6 ), 22.8 g of Na2HPO4·2H2O, 10.3 g of citric acid, 0.5 g of MgCl2, and 0.17 g of yeast extract. After sterilizing the medium, 60 g of sucrose and 21 g of L-phenylalanine were added per liter; these had been exposed to UV radiation for 30 min before use.
This procedure was intended to prevent the formation of aroma compounds due to high temperatures. This medium was employed in all the subsequent experiments. Each flask was inoculated with 10 μL of a G. geotrichum strain, which had previously been revived by inoculating an agar slant with chloramphenicol (incubation at 30°C for 72 h under aerobic conditions). Fermentation in flasks with the 39 strains of G. geotrichum was carried out in a water bath at 30°C for 7 days with shaking (150 rpm). Lyophilization of the G. geotrichum Strain. The strain selected during screening was lyophilized to standardize the inoculum concentration. Four flasks with the selected mold strain were prepared in the same way as during screening. Fermentation in the flasks was carried out in a water bath at 30°C for 72 h with shaking (150 rpm). The cultures were centrifuged for 10 min at 2012.4g (3000 rpm) and 20°C after fermentation. The G. geotrichum precipitate was resuspended in 400 mL of a mixture of base medium and 10% inulin solution (1:1, v/v). To determine the number of colony-forming units (CFUs), the culture was sequentially diluted and plated onto agar plates with chloramphenicol. The plates were incubated for 72 h at 30°C, and the CFUs were then counted to calculate the number of colonies per milliliter of sample. The freeze-drying process was conducted in a Beta 1-16 freeze dryer (Martin Christ, Osterode am Harz, Germany). The process was initiated with a freezing step at −35°C for 2 h, followed by the main drying stage at 15°C for 20 h, and the final drying at 22°C (5 h). The dried preparations were collected into glass jars under a nitrogen atmosphere, each of which received the product formed by freeze-drying 3 mL of culture. Optimization of Culture Conditions. We analyzed the effects of the type of sugar, the pH of the medium, and the incubation temperature on the sensory profile of the aroma produced by the selected strain of the G. geotrichum mold. Optimization was carried out on two types of media: the first variant was prepared using the base medium with sour whey, while the second used sweet whey. In both cases, spray-dried whey was used at a concentration of 130 g/L. Citric acid as a component of the nutrient medium was replaced at this stage with lactic acid, as this naturally occurs in whey. 12 Once the medium had been prepared, the pH was adjusted to the set value under sterile conditions by adding lactic acid in the presence of litmus papers as pH indicators. Shake flask cultivations were carried out in 300 mL Erlenmeyer flasks with 200 mL of the different medium variants. Each flask was inoculated with 3.4 × 10⁶ CFU of lyophilized G. geotrichum mold. Type of Sugar. We analyzed the effects of four types of sugar (sucrose, glucose, galactose, and fructose) on the aroma profile of the culture. Each type of sugar was added to the medium at a concentration of 60 g/L. As before, the UV-treated sugar was added to the sterilized medium. The pH of the media was adjusted to 5.0. Fermentation was carried out in a water bath at 30°C for 7 days with shaking (150 rpm). pH Value. The effect of pH on the culture was examined by carrying out fermentation on nutrient media with pH values of 3.0, 4.0, and 5.0. The sugar in the medium was sucrose. Fermentation was carried out in a water bath at 30°C for 7 days with shaking (150 rpm). Incubation Temperature. The incubation temperature was optimized by culturing G. geotrichum at three temperatures. The sugar in the medium was sucrose, and the pH value was 5.0.
Incubation was carried out in water baths at different temperatures (25, 30, and 35°C) for 7 days with shaking (150 rpm). Sensory Evaluation of Cultures. Ten panelists experienced in descriptive sensory analysis carried out the evaluation of the samples. The evaluation was run in triplicate in separate odor-profiling sessions. The odor descriptors were selected from the basic flavor descriptive language (Givaudan Roure Flavor) 20 and were determined in preliminary tests, which looked for the variant with the highest possible ratio of phenylacetaldehyde to 2-phenylethanol. The qualities used were honey-like, caramel-like, rose-like, and fruit-like. The sensory panel also evaluated the general desirability of the aroma. Sensory analysis was performed by scoring odor descriptors on a 10 cm linear scale, with the beginning labeled "none" and the end labeled "very strong". We collected 20 g samples of the G. geotrichum cultures from the various culture-condition variants and placed them in 100 mL glass containers. These were presented to the panelists at room temperature. The sensory evaluation of samples obtained from the bioreactor cultures was carried out in the same way, except that, in addition to the previously mentioned odor descriptors, the following notes were also assessed: butter-like, cheese-like, and sour-like. The results were converted to numerical values for data analysis. Bioreactor Cultures. For the biotechnological production of the aroma composition, the selected G. geotrichum mold culture parameters were used in a laboratory-scale bioreactor. Bioreactor cultures were carried out in a 2.3 L Labfors 5 bioreactor (Infors HT, Bottmingen, Switzerland) with a working volume of 2 L, aerated at 1.5 vvm, with the inlet air sterilized by filtration. The medium, prepared in the same way as for the previous flask cultures, was stirred at a speed of 150 rpm. The batch fermentations were carried out in two variants, with sour whey and with sweet whey, and each variant was inoculated with 3.4 × 10⁷ CFU of lyophilized G. geotrichum mold. The incubation temperature, medium pH (set at the beginning of the culture), and type of sugar were determined from the results obtained during the optimization of the culture parameters. The pH of the medium, as with the flask cultures, was set using lactic acid before inoculation and was not regulated during the experiment. Although the pH value was not actively controlled throughout the fermentation process, it was measured at the end, with a value of 4.7. For each variant, blank tests were also performed under the same culture conditions, without inoculation of the medium with G. geotrichum. Headspace Solid-Phase Microextraction (HS-SPME). HS-SPME was used as an isolation method to determine the levels of phenylacetaldehyde and 2-phenylethanol in samples collected during the screening of the 39 G. geotrichum strains. This employed a divinylbenzene/Carboxen/polydimethylsiloxane (DVB/CAR/PDMS) fiber (Supelco, Bellefonte, PA, U.S.A.) in combination with an MPS 2XL multipurpose sampler (GERSTEL, Mülheim an der Ruhr, Germany). Samples (10 g) were placed in 20 mL headspace vials, spiked with internal standards (d5-phenylacetaldehyde and d5-2-phenylethanol) to a concentration of 500 ppb each, and sealed with septum magnetic caps. The compounds were extracted from the headspace of the vials at 40°C for 30 min. Afterward, the analytes were desorbed in splitless mode at the GC injection port at 250°C. Solvent-Assisted Flavor Evaporation (SAFE).
Odor-active compounds were isolated from the fermentation broth obtained by biotransformation of the sour and sweet whey in the bioreactor using the SAFE method described by Engel et al. 21 Samples (50 g) were mixed with dichloromethane (100 mL) and spiked with the internal standard [2H8]naphthalene (25 μg). The samples were then extracted for 2 h by shaking on a horizontal shaker, and the volatiles were isolated by SAFE extraction. In the next step, the extracts were dried over anhydrous sodium sulfate. Finally, the extracts were concentrated to approximately 500 μL using a Kuderna-Danish concentrator (Sigma-Aldrich, Poznan, Poland). GC−O. SAFE extracts underwent GC−O analysis to identify odor-active compounds, using an HP 5890 chromatograph (Hewlett-Packard, Wilmington, DE, U.S.A.) with two columns of different polarities: SPB-5 (30 m × 0.53 mm × 1.5 μm) and SUPELCOWAX 10 (30 m × 0.53 mm × 1 μm) (Supelco, Bellefonte, PA, U.S.A.). The effluent was divided between the olfactometry port, with humidified air as a makeup gas, and a flame ionization port using a Y splitter in the GC. The operating conditions for the SPB-5 column were as follows: an initial oven temperature of 40°C (1 min) raised at 6°C/min to 180°C and then at 20°C/min to 280°C. For the SUPELCOWAX 10 column, the operating conditions were as follows: an initial oven temperature of 40°C (2 min) raised to 240°C at a rate of 6°C/min and held for 2 min isothermally. Injection of the SAFE extract (2 μL) into the GC column was performed in splitless mode. GC-effluent sniffing (GC−O) was carried out by three panelists, who detected odor-active regions and specified notes for the analyzed volatiles. For all peaks and flavor descriptors with specific retention times, retention indices were calculated in order to compare results with those obtained by GC−MS and with literature data. For each compound, the retention indices (RIs) were calculated using a homologous series of C7−C24 n-alkanes. GC Quantitation by Stable Isotope Dilution Assays (SIDA). All compounds identified during GC−O analysis were quantified by the SIDA method. To this end, stock internal standards of the labeled isotopologues were prepared in diethyl ether and added to the samples of post-fermentation products after bioreactor culture, at concentrations similar to those present in the post-fermentation broth for each compound. The samples prepared in this way were extracted by the SAFE method. The identified aroma compounds were analyzed by GC−MS, monitoring the intensities of the respective ions presented in Table 1. For all volatiles, response factors were calculated using a standard mixture of labeled and unlabeled compounds at a known concentration of 500 ppb. In the case of 3-methylbutanal, for which no direct isotopologue was available, we used a 2-methylbutanal isotopologue, which in our opinion is the most similar to this compound. We then applied the correction using the response factor. The concentrations of the analyzed volatiles in the samples were calculated using the peak area of the analyte and that of its corresponding internal labeled standard obtained for selected ions. The odor activity values (OAVs) were then calculated by dividing the concentration of a given analyte by its odor threshold (OT) value determined in water. The results obtained for the blank samples were subtracted from the results obtained for the sour and sweet whey products. ■ RESULTS AND DISCUSSION Screening of G. geotrichum Strains.
In the first experiment, 39 strains of G. geotrichum molds were characterized in terms of their ratio of phenylacetaldehyde to 2-phenylethanol. This experiment was intended to find strains with the ability to produce more phenylacetaldehyde than 2-phenylethanol through the Ehrlich pathway. When L-phenylalanine is transformed, both compounds are usually produced and are present at the same time in the fermentation products; however, in most cases, 2-phenylethanol dominates in quantity over phenylacetaldehyde. Reversing the ratio to give higher amounts of phenylacetaldehyde would increase the aromatization power. The results in Figure 1 thus illustrate the ratio of phenylacetaldehyde to 2-phenylethanol obtained during fermentation. It can be seen that most strains produced significantly greater amounts of 2-phenylethanol than phenylacetaldehyde. However, for strains 32 and 36, the ratios of phenylacetaldehyde to 2-phenylethanol were 1.6:1 and 1.1:1, respectively. Based on these results, strain 32 was selected for further study; a sketch of this selection criterion is given below. Optimizing Culture Conditions. The aim of this experiment was to optimize the culture conditions so as to intensify the pleasant honey- and rose-like aroma produced by the Ehrlich pathway from L-phenylalanine. To this end, the type of sugar added, the pH, and the incubation temperature were varied. Each culture variant was subjected to a sensory analysis with four target odor descriptors (honey-like, rose-like, fruit-like, and caramel-like) plus general desirability. The effect of the type of sugar in the medium on the sensory quality of the post-fermentation product is outlined in Figure 2. Our preliminary studies showed that G. geotrichum molds do not ferment lactose, which is why sucrose, glucose, galactose, and fructose were used in the experiment as nutrient ingredients. The results in Figure 2 illustrate that, for both fermentation products, the panelists ranked general desirability highest when sucrose was used. In the sour whey culture, the broadest aromatic profile was achieved for the variants with sucrose and with galactose (Figure 2A). Both fermentation products gave strong honey-like and rose-like notes. On the other hand, when using fructose and glucose in a sour whey fermentation, the aroma was the weakest for all descriptors. In the case of the sweet whey culture (Figure 2B), again the sucrose variant had the most diverse odor profile, with strong honey-like, rose-like, and caramel-like notes. The use of glucose and fructose as components of the sweet whey medium lowered the intensity of the rose-like and caramel-like notes, while the addition of galactose was the least appreciated by the panelists, who in this case ranked all the descriptors lowest. Taking these results into consideration, sucrose was selected as the carbon source that yielded the most pleasant aroma, with strong honey- and rose-like odor notes. The presence of a rose-like aroma is typical of certain types of cheeses and fermented food products, such as black tea and pumpernickel bread. 7,22−24 It has been well established that during L-phenylalanine transformation, honey-rose aroma compounds are produced, including phenylacetaldehyde, 2-phenylethanol, phenylacetic acid, and 2-phenylethyl acetate. These compounds are formed in the Ehrlich pathway. 4,25,26 The resulting 2-phenylethanol can then be esterified to 2-phenylethyl acetate. 25 An alternative to the reduction of phenylacetaldehyde to 2-phenylethanol is its oxidation to phenylacetic acid. 27
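Returning to the screening step above, the following minimal sketch ranks strains by their phenylacetaldehyde to 2-phenylethanol ratio and keeps those above 1; the strain names and concentrations are hypothetical placeholders, not the measured data of Figure 1.

```python
# Hedged sketch: rank strains by phenylacetaldehyde/2-phenylethanol ratio.
# Concentrations (ppb) are hypothetical placeholders, not the Figure 1 data.
strains = {
    "strain_31": {"phenylacetaldehyde": 210.0, "2-phenylethanol": 640.0},
    "strain_32": {"phenylacetaldehyde": 800.0, "2-phenylethanol": 500.0},
    "strain_36": {"phenylacetaldehyde": 550.0, "2-phenylethanol": 500.0},
}

def pa_pe_ratio(levels):
    return levels["phenylacetaldehyde"] / levels["2-phenylethanol"]

# Keep only strains with the preferred ratio (>1) and sort best-first.
ranked = sorted(
    ((name, pa_pe_ratio(levels)) for name, levels in strains.items()),
    key=lambda item: item[1],
    reverse=True,
)
for name, ratio in ranked:
    marker = "selected" if ratio > 1 else "rejected"
    print(f"{name}: ratio = {ratio:.1f} -> {marker}")
```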
Although the formation pathways of these compounds are interdependent, the compounds are rarely all present in one product at the same time. 22,25 It should be noted that there are dependencies between the compounds arising in the Ehrlich pathway that affect the characteristics of the aroma. According to Whetstine et al., 22 phenylacetaldehyde plays a more important role in shaping the rose aroma than does phenylacetic acid. However, it has been shown that phenylacetaldehyde, in combination with even a small amount of phenylacetic acid, produces a much more intense rose aroma than the sum of the individual impressions of these compounds. This phenomenon is most pronounced at high concentrations of phenylacetaldehyde in the product. 22 According to Dunkel et al., 17 phenylacetaldehyde and 2-phenylethanol are equally often recognized as key odorants, in 22.5% and 22.9% of food products, respectively. Numerous studies of fermented products have shown that both compounds are formed during fermentation and are present in the final product. For example, Schuh and Schieberle 23 identified phenylacetaldehyde and 2-phenylethanol formed during the fermentation of black tea leaves at a concentration ratio of 0.3:1. On the other hand, in our research on ripened cheese, we found phenylacetaldehyde and 2-phenylethanol at a concentration ratio of 0.7:1, which correlated with the sensory analysis and the honey-rose flavor. 7 Further analysis confirmed that it is G. geotrichum that is responsible for this biotransformation and for the formation of phenylacetaldehyde and 2-phenylethanol. This also reinforces the fact that phenylacetaldehyde has stronger aromatization power than 2-phenylethanol and therefore has a higher potential to bring a desirable rose-like aroma to fermented products. In our study, the presence of L-phenylalanine in the medium, as a component of whey and as a nutrient ingredient for G. geotrichum, allows the formation of honey-like and rose-like compounds such as phenylacetaldehyde, 2-phenylethanol, phenylacetic acid, and 2-phenylethyl acetate. G. geotrichum grows over broad ranges of temperature and pH, although 25−30°C and pH 5.0−5.5 are suggested as optimal. 28 We tested three incubation temperatures (25, 30, and 35°C) to find which is optimal for the growth of G. geotrichum and for the honey-rose aroma. The sensory profiles obtained were broadest and had the strongest honey-like note for the cultures grown at 30°C, for both the sour and sweet whey variants. Panelists noted the presence of a strong rotten-fruit note in the samples incubated at 25°C, while at 35°C, a very poorly perceptible aroma was obtained. Comparing the effects of pH values of 3.0, 4.0, and 5.0 on the aroma showed that the most intense honey-rose odor profile was obtained by incubation on medium at pH 5.0, again for both types of whey. The aroma produced using the medium at pH 4.0 was about half as intense as at pH 5.0, while the culture at pH 3.0 had the least intense sensory profile. The results for the individual descriptors are consistent with the assessments of general desirability for the variants we tested. In both sour and sweet whey cultures, samples with medium at pH 5.0 and those with an incubation temperature of 30°C were rated most desirable, at 6 to 7 points. In comparing the types of sugar in the medium, the highest values of desirability were achieved with sucrose in both whey variants.
It should be noted that neither incubation temperature nor pH value caused as large a difference as did sugar type. Based on the sensory evaluation and quantitative analysis of the aroma compounds in further experiments, the following were selected as the ideal culture conditions: sucrose in the medium, a medium pH of 5.0, and an incubation temperature of 30°C. Sensory Evaluation of Bioreactor Cultures. Having determined the optimal culture parameters, the fermentation process was moved to a larger scale with a 2.3 L bioreactor using media with sour and sweet whey. In the first stage, the aroma compositions we obtained underwent extended sensory evaluation, the results of which are given in Figure 3; these show that both compositions had strong honey and rose aromas, with a slight advantage for the sour whey. In addition, the sour whey culture had a more intense fruit-like aroma, while the sweet whey sample showed a significantly less intense caramel-like note. Butter-like and cheese-like flavor notes were barely perceptible in either variant, whereas the sour whey culture showed a sour-like aroma of medium intensity. The difference between the odor profiles of the bioreactor cultures and the flask cultures may be associated with the more carefully controlled conditions in the bioreactor, such as aeration. Identification of Key Aroma Compounds in a Post-Fermentation Product by Means of GC−O Analysis and Calculation of OAVs. In the SAFE extracts prepared from samples obtained after bioreactor fermentation of the sour and sweet whey media by G. geotrichum, 10 compounds were identified by GC−O analysis. All were then quantified, and their OAVs were calculated by dividing the concentration of the analyte by its odor threshold value. The results in Table 2 show that for both whey variants, the highest OAVs were recorded for phenylacetaldehyde, with its strong honey aroma (3010 for sour whey and 1776 for sweet whey, respectively), although this was not the compound with the highest concentration. This compound is characterized by a strong honey aroma and is formed by the Ehrlich pathway, which involves the transamination of L-phenylalanine to phenylpyruvate, its decarboxylation to phenylacetaldehyde, and its reduction to 2-phenylethanol. 4,25,26 Its odor threshold in water is 60 times lower than that of 2-phenylethanol; however, much more 2-phenylethanol, with its rose aroma, is formed in the Ehrlich pathway. In our research, the 2-phenylethanol level was found to be 7090 μg/kg for samples with sour whey and 4125 μg/kg for sweet whey, which means that the concentration ratio of phenylacetaldehyde to 2-phenylethanol is 1.7:1 in both variants. As mentioned, this ratio ensures greater aromatization power, which is clearly seen in the OAVs, which are over 100-fold higher for phenylacetaldehyde than for 2-phenylethanol. Additionally, our results show that phenylacetic acid is formed in the bioreactor; this is a third compound with a honey, rose-like aroma, derived in the Ehrlich pathway by oxidation of phenylacetaldehyde. 27 This means that phenylacetic acid is present in products that also contain phenylacetaldehyde and/or 2-phenylethanol, or one of them, as compounds formed in the same process, such as dairy products, fermented beverages, beans, and soy sauce. 7,29−32 We can observe here a ratio between the concentrations of phenylacetaldehyde, 2-phenylethanol, and phenylacetic acid of 1.7:1:0.2 in sour whey and 1.7:1:0.8 in sweet whey.
In the fried cottage cheese from which G. geotrichum was isolated, these compounds were found in a ratio of 0.7:1:0.2 after 4 days of ripening, with the concentration of 2-phenylethanol at 1892 μg/kg, whereas in fermented cocoa beans this ratio was 0.03:1:2.1 and 2-phenylethanol was at a concentration of 2100 μg/kg. 7,33 In the drink produced by fermentation of wort by Trametes versicolor, all three compounds were found in a ratio of 5.2:1:2.7. The concentration of 2-phenylethanol in the analyzed product was 31 μg/L, and despite phenylacetaldehyde having the highest OAV in the aroma tested, the product was characterized by only a slight floral aroma. 29 It should be noted that phenylacetaldehyde, 2-phenylethanol, and phenylacetic acid are found together in fermented products but that their levels may vary greatly depending on the type of substrate and the conditions of the fermentation process. 27 An example is the traditional Chinese fermented red pepper paste, which contains phenylacetaldehyde and 2-phenylethanol in a ratio of 0.05:1 (at a 2-phenylethanol concentration in the product of 129.22 μg/kg) and no phenylacetic acid at all. 34 It should be noted, however, that for both phenylacetaldehyde and 2-phenylethanol, the post-fermentation product with sour whey contained 1.7 times more of these compounds than that with sweet whey. The aroma compounds discussed earlier are important odorants in the aroma produced by G. geotrichum; however, it should be noted that human perception of mixtures of various flavor compounds is not just the sum of their individual aroma notes. 17,35 It has been shown that mixtures containing more than four odorants are characterized by the loss of the individual fragrance notes of each compound in favor of a specific perception of the entire aroma. 17 Most aromas of natural origin are complex mixtures of different odorants, as is the aroma produced by G. geotrichum on the test substrates. 35 Another example is rape honey, in which 2,3-butanedione, with its butter aroma (typical of fermentation processes), is formed as a result of natural transformations; dimethyl trisulfide, with its cabbage-like aroma, and (E)-β-damascenone, with its scent of cooked apples, are also produced. These compounds are characterized by relatively high OAVs, but the aroma as a whole is not directly related to any of these descriptors. 36 Another compound identified as one of the key odorants in the post-fermentation products is 3-methyl-1-butanol, with its fruit aroma. This is the second most abundant compound (121,511 μg/kg) in samples based on sour whey and is characterized by a relatively high OAV (131), whereas in the variant with sweet whey it is present at 1546 μg/kg, being the sixth most abundant compound. Its formation is associated with fermentation processes, and its concentration increases to a maximum of 5600 μg/kg during the three-stage sourdough fermentation that is part of the preparation of pumpernickel bread. 24 This compound is present in fermented beverages, such as wine and cider. 37−39 It is produced by the microorganisms associated with grapes grown for wine production. It is the substance found in the highest concentration in the aromas produced by Paenibacillus sp. and Aureobasidium pullulans, and in the second highest concentration in the aroma produced by Sporobolomyces roseus. 37 In apple ciders, after fermentation by Hanseniaspora osmophila and by its mixed culture with Torulaspora quercuum, 3-methyl-1-butanol was found at concentrations from 3.56 to 4.65 μg/L. 38
This compound is also produced by Staphylococcus xylosus, which gives its characteristic aroma to fermented meat products. 40 Other compounds identified in high concentrations in the cultures tested here are acetic acid and butanoic acid, with their odors of vinegar and cheese, respectively. Acetic acid has been shown to be formed during enzymatic degradation in the fermentation process of fruit pulp. 33 Butanoic acid is also a characteristic fermentation product formed during the transformation of carbohydrates. 17 In addition, subjecting whey to controlled fermentation processes could be a way of producing acetic or butanoic acid. 12 In samples with sour whey, acetic acid is the key odorant found in the highest concentration, but due to its high OT value, this results in an OAV of 2.5, the lowest value after 2-phenylethanol. In the sweet whey variant, acetic acid is present at a concentration lower than its OT value, which gives an OAV < 1. However, butanoic acid is present at similar concentrations in both the sweet and sour whey cultures (18,523 and 17,450 μg/kg, respectively). Another compound identified as a key odorant is 2,3-butanedione, with its aroma of butter, which is present at 550 and 610 μg/kg in the sour and sweet whey variants, respectively. This compound is present in various types of cheese due to bacterial or mold fermentation, being a key aroma compound in fried cottage cheese and in Lazur, Camembert, Cheddar, and Emmental cheeses. 7,9,22,41 The compound with the eighth highest concentration in both culture variants (51 and 48 μg/kg, respectively, for sour and sweet whey) was identified as 3-(methylthio)-propanal, which has a smell of boiled potatoes. This compound is characterized by a low OT value, which resulted in the third and second highest OAVs for the sour and sweet whey variants, respectively. Both 3-(methylthio)-propanal and 3-methylbutanal, with its malty aroma (found at 45 and 0.8 μg/kg in the sour and sweet whey, respectively), are formed by Strecker degradation from methionine and leucine, respectively. This reaction can also be carried out enzymatically in the Ehrlich pathway. 24,42 It has been demonstrated during the preparation of bread dough that increasing the amount of yeast and applying proper fermentation conditions (reduced temperature and shorter time) increased the content of 3-(methylthio)-propanal in the bread. 43 3-Methylbutanal is present in high concentrations in soy sauce due to the formation during the fermentation process of large amounts of free amino acids, which are precursors of this compound. 32 The final key odorant determining the aroma produced by G. geotrichum molds is dimethyl trisulfide, with the fifth highest OAV in the sour whey and the third highest in the sweet whey. The cabbage odor note of this compound was not strongly perceptible in the aroma studied, but it significantly affects the overall composition. According to a review of key odorants, the following compounds identified as key odorants in the post-fermentation product analyzed here (acetic acid, butanoic acid, 2,3-butanedione, 3-(methylthio)-propanal, and 3-methylbutanal) contributed to the aroma of more than 25% of 227 food samples and therefore belong to the "generalist" group. 17 Combined analysis of the results from GC−O analysis, from the OAVs, and from the sensory evaluation shows that phenylacetaldehyde is responsible for the honey note and 2-phenylethanol for the rose note in the aroma produced by the G. geotrichum mold.
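To make the quantitation pipeline concrete, here is a minimal sketch of SIDA-style concentration estimation followed by OAV calculation; the peak areas, response factors, internal-standard amounts, and odor thresholds are hypothetical placeholders, not the values behind Tables 1 and 2.

```python
# Hedged sketch of SIDA quantitation and OAV calculation.
# All numbers are hypothetical placeholders, not the paper's measured values.

def sida_concentration(area_analyte, area_standard, standard_conc, response_factor):
    """Concentration from the analyte/labeled-standard peak-area ratio (μg/kg)."""
    return (area_analyte / area_standard) * standard_conc * response_factor

# (analyte peak area, labeled-standard peak area, IS conc μg/kg, RF, OT in water μg/kg)
odorants = {
    "phenylacetaldehyde": (152_000, 75_000, 500, 0.98, 4.0),
    "2-phenylethanol":    (240_000, 60_000, 1000, 1.05, 140.0),
    "2,3-butanedione":    (18_000, 30_000, 100, 0.91, 15.0),
}

for name, (a_an, a_is, c_is, rf, ot) in odorants.items():
    conc = sida_concentration(a_an, a_is, c_is, rf)
    oav = conc / ot                       # OAV = concentration / odor threshold
    flag = "key odorant" if oav >= 1 else "below threshold"
    print(f"{name}: {conc:.0f} μg/kg, OAV = {oav:.0f} ({flag})")
```

Normalizing each analyte to its own isotopically labeled standard is what makes the method robust to losses during SAFE extraction and concentration, since both species behave almost identically through the workup.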
The occurrence of a delicate buttery note can be associated with 2,3-butanedione. 3-Methyl-1-butanol is present at a higher concentration in the cultures with sour whey, and this variant is also characterized by a more intense fruity note, so this note should be associated with the presence of this compound. Acetic acid is responsible for the more intense sour aroma note in the bioreactor culture with sour whey, being present in this case at a higher concentration than in the sweet whey. It is hard to link the remaining odor descriptors to the specific aroma compounds responsible for their appearance, probably because they result from the combination of several compounds that are individually described by other notes. In summary, we determined the optimal conditions for culturing G. geotrichum, resulting in post-fermentation products with an intense, pleasant honey-like aroma. The screening of G. geotrichum strains and the optimization of fermentation conditions were conducted with the aim of achieving the preferred ratio (>1) of phenylacetaldehyde to 2-phenylethanol in the Ehrlich pathway. Strain 32 was chosen for both medium supplements, and the selected culture conditions were sucrose as the medium component, a pH of 5.0, and an incubation temperature of 30°C. Further experiments in a 2.3 L bioreactor gave fermented products with strong honey-like, rose-like, and fruit-like aromas. The application of GC−O analysis and the calculation of OAVs allowed 10 key aroma compounds to be identified in the post-fermentation culture. Phenylacetaldehyde, with its honey aroma, had the highest OAV among all the key odorants in both the sour and sweet whey variants. In addition, in both culture variants, we determined the presence of phenylacetaldehyde and 2-phenylethanol, responsible for the honey and rose odor notes, respectively, in a ratio of 1.7:1, which resulted in a much more intense aroma. However, the concentration of these compounds was 1.7 times higher in the product with sour whey than with sweet whey. In conclusion, our results show that fermentation with G. geotrichum on a medium with sour or sweet whey makes it possible to obtain a post-fermentation product with strong honey-like and rose-like aromas due to the high ratio of phenylacetaldehyde to 2-phenylethanol formed by the Ehrlich pathway. Supporting Information: TIC chromatogram of the SAFE extract obtained from sour whey fermentation by G. geotrichum in a 2 L bioreactor (PDF).
Predictive factors of renal involvement in cryoglobulinaemia: a retrospective study of 153 patients Abstract Background The course of cryoglobulinaemia varies widely, from asymptomatic patients to severe vasculitis syndrome. Renal involvement (RI) is the major prognostic factor and frequently occurs several years after diagnosis. However, the predictive factors for RI are not well known. The aim of our study was to identify predictive factors of RI during cryoglobulinaemia. Methods We retrospectively reviewed the clinical charts of a consecutive series of 153 patients positive for cryoglobulinaemia in the University Hospital of Lyon (France). RI was defined either histologically or biologically if cryoglobulinaemia was the only possible cause of nephropathy. Results Among the 153 positive patients (mean age 55 years, 37% male), cryoglobulinaemia was associated with RI in 45 (29%) patients. Sixty-five percent of patients had Type II cryoglobulinaemia, 28% had Type III, and 7% had Type I. Autoimmune diseases were the most common aetiology (48%), followed by infectious diseases (18%) and lymphoproliferative disorders (13%). Membranoproliferative glomerulonephritis was the main histological pattern (93% of the 14 histological analyses). A multivariable logistic regression showed that Type II cryoglobulinaemia, a high serum cryoglobulin concentration, the presence of an IgG kappa monoclonal component, and diabetes were independently associated with the risk of developing RI. Conclusion We identified several factors predictive of RI in patients with cryoglobulinaemia, which were different from the diagnostic criteria for cryoglobulinaemic vasculitis. This could suggest a specific pathophysiology for RI. We suggest performing extensive renal monitoring and ensuring nephroprotection when a diagnosis of cryoglobulinaemia is made in patients with these predictive factors. INTRODUCTION Cryoglobulins are immunoglobulins (Ig) that precipitate at temperatures below 37°C and dissolve after rewarming. The term cryoglobulinaemic vasculitis is used to describe patients with symptoms related to the presence of cryoglobulins. Many patients with cryoglobulinaemia remain asymptomatic [1]. However, precipitation of cryoglobulins in small vessels can be responsible for vasculitis, with clinical symptoms ranging from mild palpable purpura, arthralgias, and fatigue to severe vasculitis with skin necrosis and glomerulonephritis (GN), as well as involvement of the peripheral nerves, central nervous system, gastrointestinal tract, lungs, and myocardium [2]. After immunochemical typing, cryoglobulins are sorted according to the classification of Brouet et al. [3]: Type I cryoglobulinaemia comprises a single monoclonal Ig; Types II and III are mixed cryoglobulinaemias, associating a monoclonal component with polyclonal Ig in Type II and comprising only polyclonal Ig in Type III. These cryoproteins may be associated with distinct underlying diseases, encompassing lymphoproliferative disorders in Type I and infectious and autoimmune diseases in Types II and III, or, more rarely, may be primary [4,5]. Renal involvement (RI) during cryoglobulinaemia is mainly characterized by urinary abnormalities [6,7], proteinuria being reported in 88-100% of cases [8][9][10][11] and haematuria in almost all patients [8,10]. Elevation of plasma creatinine is described in 47-63% of patients with RI [7,11].
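Renal function in this cohort is assessed in the Methods below against an eGFR threshold of <60 mL/min/1.73 m2 computed with the CKD-EPI equation. As a hedged illustration, here is a minimal sketch of the 2009 CKD-EPI creatinine equation; the patient values in the example are hypothetical.

```python
# Hedged sketch of the 2009 CKD-EPI creatinine equation (mL/min/1.73 m2).
# Serum creatinine in mg/dL; the example values below are hypothetical.

def ckd_epi_2009(scr_mg_dl, age_years, female, black=False):
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# A hypothetical 55-year-old male with creatinine 1.4 mg/dL:
egfr = ckd_epi_2009(1.4, 55, female=False)
print(f"eGFR = {egfr:.0f} mL/min/1.73 m2")  # ~56, i.e. <60, meeting the RI biological criterion
```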
Pathological features observed on kidney biopsies of patients with RI are often characterized by extensive glomerular infiltration by monocytes, with double contours of the basement membrane and hyaline intraluminal thrombi, evocative of membranoproliferative glomerulonephritis (MPGN) [8]. Immunofluorescence analysis defines the RI as an immune complex-mediated MPGN in the new classification of MPGN [12], showing intraglomerular subendothelial deposits of Ig identical to those of the cryoprecipitates, together with complement components. Renal necrotizing vasculitis and extracapillary crescents are rarely observed. RI has been reported in 18-40% of patients [13][14][15], and in patients with cryoglobulinaemic vasculitis, death occurs more frequently in those with RI [9,[14][15][16][17][18][19]. The occurrence of RI follows the diagnosis of cryoglobulinaemia after a mean follow-up of 2.6-4 years [7,11] and can occur up to 41 years after diagnosis [11]. In cases of RI, extrarenal manifestations are rarely associated [19], and if they are, their onset is not concomitant with the onset of RI [11]. Long-term predictors of survival in patients with RI are well known [7,11], but predictive factors of RI have been described in only one study, with no data about cryoglobulin concentration [20]. The aim of our study was to identify predictive factors of RI in cryoglobulinaemia in a large monocentre cohort of patients with cryoglobulinaemia. Patients Clinical charts of a consecutive series of 153 patients with cryoglobulinaemia (from January 2012 to December 2014) from different medical departments of the University Hospitals were retrospectively reviewed. Inclusion criteria were: (i) cryoglobulinaemia with a cryoglobulin total Ig concentration >20 mg/L and (ii) age >18 years. Exclusion criteria were: (i) RI with a possible cause other than cryoglobulinaemia and (ii) missing data [absence of urinalysis: proteinuria, haematuria; absence of complement exploration or rheumatoid factor (RF) assay; incomplete clinical data]. Renal involvement RI was defined either histologically or biologically if no cause of RI other than cryoglobulinaemia was present. The pathological criterion was a biopsy-proven cryoglobulinaemic GN. The biological criteria were proteinuria (protein-to-creatinine ratio >0.5 g/g or proteinuria >0.5 g/24 h) and/or haematuria (>10 red blood cells/mm3) and/or an estimated glomerular filtration rate (eGFR) <60 mL/min/1.73 m2 as calculated by the Chronic Kidney Disease Epidemiology Collaboration equation. In cases of RI without renal biopsy, patients were excluded if there was a possible cause of RI other than cryoglobulinaemia (connective tissue diseases, mostly systemic lupus erythematosus and Sjögren syndrome; diabetes if uncontrolled or with other microangiopathic complications). Patients with cryoglobulinaemia and a total serum cryoglobulin concentration >20 mg/L without RI were considered as controls. The flow chart of the study is shown in Figure 1. Hypertension was defined by a blood pressure >140/90 mmHg. Laboratory analyses Cryoglobulin detection, purification, and characterization were performed in the immunology laboratory of the Hospices Civils de Lyon according to the protocol previously described by Kolopp-Sarda and Miossec [21]. Blood samples were collected by venipuncture for complement exploration and cryoglobulin detection, kept for 2 h at 37°C for complete coagulation, and rapidly transported to the laboratory at 37°C.
Samples were centrifuged (2200 g, 15 min) and the serum was decanted and stored at 4°C for 7 days. Visual observation at day 7 allowed the detection of any cryoprecipitate. In that case, the cryoprecipitate was isolated by centrifugation at +4°C (3500 r.p.m., 2200 g, 15 min) and purified by three washes with cold phosphate-buffered saline (PBS, pH 7.4, +4°C) to remove serum and proteins that had not precipitated. Pellets were then dissolved at 37°C in 500 µL PBS and stored at 37°C for further analyses. Characterization of the cryoprecipitate was performed by electrophoresis-immunofixation to type cryoglobulins with anti-γ, anti-α, anti-μ, anti-κ and anti-λ antisera (SAS-3®, Helena Bioscience, Gateshead, UK). In dissolved cryoprecipitate stored at 37°C, IgG, IgM and IgA concentrations, as well as RF activity (IgM anti-IgG), were assayed by immunonephelometry (BN ProSpec®, Siemens, Marburg, Germany; reagents for low concentrations). The total Ig concentration in the cryoprecipitate was the sum of the IgG, IgM and IgA concentrations. A positive threshold of cryoglobulin total Ig concentration >20 mg/L is defined in our laboratory, in line with our technical practice and with the values described by Vermeersch et al. [22]. Serum RF (normal <20 IU/mL), complement C3 (normal range 0.82-1.60 g/L) and C4 (normal range 0.14-0.32 g/L) were quantified by immunonephelometry (BN ProSpec®), and complement functional activity (CH50) was quantified on SPAplus® (Binding Site, Birmingham, UK; normal range 41-95 U/mL).

Histological analysis
Biopsy specimens were available for 14 patients. Light microscopy was available for all of them, and immunofluorescence was missing for one patient. Renal biopsies were not processed by electron microscopy.

Statistical analysis
Patient characteristics are reported as number (percentage) for categorical variables and mean ± SD for continuous variables. Comparison of two groups involved Student's t-test for quantitative variables and the χ² or Fisher's exact test, as appropriate, for categorical variables. We reported group comparisons with odds ratios [OR, 95% confidence intervals (95% CI)]. To identify independent predictors of RI, we used a stepwise procedure to select, among parameters with P < 0.20 in univariable analysis, those to retain in the final multivariable logistic regression model. Results are expressed as OR (95% CI). Statistical analysis was done on SPSS v21.0 (SPSS Inc., Chicago, IL, USA). P < 0.05 was considered statistically significant.

RESULTS

Demographic data
The main features of the study groups are listed in Tables 1 and 2. The mean age of patients was 54.6 ± 18.2 years, with no difference between the two groups (P = 0.57). Male patients were more prevalent in the RI group (60% versus 27% in the control group, P < 0.0001). Mean follow-up from cryoglobulinaemia diagnosis was similar in both groups: 2.0 ± 4.8 years (range 0-26.4 years) in the RI group and 1.2 ± 2.5 years (range 0-18.2 years) in the control group. Diabetes was present in 11% of the population, with a higher proportion in cases of RI (29% versus 4% in the control group, P < 0.0001).

Clinical symptoms
Purpura was the most frequent symptom (27% of the patients). Its prevalence was greater in the RI group than in the control group (38% versus 22%, P = 0.048). The next most common symptoms were, respectively, arthralgias (20%), neurological involvement (14%) and acrosyndrome (12%). Regarding aetiologies, the majority (75%) of cryoglobulinaemias were secondary, with only 14% of cases related to hepatitis C virus.
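The group comparisons reported as OR (95% CI) follow the standard 2×2 construction, OR = (a·d)/(b·c), with a Wald interval on the log scale. A small sketch using illustrative counts reconstructed by rounding the purpura percentages above (38% of the 45 RI patients ≈ 17; 22% of the 108 controls ≈ 24 — these are not the study's raw counts):

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Illustrative counts for purpura (RI group n=45, controls n=108)
or_, (lo, hi) = odds_ratio_ci(a=17, b=28, c=24, d=84)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")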
Autoimmune diseases were the most frequent aetiology (48%), with a higher frequency in the control group than in the RI group (58% versus 27%, P = 0.001). Lymphoproliferative disorders were more common in cases of RI (22% versus 9% in the absence of RI, P = 0.03).

Cryoglobulin characteristics
Type II cryoglobulinaemia was the main type (65%), followed by Type III (28%) and Type I (7%). IgG and IgM cryoglobulins were found in 92% and 91% of patients, respectively, whereas IgA cryoglobulin was found in only 8% of patients. IgA cryoglobulin was more common in cases of RI (22% versus 3% in the control group, P = 0.0003). Monoclonal IgM kappa was the most common monoclonal cryoglobulin (67%), whereas among the polyclonal cryoglobulins, IgG was the most common (84%). IgM kappa and IgG kappa were more frequent in cases of RI than in the absence of RI (60% and 22%, versus 43% and 7%; P = 0.05 and P = 0.005, respectively). The median total cryoglobulin concentration was 45 mg/L, and was higher in cases of RI than in the absence of RI (137 mg/L versus 39 mg/L, P = 0.001). Figure 2 shows the distribution of the total cryoglobulin concentration (labelled cryoglobulinaemia level in the figure) in patients with or without RI.

Complement and RF
The majority of patients had low serum C4 and CH50 but a normal C3, with no difference between the two groups except for CH50 (64% of low CH50 in cases of RI versus 46% in the control group, P = 0.041). IgM anti-IgG RF was present in the cryoprecipitate in 29% of patients, with a higher proportion in the RI group (42%) than in the control group (24%, P = 0.03).

Renal involvement
Biological results. Mean eGFR was 54 mL/min/1.73 m² in the RI group and 95 mL/min/1.73 m² in the control group. In cases of RI, haematuria was present in 71% of patients, with a median of 40/mm³, and proteinuria was present in 62% of patients, with a median of 0.79 g/24 h (or g/g). In the RI group, patients with kidney biopsy had a more severe RI than patients without kidney biopsy.
Histological results.

DISCUSSION

This study aimed to describe the clinical, biological and histological characteristics of patients with cryoglobulinaemia with or without RI in a large monocentre cohort between 2012 and 2014, and to identify predictive factors of RI. This is the first study focusing on the predictive factors for developing RI in the presence of cryoglobulinaemia. In 2011, De Vita et al. [23] proposed preliminary classification criteria for cryoglobulinaemic vasculitis: the diagnosis was made if at least two of the three items (questionnaire, clinical, laboratory) were positive. These items were different from our results. The proposed classification included several clinical symptoms, reduced serum C4, positive serum RF and a positive serum monoclonal component as criteria for the diagnosis of cryoglobulinaemic vasculitis, whereas none of these features was independently associated with RI in our study. Moreover, the predictive factors of RI defined in our study were not present in this classification. This could suggest a specific pathophysiology for RI, different from that driving the onset of cryoglobulinaemic vasculitis. First, our results show that a higher cryoglobulin level is associated with a higher risk of RI: for each increase of 100 mg/L in the total cryoglobulin level, the risk of RI increased by 10%. The importance of the cryoglobulin level was not recognized, either in the classification for cryoglobulinaemic vasculitis [23] or in the study of Hurwitz et al. [24] in 1975, which did not find any difference for this criterion between patients with or without RI.
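The modelling strategy described in the Methods (univariable screening at P < 0.20, then a stepwise multivariable logistic regression) can be sketched as follows. The authors used SPSS; this Python version with synthetic data and hypothetical column names is only meant to make the two-step procedure concrete:

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "type_ii":   rng.integers(0, 2, n),
    "cryo_mg_l": rng.gamma(2.0, 50.0, n),
    "igg_kappa": rng.integers(0, 2, n),
    "diabetes":  rng.integers(0, 2, n),
})
# Synthetic outcome so the example runs end to end
logit = (-2.5 + 0.8 * df["type_ii"] + 0.001 * df["cryo_mg_l"]
         + 1.0 * df["igg_kappa"] + 1.2 * df["diabetes"])
df["ri"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

candidates = ["type_ii", "cryo_mg_l", "igg_kappa", "diabetes"]

# Step 1: univariable screening at P < 0.20
kept = [v for v in candidates
        if sm.Logit(df["ri"], sm.add_constant(df[[v]])).fit(disp=0).pvalues[v] < 0.20]

# Step 2: multivariable logistic regression on the retained predictors
model = sm.Logit(df["ri"], sm.add_constant(df[kept])).fit(disp=0)
print(np.exp(model.params))  # adjusted odds ratios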
The only previous study on predictive factors of RI did not analyse the cryoglobulin level [20]. However, a similar figure was found by Trejo et al. [13] in a cohort of 443 patients in Spain: patients with a cryocrit >5% had a higher frequency of GN. Our study strongly suggests a dose-dependent relationship underlying this association. In rheumatological surveys, patients with Type III cryoglobulinaemia outnumbered those with Type II cryoglobulinaemia [17,25]. Conversely, surveys based on RI showed a greater prevalence of Type II mixed cryoglobulins [8,11,20]. We therefore confirm these analyses, showing a statistically significant association between Type II cryoglobulinaemia and RI in the multivariable analysis (Table 4). In our study, the presence of an IgG kappa cryoglobulin was independently associated with the risk of developing RI. A link between IgG monoclonal cryoglobulins and severe manifestations of cryoglobulinaemia, including RI, was reported by Néel et al. [19] in 2014 in Type I cryoglobulinaemia. However, our results indicate a special role of the monoclonal IgG kappa component in all types of cryoglobulinaemia, while polyclonal IgG, or the presence of an IgG cryoglobulin per se, was not associated with RI. The role of IgA cryoglobulins in the RI of cryoglobulinaemia requires investigation in further studies. Indeed, the presence of IgA cryoglobulins was strongly associated with RI in the univariate analysis, but this result was not found in the multivariate analysis because of associations between IgA cryoglobulin and three of the predictive factors of RI (presence of IgG kappa cryoglobulin, Type II cryoglobulinaemia and male gender). These associations are not described in the literature, although IgA is known to have an important role in some glomerulopathies. Perhaps, then, a lack of statistical power explains the absence of a statistically significant difference. The high frequency of Type II diabetes in cryoglobulinaemic patients has been shown by Antonelli et al. [26], and is confirmed in our study. Moreover, diabetes was independently associated with the risk of developing RI in our study. To avoid including patients with diabetic nephropathy in the RI group, we excluded patients with uncontrolled diabetes or with other microangiopathic complications. The analysis of diabetic patients with RI shows that a high proportion had haematuria (62%, similar to the 71% of the total RI group), whereas haematuria is infrequent in diabetic nephropathy [27]. Furthermore, among patients with a biopsy-proven cryoglobulinaemic RI, 21% had diabetes, which is not different from the 29% in the total RI group (Table 1), and widely superior to the 4% of diabetic patients in the control group (Table 1). These results are in favour of an absence, or a very low proportion, of patients with diabetic nephropathy in the RI group, although we cannot exclude RI due to diabetes rather than cryoglobulinaemia. Patients with diabetes are therefore possibly more at risk of developing RI in cases of cryoglobulinaemia, but this notion requires further studies. No particular aetiology independently predicted RI; lymphoproliferative disorders were more common in cases of RI (22% versus 9%), but this difference was not statistically significant.
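The "10% per 100 mg/L" figure reported above corresponds to an OR of roughly 1.10 per 100 mg/L increment; because ORs are multiplicative, the implied OR for a larger increment is this value raised to the corresponding power. A quick check of the arithmetic (the 500 mg/L increment is an arbitrary illustration):

or_per_100 = 1.10          # OR per 100 mg/L increase (from the text)
delta = 500                # illustrative increment in mg/L
or_delta = or_per_100 ** (delta / 100)
print(round(or_delta, 2))  # ~1.61: roughly a 61% increase in odds per 500 mg/L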
The existing literature on cryoglobulinaemia has extensively studied hepatitis C virus (HCV)-related cryoglobulinaemias [7,18,28-30], but our study showed different results concerning cryoglobulinaemia-related diseases, with a low proportion of HCV (14% of the total population, without any difference between the two groups). Beddhu et al. [8] studied 17 patients with cryoglobulinaemia and RI; 11 patients (65%) had an HCV infection. In a Barcelona cohort of 441 patients with cryoglobulinaemia, Trejo et al. [13] found a proportion of 71% of patients with an HCV infection, while only 14% of patients had HCV in our study. This low proportion of HCV can be explained by the geographical characteristics of our monocentre cohort, where the prevalence of HCV infection is much lower than in some Mediterranean populations [31]. It also confirms the current trend towards a decreasing prevalence of HCV infection in France since the introduction of effective antiviral treatments [32]. In our study, because of the exclusion of 48 patients with a possible cause of RI other than cryoglobulinaemia, some results about aetiologies should be considered with caution: most of these patients (31 of the 48) had a connective tissue disease, possibly leading to an underestimation of the proportion of patients with connective tissue disease in the RI group. For the same reason, the correlation we found between male gender and RI should be interpreted with caution, because 28 of these 48 excluded patients were women; male gender is therefore potentially overestimated in the RI group. Clinically, no single symptom was an independent predictive factor of RI. Purpura was more frequent in the RI group, but this difference was not statistically significant in the multivariate analysis. Hypertension was much more present in cases of RI, but seems to be a consequence of RI. Indeed, several studies have described a high prevalence of hypertension in cryoglobulinaemic patients with RI: 75% and 64% of patients in two US studies [8,9], 80% in a French study [33] and 82% in an Italian study [11]. Furthermore, in cases of RI, blockers of the renin-angiotensin system are sometimes stopped, which may partly explain the elevation of blood pressure. However, treatments were not recorded; it is therefore impossible to analyse the influence of drug withdrawal on blood pressure in our study. Hypertension could be the cause of the RI in some patients. However, only 2 of the 45 patients with RI had neither haematuria nor proteinuria, which is hardly compatible with a high proportion of hypertensive nephrosclerosis, and the mean eGFR in the RI group was quite high (54 mL/min/1.73 m²), which does not support a high proportion of secondary focal glomerulosclerosis. Complement analyses were similar in both groups, with frequently normal C3 but decreased C4 and CH50. These are classic findings for the complement system in cryoglobulinaemia, as described in the 1970s [34-36]. However, the fact that low C4 was not an independent predictive factor of RI supports the hypothesis that, in cryoglobulinaemia, low C4 is not the consequence but the cause of the disease. Indeed, Menegatti et al. [37] showed in 2016 that polymorphisms of the C4 gene were less frequent in patients with mixed cryoglobulinaemia than in healthy subjects. RF presence was not an independent predictive factor of RI, whereas it is a predictive factor of cryoglobulinaemic vasculitis [23].
Morphological analyses of kidney biopsies showed that MPGN was the most frequent histopathological pattern, as described in earlier studies [7,8,10,11,29,33], but extracapillary crescents were present in 35% of biopsies, which is quite high [8] and partly responsible for the RI. Deposits of Ig were frequent, and identical to those of the cryoprecipitates in most cases. Histological analysis was only available for 14 patients. However, in cases of biological RI, we excluded patients with possible causes of RI other than cryoglobulinaemia, such as connective tissue disease and diabetic nephropathy. Moreover, the population of the RI group had biological renal characteristics similar to those of patients from previous studies of RI in cryoglobulinaemia [6,8-11], with a very high proportion of haematuria and/or proteinuria (only two patients had RI because of an isolated low eGFR without haematuria or proteinuria). The proportion of patients with RI and without kidney biopsy can be explained by the fact that RI frequently appears several years after the diagnosis of cryoglobulinaemia [7,11], and in patients at high risk from biopsy (e.g. those on anticoagulant drugs, or elderly patients). In cases of typical and non-severe RI, renal biopsy is deemed unnecessary in patients with already known cryoglobulinaemia. We compared patients with and without biopsy in cases of RI, and indeed, patients with kidney biopsy had a more severe RI. There are caveats to our study: the small sample size yielded large confidence intervals, which reduces the robustness of the analysis. Furthermore, the monocentre source of data may limit the generalizability of our results, which should be confirmed in further studies. In summary, we identified several predictive factors for developing RI in cases of cryoglobulinaemia: Type II cryoglobulinaemia, a high cryoglobulin concentration, the presence of an IgG kappa monoclonal component and diabetes. These factors are different from the diagnostic criteria for cryoglobulinaemic vasculitis, which could suggest a specific RI pathophysiology. In patients with predictive factors for RI at diagnosis, kidney function monitoring and nephroprotection should be intensified.
2019-03-18T14:04:16.558Z
2018-11-09T00:00:00.000
{ "year": 2018, "sha1": "982dcf9590912d792e0f26f371fc2fe581571527", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/ckj/article-pdf/12/3/365/28752525/sfy096.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "982dcf9590912d792e0f26f371fc2fe581571527", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
102573650
pes2o/s2orc
v3-fos-license
Raise of Nano-Fertilizer Era: Effect of Nano Scale Zinc Oxide Particles on the Germination, Growth and Yield of Tomato (Solanum lycopersicum)

Abstract

Globally, soils are zinc (Zn) deficient and plants are not in a position to accumulate enough Zn in edible parts to meet the human nutrition requirement. Nanotechnology is one of the most important tools in modern agriculture. Nano agriculture involves the use of nano sized particles with unique properties (increased uptake by plants, as they are small in size and have a high rate of penetration through the plant cell membrane) to boost crop productivity. In the present study, an attempt was made to study the effect of nano zinc oxide particles (ZnO NPs) for improving yield and Zn content in the tomato plant. Initially, seed priming concentrations were standardized in vitro using ZnO NPs (400 ppm) and granular zinc sulphate (ZnSO4) (800 ppm). Further, the standardized seed priming concentrations with different combinations of treatment, such as seed priming, seed priming + foliar spray and foliar spray, were studied under field conditions to evaluate their effect on biomass and Zn accumulation. The results obtained, based on the physiological and yield parameters, showed that the use of ZnO NP fertilizers through any of the methods of application has a significant positive effect compared to zinc sulphate. ICP-OES analysis of the digested plant material revealed that uptake of ZnO NPs is higher than that of granular ZnSO4. The present study addresses the potential of nano scale particles on the plant system and opens an avenue for their potential use as future "nano fertilizers". Thus, nanotechnology is one of the technologies where a lot of scope exists to improve plant nutrition.

Introduction

The importance of agriculture to all human societies is characterized more than ever by the increasing world population. The first and most important need of every human is access to food, and the food supply for humans is associated with agriculture directly or indirectly. The world's population will grow to an estimated 8 billion people by 2025 and 9 billion by 2050, and it is widely recognized that global agricultural productivity must increase to feed a rapidly growing world population (FAO/WHO, 2002). Vegetables and fruits are perishables, and in the absence of effective storage, preservation and transportation, their prices are unstable and their availability uncertain. In addition to the above limitations, the diets of the average Indian household did not show any significant improvement over the last few decades of the century. A challenge for global food and nutrition security is to feed the world population with nourishing food (Quasem et al., 2009; Ghaly, 2009). Hence, emphasis should be laid on the production of high quality food with the required level of nutrients and proteins (Pijls et al., 2009). To meet this increasing demand, researchers are trying to develop an efficient and ecofriendly production technology based on innovative technologies. The issue of micronutrient deficiency is related to food security (Meenakshi et al., 2010). Micronutrient deficiencies in human beings as well as crop plants are difficult to diagnose, and accordingly the problem is termed "hidden hunger" (Stein et al., 2008). This hidden hunger may cause nearly 40 per cent reductions in crop productivity, and it is estimated to affect more than half of the global population. Micronutrient deficiency in general refers to Fe, Zn, Se, I, Cu, Ca and Mg (Zhao and Mcgrath, 2009); among them, zinc (Zn) deficiency is the most widespread next to iron, vitamin A and iodine. The WHO reported that Zn deficiency ranks as the fifth risk factor for causing diseases among children in developing countries. Based on analysis of diet composition and nutritional needs, it has been estimated that 49 per cent of the world's population (equivalent to 3 billion people) is at risk of suffering from Zn deficiency. Until recent times, soil fertilization was the only way to meet the mineral requirements of crop plants. However, several problems exist, such as the need for large quantities of fertilizer, fixation in soil and slow uptake by plants. Zn has specific physiological functions in all living systems, such as i) maintenance of the structural and functional integrity of biological membranes, ii) acting as a cofactor for more than 300 enzymes, iii) detoxification of highly toxic oxygen free radicals and iv) contribution to protein synthesis and gene expression under normal and stress conditions. Among all metals, Zn is needed by the largest number of proteins; at least 2800 proteins are Zn dependent, making up nearly 10 per cent of proteomes in eukaryotes. Zn has a vital role in several body functions, such as vision, taste perception, cognition, cell reproduction, growth and immunity, and resistance to some infectious diseases such as diarrhoea (Black, 1998; Shankar and Prasad, 1998). Most Indian soils are found to be Zn deficient; hence the food crops grown in those soils contain less Zn. There is significant genetic variability among crop species in the ability to maintain growth and yield under Zn deficient conditions (Hacisalihoglu et al., 2003). Significant variation across crop species and genotypes also exists in their ability for Zn uptake, its sequestration and its transport to edible parts. In order to overcome Zn disorders, several strategies are being employed, including supplementation, fortification, diversification and biofortification. Among these strategies, biofortification of food crops with Zn is considered cheaper and more sustainable. The simplest of these techniques to increase the Zn content of plants is through the addition of the appropriate mineral as an inorganic compound to the fertilizer.
This method has been successful in many instances but depends on the crop species, the cultivar, the mineral itself and the quality and properties of the soil, making the strategy difficult to apply generally. The major advantages of this method are that it is simple, relatively inexpensive, and enhancement can be achieved very rapidly. However, Zn being a heavy metal, indiscriminate application of Zn fertilizers to soil over the years will lead to accumulation in the soil to levels toxic to plants. With the current emphasis on Zn in agriculture, care should be taken not to get overzealous with Zn applications. Therefore, an efficient mechanism to reduce the amount of Zn fertilizer applied to soil or foliage without compromising plant growth and yield is very essential. Hence, in recent years the application of nano scale particles of Zn has been preferred to enhance the agronomic effectiveness of Zn fertilizers. Now, after years of green revolution and a decline in the ratio of agricultural products to world population growth, it is obvious that there is a necessity of employing new technologies in the agriculture industry more than ever. Modern technologies such as bio- and nanotechnologies can play an important role in increasing production and improving the quality of food produced by farmers. Many believe that modern technologies will secure growing world food needs as well as deliver a huge range of environmental, health and economic advantages (Wheeler, 2005). Nanotechnology is one of the most important tools in modern agriculture, and agri-food nanotechnology is anticipated to become a driving economic force in the near future. Nano agriculture currently focuses on target farming that involves the use of nano sized particles with unique properties to boost crop and livestock productivity. The development of nano materials could open up novel applications in plant biotechnology and soil science. It is anticipated that very soon the industrial production of manufactured nano particles will increase manifold and they will be released into the market. However, along with significant potential benefits, there are considerable uncertainties with regard to potential risks to the environment and human health that need to be clarified. The current situation in nanotechnology is one in which there is great potential for benefit but an equally high uncertainty in associated risks. There are grounds for both optimism and pessimism. Pessimism arises because of the huge discrepancy between the scale of research being performed on the invention of materials such as nano particles and that on their associated risks. Optimism arises because of the uniquely forward-looking attitude of policy makers and regulators. The unusual properties of nano particles may result in substantially different environmental fate and behaviour than their bulk counterparts, but very few observations have been made on higher plant growth and yield. Nano particles are typically spherical or faceted metal particles <100 nm in size. These nanoparticles have a high surface area (30-50 m²/g), high activity, a better catalytic surface, rapid chemical reaction, rapid dispersibility and abundant water adsorption. Thus, the implementation of particles in the nanometer range can serve as a potential alternative to overcome the limitations of presently available fertilizers. Nano fertilizers may therefore increase the efficiency of nutrient uptake, enhance yield and nutrient content in the edible parts, and also minimize accumulation in the soil.
Thus, the present study investigates the effect of ZnO NPs on tomato plants with a view to their potential use as future "nano fertilizers".

MATERIALS AND METHODS

Preparation of particle suspension
Chelated bulk ZnSO4 was used as a reference Zn source. The materials were suspended directly in deionised water and dispersed by ultrasonic vibration (100 W, 40 kHz) for 30 min. Solutions of different concentrations (0, 100, 200, 400, 800, 1000, 1500, 2000 ppm) were prepared. Magnetic bars were placed in the suspensions for stirring, to avoid aggregation of the particles. The nano scale suspensions appeared as clear solutions, as expected. The pH of all the prepared suspensions was found to be 6.8-7.0. A control was also maintained, corresponding to pure water.

Standardization of Zn concentration for seed priming
Tomato seeds were treated with 100 mL of Zn solutions/suspensions of granular ZnSO4 and ZnO NPs for three hours. After imbibition, seeds were thoroughly washed under running water and allowed to germinate. Ten seeds were placed in each petri dish (100 mm × 15 mm) with a single layer of sterilized filter paper, and 5 mL of water was added (as per the recommendations of the International Seed Testing Association, 1976). Germination percentage, root length and shoot length were then recorded; using these values, the Seedling Vigour Index was calculated by the formula described by Abdul-Baki and Anderson (1973):
Seed Vigour Index = Germination per cent × (root length + shoot length)

Field experimentation
The experiment was conducted in the field of the Department of Crop Physiology, UAS, GKVK, with three different treatments based on the method of application of the Zn sources: seed priming only, seed priming + foliar application, and foliar application only. The Zn concentrations in nano ZnO and granular ZnSO4 standardized under lab conditions were used as the initial seed priming concentrations, and the foliar spray was given at 30 DAS (Days After Sowing) in tomato. The concentration maintained for the foliar spray was 1 per cent for granular ZnSO4 and 0.5 per cent for nano ZnO. Physiological parameters were measured at 45 DAS. The following parameters were measured: number of branches, plant height, root length, SCMR (SPAD Chlorophyll Meter Reading), Relative Water Content (RWC) and Specific Leaf Area (SLA).

Relative Water Content (RWC)
To determine the effect of the different treatments in the field experiment, RWC was measured. Leaf discs were obtained from plants from replicated treatments, and the fresh weight was determined. Discs were then floated on deionised water for 5 h under low irradiance, and the turgid tissue was quickly blot-dried with tissue paper prior to determining the turgid weight. Dry weight was then determined after oven drying at 70°C for 48 h. The relative water content was calculated using the formula of Gui-Rui et al. (2000).

Yield and estimation of zinc
Yield per plant (g) was recorded. Zinc content was analysed in different plant parts (leaf, root and fruit) using an Inductively Coupled Plasma - Optical Emission Spectrometer (ICP-OES).

RESULTS AND DISCUSSION

Standardization of Zn concentration for seed priming
Tomato seeds responded variably to the treatment at the various concentrations of both bulk ZnSO4 and nano scale ZnO particles. Seeds treated with 400 ppm nano ZnO recorded the highest germination (93.33%) and Seedling Vigour Index (919.80).
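Both derived quantities used here reduce to one-line arithmetic. A minimal sketch follows; note that the source cites Gui-Rui et al. (2000) without printing the RWC formula, so the standard form RWC = (FW − DW)/(TW − DW) × 100 is an assumption, and the example weights are hypothetical:

def seedling_vigour_index(germination_pct: float, root_cm: float, shoot_cm: float) -> float:
    """Abdul-Baki and Anderson (1973): germination % x (root + shoot length)."""
    return germination_pct * (root_cm + shoot_cm)

def relative_water_content(fresh_g: float, turgid_g: float, dry_g: float) -> float:
    """Assumed standard RWC formula: (FW - DW) / (TW - DW) * 100."""
    return (fresh_g - dry_g) / (turgid_g - dry_g) * 100

# e.g. 93.33% germination with ~5.10 cm root + ~4.75 cm shoot gives an SVI near 919.8
print(round(seedling_vigour_index(93.33, 5.10, 4.75), 1))
print(round(relative_water_content(0.50, 0.62, 0.10), 1))  # ~76.9%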
Table 3. Interaction effect of different Zn sources and methods of application on yield and Zn content in tomato.
The results from the bulk ZnSO4 treated seeds were not promising (Table 1). Among the different nano scale ZnO concentrations, 400 ppm showed the maximum, and higher concentrations showed a decreased Seedling Vigour Index.

Field experiment
The observations recorded at 45 DAS reveal the promotive effect of nano scale ZnO. The results show that there is a significant difference between the two Zn sources and also between the methods of application. The greatest root length was observed in the nano ZnO T2 treatment (46.07 cm), followed by nano ZnO T3 (36.80 cm). With ZnSO4, all the treatments had comparatively lower root length than with nano ZnO, but all the Zn treated plants showed significantly higher root length compared to the control (26.10 cm). ZnO NPs under all methods of application proved to be effective in improving both root length and root dry weight, as represented in Table 2. These results confirm that the physiological effects were related to the nanometer sized particles.

Yield and Zn content
The results revealed that the response of tomato to nano scale ZnO was highly significant. In comparison with granular zinc sulphate across the methods of application, ZnO NPs with seed priming + foliar spray recorded the highest yield, with the highest Zn accumulation in fruit (6.93 mg/100 g), leaf (6.87 mg/100 g) and root (3.67 mg/100 g). Table 3 indicates the significant increase in yield and zinc content with nano ZnO over granular ZnSO4 and the control. Owing to the promotive effects of nano ZnO, plant growth, yield and zinc content significantly increased over ZnSO4 and the control. Such effects can be due to higher seedling vigour and early vegetative growth. Nano particles (NPs), with their small size and large surface area, are expected to be ideal candidates for use as fertilizers. Currently, the use of nano materials has expanded into every field of science, including agriculture. It has been stated that application of micronutrient fertilizers in the form of NPs is an important route to release the required nutrients gradually and in a controlled way, which is essential to mitigate the problem of fertilizer pollution (Naderi and Abedi, 2012). This is because when materials are transformed to the nano scale, they change their physical, chemical and biological characteristics as well as their catalytic properties, and their chemical and biological activities even increase (Mazaherinia et al., 2010). Micronutrients in the form of NPs can be used in crop production to increase yield (Reynolds, 2002). Recently, a positive impact of nano ZnO on the germination, growth and yield of peanut has been reported (Prasad et al., 2012). It has been reported from pot culture experiments on wheat plants that increasing seed zinc content from 0.25 μg per seed to 0.70 μg per seed significantly improved root and shoot growth under Zn deficiency. Hence it may be concluded that a high Zn content in the seed could act as a starter fertilizer. Ajouri et al. (2004) reported that seed priming with Zn was very effective in improving seed germination and seedling development in barley. These results may indicate that a high Zn concentration in seeds has very important physiological roles during seed germination and early seedling growth. In our study, we standardized the Zn concentration for tomato seeds using nano ZnO.
At 400 ppm nano ZnO, tomato seeds showed the significantly highest germination percentage, root length, shoot length and seedling vigour index (Table 1). A significant increment in germination, shoot length, root length and seedling vigour index was observed at the standardized nano ZnO concentration compared to common ZnSO4 at the same concentration. A study on groundnut seeds with nano ZnO particles at a concentration of 1000 ppm also reported a significant increase in germination, shoot length, root length and vigour index (Prasad et al., 2012). In a study on mung (Vigna radiata), similar results were found with nano ZnO seed priming, and it was also observed that beyond the optimum concentration growth was inhibited (Pramod et al., 2011). Another report showed an effect of ZnO nanoparticles on seed germination and root growth in gram (Cicer arietinum) seeds through the reactivity of phytohormones, especially indole acetic acid (IAA), involved in phytostimulatory actions: due to oxygen vacancies, the oxygen-deficient, i.e. zinc-rich, ZnO nano particles increased the level of IAA in roots (sprouts), which in turn indicates an increase in the growth rate of plants (Avinash and Pandey, 2010). Reports on mung (Vigna radiata) and gram (Cicer arietinum) using nano ZnO particles in the agar method found effects on the growth of mung and gram seedlings at different concentrations; the maximum effect was found at 20 ppm for mung and 1 ppm for gram (Pramod Mahajan et al., 2011). The method of application of fertilizer is most important with regard to uptake and translocation into the different parts of the plant. Foliar fertilization is an important tool for the sustainable and productive management of crops. The ability of plant leaves to absorb water and nutrients was recognized approximately three centuries ago (Fernández and Eichert, 2009). Nutrient solutions can be applied to the foliage of plants as an alternative means of fertilizing crops such as grass. Spraying with 0.5 per cent ZnSO4 gave significantly higher peanut pod yield compared to no spraying, whereas soil application of 10 kg/ha ZnSO4 during sowing gave a yield on par with control plants without ZnSO4 application. This indicates that groundnut responds to foliar spray but not to soil application (Channabasavanna and Setty, 1993). The effectiveness of various synthetic and natural chelates has been widely investigated (Alvarez and Gonzalez, 2006; Gonzalez et al., 2007; Prasad and Sinha, 1981). Apart from their effectiveness, the application of chelates is generally expensive and may pose a potential leaching risk, because the more mobile the chelate, or the less biodegradable the carrier, the greater the risk of leaching (Gonzalez et al., 2007). Zinc sulphate, which is highly soluble, can easily be taken up by plants but is known to fall off quickly: the retention time in the plant system is low, so the bioavailability of nutrients over a long period is not assured with the use of ZnSO4. If the plants are soft or sensitive, and if conditions are harsh, such as high temperatures, ZnSO4, which has a large salt index, may burn the plant. Moreover, the zinc content in the mixture is usually very low (9-12%) (Brown et al., 1993; Fageria et al., 2002). In this study, different treatments with nano zinc oxide particles and ZnSO4, namely seed priming, foliar application, and seed priming with foliar application, were imposed to examine the treatment effect.
The results suggest that the nano ZnO form is absorbed by plants to a larger extent than bulk ZnSO4. Nano ZnO has proved to be more effective in enhancing productivity and the absorption of Zn because of its high surface-area-to-volume ratio. Better growth and 12-14 days earlier flowering were observed in nano ZnO treated onion plants at 20 and 30 µg/mL compared to control onion plants (Laware, 2014).
2019-04-09T13:10:51.545Z
2018-05-20T00:00:00.000
{ "year": 2018, "sha1": "0421b58e8a9788f15e5f1680d94868e93e8b29f0", "oa_license": null, "oa_url": "https://www.ijcmas.com/7-5-2018/Hajira%20Khanm,%20et%20al.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "463e3b3532d438557fca513e8e139a08ae1bb4f5", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science" ], "extfieldsofstudy": [ "Chemistry" ] }
229631562
pes2o/s2orc
v3-fos-license
Entrepreneurship Education at SMP IT LHI Yogyakarta Implemented Through Life Skill Programs as a Strategy to Deal With the Challenges of Industrial Era 4.0

The industry 4.0 era poses its own challenges in the field of education. Education must be able to equip students with competencies to deal with the disruptive innovation phenomenon. Schools, as educational institutions, must be able to equip students to face these challenges by facilitating the internalization of students' skills so that they become competitive human beings. The required skills can be improved through a life skill program. High human competitiveness in a country will increase the country's competitiveness in the international arena. A country that dreams of advancing can develop entrepreneurship, because entrepreneurship is one of the keys to a country's advancement. Entrepreneurship development can be performed through education channels, i.e. by making entrepreneurship an intracurricular and extracurricular activity. Entrepreneurial education is education that applies the principles and methodologies enhancing the formation of life skills for students through an integrated curriculum developed at schools. This research aims at examining the entrepreneurial education strategy through life skills at SMP IT LHI Yogyakarta. This research is a case study. Data collection techniques include observations, interviews and documentation. Data validity is established using triangulation techniques. The data analysis technique utilized in this research is the interactive analysis of the Miles and Huberman model. The results show that entrepreneurship education implemented through the life skill program is performed by determining the life skill program in accordance with the specialization of students. In addition, the implementation of entrepreneurship education takes the form of life skill clubs, student expos, bazaars and market days.

Keywords—life skill, entrepreneurship education, industry 4.0

I. INTRODUCTION

The industrial revolution 4.0 era is one that prioritizes computerization. Computerization is closely related to the cyber world, and automation affects not only industry but all aspects of human life, including social, cultural, economic, educational and other aspects. The industrial 4.0 era brings its own challenges to the field of education. Education must equip students with skills in order to be able to deal with the disruptive innovation phenomenon. Schools, as educational institutions, must be able to arm students to face these challenges by facilitating the internalization of students' skills so that they become competitive human beings. The required skills are expected to provide life skills for students, and life skills can be improved through a life skill program at school. This is in line with Hodge, who states that life skills may bring us success in various sectors, and there are many basic life skills that work across sectors (e.g., home, school, sport, peer group environments) [1]. Relating that statement to this research, it can be concluded that by having life skills, students will have the possibility of success in various fields of life. Toch, in Meyer, explains that President Obama has argued that we need to know whether students possess skills required for the 21st century, including problem solving, critical thinking, creativity, and entrepreneurship [2].
Based on this view, it can be concluded that students need to have the skills required to deal with the 21st century, one of which is entrepreneurship. Entrepreneurship is important for a country. Entrepreneurship has experienced rapid development in the past three decades, as stated by Lin and Xu: the world has witnessed the rapid development of entrepreneurship education over the past three decades, and this development is closely related to government policies [3]. Of course, if Indonesia wants to increase the number of entrepreneurs, government policy must also support the development of entrepreneurship education. High human competitiveness in a country will increase the country's competitiveness in the international world. A country that dreams of advancing can develop entrepreneurship, because entrepreneurship is one of the keys to a country's advancement. As stated by Ciputra, along with the existence of the ASEAN community, Indonesia must develop entrepreneurship so that it can advance; otherwise, Indonesia will be less competitive [4].

II. RELATED WORK

The importance of entrepreneurship is supported by the OECD statement of 2009, cited in Blenker, in which entrepreneurship plays an important role in the process of value creation, job creation and general economic advancement [5]. This suggests that universities are required to carry out entrepreneurship teaching and to produce graduates who have the entrepreneurship competencies, skills and motivation to become entrepreneurs. The statement above needs to be responded to as early as possible by education. Education, as a medium to educate the public, has a vital role in developing entrepreneurship through the concept of entrepreneurial education. According to Saroni, entrepreneurship education is an educational program that considers entrepreneurial aspects a vital part of escalating learners' competencies [6]. Another view is expressed by Suherman, who says that entrepreneurship education is a type of education that teaches people to create their own businesses [7]. Yet another view is stated by Lin and Xu, for whom entrepreneurship education is an educational program offered by universities and colleges to their students or other individuals, intended to improve entrepreneurship awareness, capabilities, and techniques [3]. Sunyoto and Wahyuningsih, in Titiani, say that entrepreneurship deals with mentality and attitude, an active spirit of striving to improve one's work in order to increase income [8]. According to Soemanto, there are three principles of entrepreneurial education, which include the following [9]:
- Entrepreneurship education can last a lifetime, anywhere, and at any time, so that humans are obliged to learn and educate themselves naturally.
- The environment of entrepreneurial education can be anywhere: at school, in the family, and in the community.
- The parties in charge of entrepreneurship education include the school, the family and the community.
Mulyani states that the success of entrepreneurship education programs can be identified from the achievement of criteria/indicators by students, which include: having high independence, having high creativity, being brave enough to take risks, being action oriented, having a high leadership character, having a hard-working character, understanding the concepts of entrepreneurship, and having entrepreneurial skills at school, especially entrepreneurial competence [10].
According to Sutrisno, entrepreneurial education is education that applies the principles and methodology enhancing the formation of life skills for students through an integrated curriculum developed in schools [11]. Schools that implement entrepreneurial education must carry out activities which enhance life skills. One of the schools that develops entrepreneurship education is SMP IT Luqman Al Hakim (LHI) Yogyakarta. Listyono defines life skills as the ability and courage to deal with life's problems, and then proactively and creatively find solutions to overcome those problems [12]. The definition of life skills is broader than vocational or working skills. According to the World Health Organization (WHO), life skills are the ability to behave in an adaptive and positive manner that enables a person to effectively meet their daily needs and challenges. Based on Law No. 20 of 2003, life skill education is education that provides personal, social, intellectual and vocational skills for work or for running an independent business [13]. SMP IT Luqman Al Hakim International Yogyakarta follows the concept of holistic education performed through an integral learning process. SMP IT Luqman Al Hakim has an excellent program, i.e. a life skill program based on specialization. The life skill program aims to provide life skills based on the students' interests. The life skill programs develop entrepreneurial values, so they are expected to train students in entrepreneurial skills. This is because entrepreneurship needs to be taught to the younger generation, as stated by Baidi and Suyatno: the development of entrepreneurship among the younger generation needs to be increased [14].

III. METHODOLOGY

This research is descriptive qualitative research. The research subjects were the principal, the vice principal of student affairs and the life skill teacher at SMP IT LHI Yogyakarta, as well as life skill students at SMP IT LHI Yogyakarta. The research subjects were determined using a purposive sampling technique. The objects of the study include life skill activities in the school related to the entrepreneurial education strategies conducted by the school through the life skill programs. The research instrument was the researchers themselves, often referred to as a human instrument. Data collection techniques consist of observations, interviews, and documentation. To establish data validity, the researchers employed triangulation techniques, a data retrieval process carried out through observation, interviews and documentation of the life skill programs. The data analysis technique employed was the interactive analysis proposed by Miles and Huberman [15].

IV. RESULTS AND DISCUSSION

From the observations, interviews, and documentation, we found that the competencies developed at SMP IT LHI are reflected in the school's vision and missions, which promote the development of skills needed in everyday life; these are developed in the form of life skill programs. The life skill program is intended to enable students to have skills required for living in the next 20 years, skills that can be applied in the students' daily lives and in the surrounding environment. The development of student skills is carried out by accustoming the students to dormitory life, which is also cultivated in school life. To further improve their skills, the school designs life skill clubs.
A. Determining Life Skill Programs
Determining the life skill program for students is carried out through various stages, which are described in Figure 1.

B. Implementation of Entrepreneurship Education Through the Life Skill Program
Ruskovaara and Pihkala state that teachers have a great responsibility to integrate entrepreneurship education into their teaching practices. They are also encouraged to search for the best and most beneficial model, since an easily grasped pedagogical guideline for performing entrepreneurship education does not yet exist [16]. Teachers need to integrate entrepreneurial education into their teaching as a best practice. The implementation of entrepreneurship education at SMP IT LHI through the life skill program is suited to the specialization groups. Students attend life skill training within the clubs that they have chosen. The life skill programs are organized in the form of projects that must be completed by students, i.e. they employ a project-based learning approach. Project-based learning is chosen as the life skill learning method because it can foster student creativity in completing projects. At the end of the semester, after attending the life skill programs, students must produce a product based on the project agreed at the beginning of the life skill program. Life skill products are used as merchandise in the student expo (LSE / LHI Students Expo), bazaars, and market days. Through the life skill programs, students are equipped with the skills to produce a product based on their interests, guided by life skill instructors and a dormitory companion (musrifah). The results of the life skill programs (products) are then exhibited and marketed in LSE activities.

C. Organizing Student Expo, Bazaar, and Market Day
Student expo activities, bazaars and market days are held as a medium for instilling the students' entrepreneurial sense, so that they acquire selling skills. They also aim at training students to dare to show their work, offer their products, determine the selling price of their products and market their products. Entrepreneurship education at SMP IT LHI Yogyakarta is implemented through a life skill program. According to Sutrisno, education that promotes entrepreneurial knowledge must perform life skill development activities and must conduct activities which enhance life skills [11]. The life skill program is organized based on the students' needs, employing specialization models, so that students participate in the life skill programs with pleasure, without feeling forced. The life skill programs implemented by the school in the form of life skill clubs based on the students' specialization are expected to give students an overview of what they are going to do in the future. This is in line with the view of Sultana, who states that the important life skills required by the young generation include the capability to arrange a training or educational direction in relation to the occupations they desire to pursue [17]. In implementing life skill programs, entrepreneurial values are instilled through indirect learning, i.e. how to produce goods and how to market goods through student expo activities, bazaars, or market days. The entrepreneurial values are intended to develop entrepreneurial attitudes, as stated by Asenjo and Barberá, in Zondo: entrepreneurship deals with a process that occurs over a period of time, and its initial phase is an entrepreneurial attitude [18]. Students must have skills which are relevant to the needs of the 21st century.
With regard to this, Harris says that the Partnership for 21st Century Skills provides a well-recognized framework of skills required for the modern workforce (Partnership for 21st Century Skills, 2011). The skills addressed in this framework consist of innovation and learning, innovation and creativity, problem solving and critical thinking skills, communication skills, and cooperation skills [19]. These skills will enhance students' ability to compete in the industrial era 4.0, where several skills are required, one of which is creativity. Students' creativity is improved through project-based learning, which is employed as the learning method in the life skill program. Moursund, in Gultekin, explains that project-based learning is a learning approach implemented based on the principle that learners solve real life problems individually or in groups [20]. Project-based learning can improve students' skills to solve complex problems, collaborate, develop communication skills, and organize projects skillfully. These skills are taught to students to face the industrial era 4.0 and are in accordance with the development of society and the modern world, so that entrepreneurial development runs well. This is in accordance with the view of Vakili et al.: "therefore, promoting entrepreneurship is inevitable in order to align activities in societies with the modern world, and education in this process is considered as one of the most important components of entrepreneurship development" [21].

V. CONCLUSION AND FUTURE SCOPE

Entrepreneurship education implemented through the life skill program, as a strategy to face the industrial era 4.0, is performed by determining the life skill programs suited to the specialization of students. The implementation of entrepreneurship education takes the form of life skill clubs (gardening, fishing, cooking, sewing). To practice entrepreneurship education, especially the concept of marketing, student expos, bazaars and market days are held. The implementation of entrepreneurship education through life skills, packaged with a project-based learning model, is directed at instilling skills in students as a provision to face the industrial era 4.0, where several skills are needed, one of which is creativity. Creativity skills will be developed through projects in entrepreneurship education through life skill programs. Further research can also be carried out on the effectiveness of the life skill program in entrepreneurship education.
2020-11-26T09:06:00.743Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "33b45cfc72356f61ee521221992171693338d948", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.2991/aebmr.k.201116.052", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "085502f9742eee20903a2f4e23ed689932d6bda1", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Business" ] }
220045640
pes2o/s2orc
v3-fos-license
How Much Do Young Italians Know About COVID-19 and What Are Their Attitudes Toward SARS-CoV-2? Results of a Cross-Sectional Study

Objectives: At the end of 2019, an outbreak of novel coronavirus pneumonia, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), was first identified in Wuhan, Hubei Province, China. It subsequently spread throughout China and elsewhere, becoming a global health emergency. In February 2020, the World Health Organization (WHO) designated the disease coronavirus disease 2019 (COVID-19). The objective of this study was to investigate the degree of knowledge of young Italians about COVID-19 and their current attitudes toward SARS-CoV-2, and to determine whether prejudices were emerging toward Chinese people.

Methods: An online survey was conducted on February 3, 4 and 5, 2020, with the collaboration of the Italian website "Skuola.net". Young people had the opportunity to participate by answering an ad hoc questionnaire, created to investigate knowledge and attitudes about the new coronavirus, through a link published on the homepage.

Results: A total of 5234 responses were received, of which 3262 were from females and 1972 from males. Most of the participants showed generally moderate knowledge about COVID-19. Male students, middle school students, and those who do not attend school should increase their awareness of the disease; less than half of the responders say that their attitude toward the Chinese population has worsened in the last period.

Conclusions: Global awareness of this emerging infection should be increased, due to its virulence, the significant risk of mortality, and the ability of the virus to spread very quickly within the community.

Coronaviruses (CoVs), important human and animal pathogens, are a family of RNA viruses that typically cause mild respiratory, enteric, hepatic, and neurologic disease in humans [1,2]. Six coronavirus species are known to cause human disease: among these, 4 species, including hCoV-229E, OC43, NL63, and HKU1, are prevalent and typically cause mild respiratory diseases [3], while 2 novel fatal coronaviruses have emerged periodically in different areas: severe acute respiratory syndrome coronavirus (SARS-CoV) and Middle East respiratory syndrome coronavirus (MERS-CoV). At the end of 2019, an outbreak of novel coronavirus pneumonia, caused by SARS-CoV-2, was first identified in Wuhan, Hubei Province, China [4,5]. It is supposed that the new virus originated from an animal-to-human spillover event linked to seafood and live-animal markets. It subsequently spread throughout China and elsewhere, becoming a global health emergency. In February 2020, the World Health Organization (WHO) designated the disease COVID-19, which stands for coronavirus disease 2019 [6]. Coronavirus infection in humans causes mild to moderate respiratory diseases, such as colds, that last for a short period of time. Symptoms may include runny nose, cough, inflamed throat, fever, headache, gastrointestinal disorders, and general malaise. Human coronaviruses can cause diseases of the lower respiratory tract, such as pneumonia or bronchitis. This occurs mainly in people suffering from pre-existing chronic diseases of the cardiovascular and/or respiratory system, in individuals with a weakened immune system, and in infants and the elderly. The transmission of the virus takes place from one infected person to another through saliva, coughing, and sneezing, through direct personal contact, and by touching the mouth, nose, or eyes with contaminated (unwashed) hands.
The contagion can also occur through fecal contamination. 7

There are many gaps in the knowledge of the epidemiology, prevalence, and clinical manifestations of the infection. As the World Health Organization (WHO) has pointed out, the transmission of adequate information to the public about the virus, how contagion happens, and the appropriate prevention measures is essential to ensure adequate disease control. 8 The new COVID-19 virus is still little known, and this leads people, on the one hand, to look for as much information as possible on the subject and, on the other hand, to rely on sources that often report incorrect or unfounded news. Moreover, the need to know and to inform is associated with a loss of confidence in institutions, science, and medicine, and the interests of pharmaceutical companies are feared. In the age of social networks, people are subject to an informational deluge, and new forms of media epidemics are developing that quickly transmit habits and behaviors, and even wrong and unfounded news. 9

The aim of this study was to investigate the degree of information about the new coronavirus among young Italians and to understand whether young people in Italy had developed prejudices against the Chinese because of the coronavirus.

METHODS

Study Design and Participants

This was a cross-sectional study. An online survey was conducted in February 2020 with the collaboration of "Skuola.net", an Italian website offering information and insights for secondary school, high school, and university students. The students, and also people who do not attend any school or university but simply viewed the website, had the opportunity, during 3 days, to participate in the survey by answering the questionnaire through a link published on the homepage (www.skuola.net). No age limits were applied to the participants. This study was carried out according to the STROBE statement. 10

Data Collection

The questionnaire was created to investigate the knowledge and attitudes of Italian students about the new coronavirus (SARS-CoV-2), on the basis of previously published studies. 8,[11][12][13] It was developed in Italian and composed of 16 multiple-choice questions: 7 concerning knowledge, 5 concerning attitudes, and 4 about sociodemographic data, such as gender, age, school, and geographic area. The participants were assured of the anonymity of their responses. The survey responses were collected in an Excel file for statistical analysis. A pilot study involving 93 students revealed a Spearman correlation coefficient for test-retest reliability of 0.908 (P < 0.001) and a Cronbach's alpha of 0.701.

Statistical Analysis

All analyses were performed using SPSS for Windows (Statistical Package for the Social Sciences, Version 25; SPSS, Inc., Chicago, IL). A descriptive analysis of the categorical variables was conducted using absolute frequencies and percentages. The associations among sex, school, geographic area, attitudes, and knowledge were evaluated. The differences between groups with respect to the categorical variables were analyzed using the Chi-square test. A score was created from the correct answers to the 7 knowledge questions, each evaluated with 1 point, giving a range of values from 0 (no knowledge) to 7 (maximum knowledge); we considered knowledge good above a cutoff of 5 correct answers out of 7 (roughly 70% of correct answers).
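As a side illustration (not part of the original study), the scoring rule just described is easy to make concrete. In the Python sketch below, the answer key and the respondent's answers are invented placeholders, since the paper does not publish them.

def knowledge_score(answers, answer_key):
    # One point per correct answer on the 7 knowledge questions (range 0-7).
    assert len(answers) == len(answer_key) == 7
    return sum(a == k for a, k in zip(answers, answer_key))

def has_good_knowledge(score, cutoff=5):
    # The study's cutoff: at least 5 of the 7 answers correct.
    return score >= cutoff

# Hypothetical answer key and one hypothetical respondent.
ANSWER_KEY = ["yes", "yes", "no", "2-14 days", "yes", "no", "no"]
respondent = ["yes", "yes", "no", "2-14 days", "no", "no", "yes"]

s = knowledge_score(respondent, ANSWER_KEY)
print(s, has_good_knowledge(s))   # -> 5 True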
A multivariate linear regression analysis was performed using a forward stepwise selection, considering the knowledge score as the dependent variable and socio-demographic factors as independent variables. The goodness of fit of the model was assessed with R². Moreover, 5 logistic regression models were computed, estimating odds ratios (OR) with 95% confidence intervals (95% CIs): the dependent variable in each model was one of the questions about attitudes, and the independent variables were age, sex, school, geographic area, and knowledge score. The significance threshold was set at P < 0.05 for all analyses.

Descriptive Analysis of the Sample

A total of 5374 people took part in the questionnaire and 5234 complete answers were received, of which 3262 (60.7%) were from females and 1972 (36.7%) from males. The frequencies of the socio-demographic characteristics (age, attended school, regional macroarea) of the sample are shown in Table 1.

Univariate Analysis: Knowledge Questions

Questions 1 to 7 concerned knowledge about the new coronavirus. The question "Can coronavirus infection pass from man to man through cough-borne droplets?" was answered correctly especially by females (82.2%), by students attending high school (83.4%), and by people from the North of Italy (82%). Similar results came from question 2 ("Are fever and cough among the signs and symptoms of coronavirus infection?"), from question 4 ("How long can infection develop after exposure to coronavirus?"), from question 5 ("Can patients with coronavirus infection be cured?"), and from question 7 ("Is there currently a vaccine against coronavirus?"), which most of the females, most of the university students, and most of the people of northern Italy answered correctly. Table 2 and the additional file Supplemental Table 2A show the results in detail.

Multivariate Analysis

Linear regression analysis (Table 3) was performed to evaluate the association between the knowledge score and socio-demographic variables (age, sex, geographic area, school). All variables were directly associated with the score except for the variables "Macroarea-Center" and "Middle school," which had no significant relationships (B = −0.008, P = 0.619; B = 0.024, P = 0.357). Logistic regression models are reported in Table 4. Regarding question 8, the variable "Center" had no significant association (OR, 0.879; CI: 0.756-1.022), while "Age" had an indirect association (OR, 0.666; CI: 0.637-0.697): the Center was not significantly different from southern Italy, while age had an inverse association with the intention of being vaccinated. The same happened in questions 9 and 11 with the geographic variable "Center," which was associated with negative attitudes (OR, 0.796; CI: 0.695-0.913; OR, 0.787; CI: 0.693-0.895). Age was also inversely associated with the attitude investigated in question 12 (OR, 0.943; CI: 0.903-0.986), so the older participants tended to have negative behaviors toward a Chinese-born schoolmate.

DISCUSSION

The aim of this study was to investigate the degree of knowledge of Italian students of middle school, high school, and university about the new coronavirus and their current attitudes toward the coronavirus and the Chinese population, especially whether those attitudes have changed in the last period. The survey was conducted in early February, before the outbreak of the coronavirus epidemic in Italy.
The study highlights how female students, university students, high school students, and those of Northern Italy, compared with their respective counterparts, have a greater knowledge about the new coronavirus infection, and it underlines how the attitudes and behaviors of male students, middle school students, and those of Central-Southern Italy toward the Chinese population have worsened in the last period. Therefore, greater attention should be paid to male students, middle school students, and those who do not attend school, to increase awareness of the disease and to implement the most suitable preventive measures, designed to stem the spread of the infection.

Other studies collected more socio-demographic data, such as the origin of the participants from urban or rural areas, and included a greater number of questions that are very useful to prevent and stem the spread of the infection: questions regarding knowledge of the main preventive measures, the treatment of the disease, and the epidemiological characteristics of the infection. In addition, they included a question about the main source of information, which appears to be the Ministry of Health, followed, in order of frequency, by the social networks. 14 Another study, which involved medical students, highlighted how these students had a good knowledge of the clinical manifestations of MERS but a poor knowledge of the mortality rate of the infection. 15

Our study is the first that evaluates the knowledge of Italian students regarding COVID-19 infection, and it highlights a good overall knowledge. However, only people who visited the "Skuola.net" site were able to participate in the survey, by filling in a questionnaire through a link published on the website homepage; compared with other studies, there was no age limit for completing the questionnaire and participating in the survey. Moreover, not all students access the "Skuola.net" site to acquire information regarding a given topic; in this study, therefore, even the most deserving students, who may prefer to study a given topic from a more reliable source, such as a textbook, could have been excluded. Another limitation of the study is that the questionnaire did not include a question about the main source of information regarding the infection caused by SARS-CoV-2: such a question would have revealed which sources are least used by students, so that they could be encouraged to use the more reliable ones to increase awareness of the infection. Finally, if the survey had been conducted in the period of maximum contagiousness in Italy, such as the end of March and April, the students' answers could have varied, presumably showing a greater knowledge of COVID-19, owing to the many details provided above all by the media, and probably different attitudes toward the Chinese population.

He and colleagues, in their online survey, observed social exclusion and discrimination during the outbreak of COVID-19 across the world and inside China, reporting that many people feared contact with people from Wuhan or Hubei Province and that the stigmatization of people from Hubei was associated with the social exclusion process. 16 In February 2020, before the first case of COVID-19 was confirmed in Poland, Rzymski and Nowicki conducted an anonymous online survey of Asian medical students in Poland to assess whether they experienced any form of prejudice related to the ongoing pandemic.
The authors demonstrated that the COVID-19 outbreak had triggered xenophobic reactions toward students of Asian origin even before the first SARS-CoV-2 case was confirmed in Poland. 17 Epidemics spread fear, which is the feeling behind the phenomena of racism and xenophobia. The COVID-19 pandemic has uncovered social and political fractures within communities, with racialized and discriminatory responses to fear disproportionately affecting marginalized groups. 18 Following the spread of COVID-19 from Wuhan, China, discrimination toward Chinese people has increased. This ranges from individual acts of microaggression or violence to collective forms, for example, Chinese people being barred from establishments. 19

CONCLUSIONS

Most young people are aware of the main symptoms of the disease and generally show a good level of knowledge about the new coronavirus, despite their primary source of information being social networks, natural docking places for fake news and uncontrolled alarm. These basic notions, however, do not make them immune to irrational behaviors, or even to true and proper psychosis. Efforts should be concentrated on increasing global awareness of this emerging infection, and also on preventing and countering any prejudicial or discriminatory behaviors, especially among the youngest.
The generalized Marchenko method in the inverse scattering problem for a first-order linear system

The Marchenko method is developed in the inverse scattering problem for a linear system of first-order differential equations containing potentials proportional to the spectral parameter. The corresponding Marchenko system of integral equations is derived in such a way that the method can be applied to some other linear systems for which a Marchenko method is not yet available. It is shown how the potentials and the scattering solutions to the linear system are constructed from the solution to the Marchenko system. The bound-state information for the linear system, with any number of bound states and any multiplicities, is described in terms of a pair of constant matrix triplets. When the potentials in the linear system are reflectionless, some explicit solution formulas are presented in closed form for the potentials and for the scattering solutions to the linear system. The theory is illustrated with some explicit examples.

Introduction

Our main goal in this paper is to develop the Marchenko method for the linear system

$$\frac{d}{dx}\begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} -i\zeta^2 & \zeta\, q(x) \\ \zeta\, r(x) & i\zeta^2 \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \end{bmatrix}, \qquad -\infty < x < +\infty, \tag{1.1}$$

where x is the spatial coordinate, ζ is the spectral parameter, the scalar quantities q(x) and r(x) are complex-valued potentials, and the column vector with components α and β is the wavefunction depending on x and ζ. We assume that the potentials q and r belong to the Schwartz class, i.e. the class of functions of x on the real axis R for which the derivatives of all orders exist and all those derivatives decay faster than any negative power of x as x → ±∞. Even though our results hold for potentials satisfying weaker restrictions, in order to provide insight into the development of the Marchenko method, for simplicity and clarity we assume that the potentials belong to the Schwartz class.

The linear system (1.1) is associated with the first-order system of nonlinear evolution equations (1.2), posed for x ∈ R and t > 0, which is known [1,3,20,25] as the derivative NLS (nonlinear Schrödinger) system or as the Kaup-Newell system. The derivative NLS equations have important physical applications in plasma physics, in the propagation of hydromagnetic waves traveling in a magnetic field, and in the transmission of ultrashort nonlinear pulses in optical fibers [1,20]. Hence, the study of (1.1) is physically relevant, and the development of the Marchenko method for (1.1) is significant. We remark that our concentration in this paper is not on integrable nonlinear systems such as (1.2) but rather on the linear system (1.1). We present the Marchenko method for (1.1) in such a way that the method can be applied to other linear systems and also to their discrete versions. We have already developed [9] the Marchenko method for the discrete analog of the linear system (1.1), and hence our emphasis in this paper is the development of the Marchenko method for the linear system (1.1) itself.

A linear system of differential equations such as (1.1), which contains the spectral parameter ζ and some potentials that are functions of the spatial variable x with sufficiently fast decay at infinity, yields a scattering scenario.
It may be possible to establish a one-to-one correspondence between the potentials in the linear system and an appropriate scattering data set, which usually consists of some scattering coefficients that are functions of the spectral parameter ζ and the bound-state information related to the values of the spectral parameter at which the linear system has squareintegrable solutions. The direct scattering problem consists of the determination of the scattering data set when the potentials are known. On the other hand, the inverse scattering problem consists of the determination of the potentials when the scattering data set is known. One of the most effective methods in the solution to an inverse scattering problem is the Marchenko method, originally developed by Vladimir Marchenko [4] for the half-line Schrödinger equation The Marchenko method was later extended by Faddeev [19] to solve the inverse scattering problem for the full-line Schrödinger equation In the Marchenko method, the potential is recovered from the solution to a linear integral equation, usually called the Marchenko equation, where the kernel and the nonhomogeneous term are constructed from the scattering data set with the help of a Fourier transformation. The Marchenko equation for (1.3) has the form K(x, y) + Ω(x + y) + ∞ x dz K(x, z) Ω(z + y) = 0, x < y, (1.4) if the scattering data set is related to the measurements at x = +∞, and it has the form K(x, y) +Ω(x + y) + x −∞ dzK(x, z)Ω(z + y) = 0, y < x, (1.5) if the scattering data set is related to the measurements at x = −∞. The integral kernels and the nonhomogeneous terms in (1.4) and (1.5) are constructed from the corresponding scattering data sets, and the potential V is obtained from the solution K(x, y) to (1.4) as where K(x, x) denotes the limit K(x, x + ), or it is constructed from the solutionK(x, y) to (1.5) as whereK(x, x) denotes the limitK(x, x − ). The Marchenko method is applicable to various other differential equations as well as systems of differential equations. For example, when applied to the AKNS system [1,2] and Ω(x + y) are now 2 × 2 matrices. The nonhomogeneous term and the kernel are constructed from the scattering data in a similar manner as done for (1.3), and the two potentials u and v in (1.7) are recovered from the solution to the relevant Marchenko equation by using a slight variation of (1.6). The Marchenko method is also applicable to various inverse scattering problems for linear difference equations such as the discrete Schrödinger equation on the half-line lattice given by − ψ n+1 + 2ψ n − ψ n−1 + V n ψ n = λ ψ n , n ≥ 1, (1.8) where λ is the spectral parameter and the quantities ψ n and V n denote the values of the wavefunction and the potential, respectively, at the lattice location n. In this case, the Marchenko equation corresponding to (1.8) has the discrete form given by K nj Ω j+m = 0, n < m. (1.9) The nonhomogeneous term and the kernel are still constructed from the corresponding scattering data set, and the potential value V n is recovered from the double-indexed solution K nm to (1.9) via [11] V n = K (n−1)n − K n(n+1) , n ≥ 1, with the understanding that K 01 = 0. There are still many other inverse scattering problems described by various differential or difference equations, or system of differential or difference equations, for which a Marchenko method is not yet available, and (1.1) is one of them. 
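Although the paper is analytical, it may help the reader to see a Marchenko equation of the type (1.4) in action numerically. The Python sketch below (our illustration, not taken from the paper) discretizes (1.4) by collocation on a truncated grid for the classical one-soliton reflectionless kernel Ω(s) = 2e^{−s} of the full-line Schrödinger equation (1.3), and recovers the potential through the standard formula V(x) = −2 dK(x, x)/dx; the truncation length L and the grid size n are arbitrary numerical choices, and the exact answer in this case is V(x) = −2 sech²(x).

import numpy as np

def omega(s):
    # One-soliton reflectionless Marchenko kernel (assumed example, kappa = 1).
    return 2.0 * np.exp(-s)

def K_diag(x, L=15.0, n=400):
    # Solve K(x,y) + Omega(x+y) + int_x^inf K(x,z) Omega(z+y) dz = 0 for y >= x
    # by truncating the integral at x + L and collocating with trapezoid weights.
    z, h = np.linspace(x, x + L, n, retstep=True)
    w = np.full(n, h)
    w[0] = w[-1] = h / 2
    M = np.eye(n) + omega(np.add.outer(z, z)) * w   # M[j,k] = delta_jk + w_k Om(z_j+z_k)
    u = np.linalg.solve(M, -omega(x + z))           # u[j] approximates K(x, z_j)
    return u[0]                                     # K(x, x)

xs = np.linspace(-5.0, 5.0, 201)
Kxx = np.array([K_diag(x) for x in xs])
V = -2.0 * np.gradient(Kxx, xs)                     # V(x) = -2 dK(x,x)/dx
print(np.max(np.abs(V + 2.0 / np.cosh(xs) ** 2)))   # small discretization error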
In this paper, we develop the Marchenko method for (1.1) and present the corresponding matrix-valued Marchenko integral equation in (4.40). We note that (4.40) resembles (1.4), but the integral kernel of (4.40) slightly differs from that of (1.4). In (4.54) and (4.55), we present the recovery of q(x) and r(x) from the solution to (4.40). The main result presented in this paper, i.e. the derivation of the Marchenko system for (1.1) and the recovery of the potentials q and r from the solution to that Marchenko system, is significant because not only it extends the powerful Marchenko method to (1.1) but it also provides a procedure that can be applied to various other inverse problems. In our extension of the Marchenko method to solve the inverse scattering problem for (1.1), we use the following guidelines in order to refer to the extension still as the Marchenko method. First, the derived Marchenko system should resemble (1.4), where the nonhomogeneous term and the kernel should both be obtained from the scattering data for (1.1) with the help of a Fourier transform, but by allowing some minor modifications. Next, the potentials in (1.1) should be readily obtained from the solution to the derived Marchenko system, but by allowing some appropriate modifications. The same guidelines can also be used to establish a Marchenko method for other differential and difference equations, or systems of differential and difference equations. Let us remark that, in the literature related to the inverse scattering transform, some authors refer to the Marchenko equation as the Gel'fand-Levitan-Marchenko equation, but this is a misnomer [23]. The Gel'fand-Levitan integral equation [10,13,17,19,21,22,24] is different from the Marchenko integral equation. The standard Gel'fand-Levitan equation has the form A(x, y) + G(x, y) + x 0 dz A(x, z) G(z, y) = 0, 0 < y < x, (1.10) where G(x, y) appearing in the kernel and the nonhomogeneous term is constructed from the spectral function of the corresponding linear system. We note that that the integral limits in the Marchenko equation (1.4) are x and +∞, whereas the integral limits in the Gel'fand-Levitan equation (1.10) are 0 and x. Our paper is organized as follows. In Section 2 we provide the preliminaries by introducing the Jost solutions and the scattering coefficients for the linear system (1.1), and we present their relevant properties needed in the development of our Marchenko method. In Section 3 we introduce the relevant information on the bound states for (1.1), and we show that the bound-state information can be presented in a simple and elegant way for any number of bound states and any multiplicities, and this is done by using a pair of constant matrix triplets. In Section 4 we present the matrixvalued Marchenko system for (1.1), where the input to the Marchenko system consists of a pair of reflection coefficients and the bound-state information. We also show that the Marchenko system can be written in an equivalent but uncoupled format, and we describe how the potentials and the Jost solutions are obtained from the solution to the Marchenko system. In Section 5, when the reflection coefficients are zero, with the most general bound-state information expressed in terms of a pair of matrix triplets, we obtain the closed-form solution to the Marchenko system. This allows us to present some explicit solution formulas for the potentials and the Jost solutions for (1.1) expressed in closed form in terms of our matrix triplets. 
In Section 5, we also prove a relevant restriction on the bound states for (1.1) when the potentials q and r are reflectionless; namely, we prove that the bound-state poles of the corresponding transmission coefficients must be equally distributed in the four quadrants of the complex ζ-plane. We also prove that, for the AKNS system (1.7), in the reflectionless case the bound-state poles of the corresponding transmission coefficients must be equally distributed in the upper and lower halves of the complex λ-plane. Finally, in Section 6, we illustrate the theory developed in the earlier sections, and in particular we provide some examples of potentials and Jost solutions for (1.1) in terms of elementary functions when the sizes of our matrix triplets are small. Preliminaries In this section, in order to prepare for the derivation of the Marchenko system for (1.1), we introduce the Jost solutions and the scattering coefficients for (1.1) and we present their relevant properties. We use the notation of [8] and rely some of the results presented there. We let ψ(ζ, x),ψ(ζ, x), φ(ζ, x),φ(ζ, x) denote the four Jost solutions to (1.1) satisfying the respective spacial asymptotics x → +∞, We remark that the overbar does not denote complex conjugation. There are six scattering coefficients associated with (1.1), i.e. the transmission coefficients T (ζ) andT (ζ), the right reflection coefficients R(ζ) andR(ζ), and the left reflection coefficients L(ζ) andL(ζ). Because the trace of the coefficient matrix in (1.1) is zero, the transmission coefficients from the left and from the right are equal to each other, and hence we do not need to use separate notations for the left and right transmission coefficients. The six scattering coefficients can be defined in terms of the spacial asymptotics of the Jost solutions given by In order to present the relevant properties of the Jost solutions, we use the subscripts 1 and 2 to denote their first and second components, respectively, i.e. we let :=φ(ζ, x). (2.10) We relate the spectral parameter ζ appearing in (1.1) to the parameter λ in (1.7) as with the square root denoting the principal branch of the complex-valued square-root function. We use C + and C − to denote the upper-half and lower-half, respectively, of the complex plane C, and we let C + := C + ∪ R and C − := C − ∪ R. We recall that the Wronskian of any two column-vector solutions to (1.1) is defined as the determinant of the 2 × 2 matrix formed from those columns. For example, the Wronskian of ψ(ζ, x) and φ(ζ, x) is given by Due to the fact that the coefficient matrix in (1.1) has the zero trace, the value of the Wronskian of any two solutions to (1.1) is independent of x, and hence the six scattering coefficients appearing in (2.5)-(2.8) can be expressed in terms of Wronskians of the Jost solutions [8] as (2.14) It is possible to relate (1.1) to the AKNS system (1.7) by using (2.11) and by choosing the potentials u and v in terms of the potentials q and r as where the prime denotes the derivative and the quantity E(x) is defined as where we have defined the complex constant µ as Besides (1.7), it is also possible to relate (1.1) to another AKNS system given by by choosing the potentials p and s in terms of q and r as Let us remark that it is possible to analyze the direct and inverse scattering problems for (1.1) without relating (1.1) to the AKNS systems (1.7) or (2.21). 
As done for (1.3) [17,19,21,22], this can be accomplished for (1.1) by first determining the integral relations satisfied by the four Jost solutions to (1.1), where those integral relations are obtained by combining (1.1) with the asymptotic conditions (2.1)-(2.4). Using those integral relations, one can express the scattering coefficients for (1.1) in terms of certain integrals involving the potentials q and r. The relevant properties of the scattering coefficients can be determined from those integral relations. In a similar manner, the small and large ζ-asymptotics of the scattering coefficients, the bound states, and the inverse scattering problem for (1.1) can all be analyzed without relating (1.1) to (1.7) or (2.21). On the other hand, the analysis of the direct and inverse scattering problems for (1.1) by relating (1.1) to (1.7) or (2.21) brings some physical insight and intuition, because the analysis of those two problems for an AKNS system is better understood. Note that (1.1) differs from the AKNS systems (1.7) and (2.21) because the off-diagonal entries of the coefficient matrix in (1.1) contain the potentials multiplied by the spectral parameter ζ. This greatly complicates the analysis of the direct and inverse scattering problems for (1.1). On the other hand, the three linear systems (1.1), (1.7), and (2.21) can all be viewed as different perturbations of the first-order unperturbed system, and this helps us to understand the connections among (1.1), (1.7), and (2.21).

Theorem 2.1. Assume that the potentials q and r appearing in the first-order system (1.1) belong to the Schwartz class. Let E denote the quantity E(x) defined in (2.18), let µ be the complex constant defined in (2.20), and assume that the spectral parameters ζ and λ are related to each other as in (2.11). Then, we have the following: (a) The linear system (1.1) can be transformed into the AKNS system (1.7), where the potential pair (u, v) is related to (q, r) as in (2.16) and (2.17). It follows that the potentials u and v also belong to the Schwartz class. The four Jost solutions to (1.1) appearing in (2.1)-(2.4), respectively, and the four Jost solutions ψ^{(u,v)}, ψ̄^{(u,v)}, φ^{(u,v)}, φ̄^{(u,v)} to (1.7), satisfying the corresponding asymptotics in (2.1)-(2.4), respectively, are related to each other as in (2.24)-(2.27).

Next, we present the relevant analyticity and symmetry properties of the Jost solutions to (1.1), which are needed in establishing the Marchenko method for (1.1).

Theorem 2.2. Let the potentials q and r in (1.1) belong to the Schwartz class. Assume that the spectral parameters ζ and λ are related to each other as in (2.11). Then, we have the following: (a) For each fixed x ∈ R, the Jost solutions ψ(ζ, x) and φ(ζ, x) to (1.1) are analytic in the first and third quadrants of the complex ζ-plane and are continuous in the closures of those regions. Similarly, the Jost solutions ψ̄(ζ, x) and φ̄(ζ, x) are analytic in the second and fourth quadrants of the complex ζ-plane and are continuous in the closures of those regions.

Proof. The proof of (a) can be obtained by converting (1.1) and each of the asymptotics in (2.1)-(2.4) into an integral equation, then solving the resulting four integral equations via iteration, and expressing the Jost solutions as uniformly convergent infinite series of terms that are analytic in the appropriate domains of the complex ζ-plane and continuous in the closures of those domains.
Alternatively, the proof of (a) can be obtained with the help of Theorem 2.1 and by using the corresponding analyticity and continuity properties [2,18] in λ of the Jost solutions to the AKNS systems (1.7) and (2.21). The proof of (b) is obtained by using the results in (a) and either the relations (2.24)-(2.27) or (2.28)-(2.31). In the following theorem, we present the small spectral asymptotics of the Jost solutions to (1.1), which is crucial for the establishment of the Marchenko method for (1.1) Proof. The domains of continuity for the Jost solutions are specified in Theorem 2.2. The proof for (2.32) and (2.35) can be obtained by using (2.24) and (2.27), respectively, and the known small λ-asymptotics [8,18] of the Jost solutions ψ (u,v) (λ, x) andφ (u,v) (λ, x) to (1.7), and by taking into account the relationship between ζ and λ specified in (2.11). Similarly, the proof for (2.33) and (2.34) can be obtained by using (2.29) and (2.30) and the known small λ-asymptotics [8,18] of the Jost solutionsψ (p,s) (λ, x) and φ (p,s) (λ, x). In relation to Theorem 2.3, let us remark that the small λ-asymptotics of the Jost solutions to (1.7) and (2.21) expressed in terms of the quantities relevant to (1.1) can be found in Proposition 6.1 of [8]. In order to prepare for the derivation of the Marchenko system for (1.1), we also need the large ζ-asymptotics of the Jost solutions to (1.1). For convenience, in the following theorem those asymptotics are expressed in terms of λ, which is related to ζ as in (2.11). Theorem 2.4. Let the potentials q and r in (1.1) belong to the Schwartz class, and let the parameter λ be related to the spectral parameter ζ as in (2.11). Then, for each fixed x ∈ R, as λ → ∞ in C + , the Jost solutions ψ(ζ, x) and φ(ζ, x) to (1.1) appearing in (2.1) and (2.3), respectively, satisfy where E(x) and µ are the quantities appearing in (2.18) and (2.20), respectively, and the complexvalued scalar quantity σ(x) is defined as Similarly, for each fixed x ∈ R, as λ → ∞ in C − , the Jost solutionsψ(ζ, x) andφ(ζ, x) to (1.1) appearing in (2.2) and (2.4), respectively, satisfȳ Proof. The proof is obtained by using iteration on the integral representations of the Jost solutions aforementioned in the proof of Theorem 2.1 and by taking into consideration of the fact that ζ is related to λ as in (2.11). Alternatively, the proof can be obtained by using (2.24)-(2.27) and the known large λ-asymptotics [2,8,18] of the Jost solutions to (1.7), and by taking into account the fact that the quantity σ(x) defined in (2.37) corresponds to the product u(x) v(x) when u(x) and v(x) are chosen as (2.16) and (2.17), respectively. Equivalently, the proof can be obtained by using (2.28)-(2.31) and the known large λ-asymptotics [2,8,18] of the Jost solutions to (2.21), and by taking into consideration the fact that the quantity σ(x) defined in (2.37) corresponds to the product p(x) s(x) when p(x) and s(x) are chosen as (2.22) and (2.23), respectively. In the next theorem, in preparation for the establishment of the Marchenko method for (1.1), we present the relevant properties of the scattering coefficients for (1.1). Theorem 2.5. Assume that the potentials q and r in (1.1) belong to the Schwartz class. Let λ be related to the spectral parameter ζ as in (2.11), and let µ be the complex constant defined in (2.20). 
Then, the scattering coefficients T (ζ),T (ζ), R(ζ),R(ζ), L(ζ), andL(ζ) appearing in (2.5)-(2.8) have the following properties: (a) The transmission coefficient T (ζ) is continuous in ζ ∈ R and has a meromorphic extension from ζ ∈ R to the first and third quadrants in the complex ζ-plane. Furthermore, T (ζ) is an even function of ζ, and hence it is a function of λ in C + . Moreover, T (ζ) is meromorphic in λ ∈ C + with a finite number of poles there, where the poles are not necessarily simple but have finite multiplicities. The large ζ-asymptotics of T (ζ) expressed in λ is given by The transmission coefficientT (ζ) is continuous in ζ ∈ R and has a meromorphic extension from ζ ∈ R to the second and fourth quadrants in the complex ζ-plane. Furthermore,T (ζ) is an even function of ζ, and hence it is a function of λ in C − . Moreover,T (ζ) is meromorphic in λ ∈ C − with a finite number of poles, where the poles are not necessarily simple but have finite multiplicities. The large ζ-asymptotics ofT (ζ) expressed in λ is given bȳ Proof. Since the scattering coefficients can be expressed in terms of the Wronskians of the Jost solutions as in (2.13)-(2.15), their stated properties can be established by using the properties of the Jost solutions provided in Theorem 2.1. Alternatively, the proof can be obtained by using the relationships between the six scattering coefficients for (1.1) and the corresponding scattering coefficients for the two associated AKNS systems given in (1.7) and (2.21), respectively, when the potential pairs (u, v) and (p, s) are chosen as in (2.16), (2.17), (2.22), and (2.23). In fact, we have [8,18] where the superscripts (u, v) and (p, s) are used to refer to the scattering coefficients for (1.7) and (2.21), respectively. Using (2.43)-(2.48) and the already known [2,8,18] properties of the scattering coefficients of the associated AKNS systems, the proof is established. Let us now consider the question whether the scattering coefficients for (1.1) can be determined from the knowledge of the scattering coefficients for (1.7) or (2.21), and vice versa. The presence of the factor e iµ/2 in (2.43)-(2.46) gives the impression that this is possible only if we know the value of e iµ/2 independently. The next theorem shows that the value of e iµ/2 is indeed determined by either one of the transmission coefficients for either (1.7) or (2.21), and hence the scattering coefficients for (1.7) and (2.21) can be explicitly expressed in terms of the scattering coefficients for (1.1). Similarly, the value of e iµ/2 is indeed determined by one of the transmission coefficients for (1.1), and hence the scattering coefficients for (1.7) and (2.21) can be determined from the knowledge of the scattering coefficients for (1.1). Theorem 2.6. Assume that the potentials q and r in (1.1) belong to the Schwartz class. Furthermore, suppose that the potential pairs (u, v) and (p, s) appearing in (1.7) and (2.21), respectively, are related to the potential pair (q, r) as in (2.16), (2.17), (2.22), and (2.23). Let λ be related to the spectral parameter ζ as in (2.11), and let µ be the complex constant defined in (2.20). Then, we have the following: (a) The scalar constant e iµ/2 is uniquely determined by one of the transmission coefficients for either of (1.7) or (2.21). In fact, we have 50) where we recall that the superscripts (u, v) and (p, s) are used to refer to the scattering coefficients for (1.7) and (2.21), respectively. 
(b) The scattering coefficients for (1.1) are uniquely determined by the scattering coefficients for either of the linear systems (1.7) or (2.21). In fact, we have Thus, the proof of (a) is complete. By using the value of e iµ/2 from (2.49) or (2.50) in (2.43)-(2.48), we obtain (2.51)-(2.56), respectively. Thus, the proof of (b) is also complete. Finally, from (2.39) or (2.40) we see that the value of e iµ/2 is uniquely determined by one of the transmission coefficients for (1.1), and hence (2.43)-(2.48) can be used to express the scattering coefficients for (1.7) and (2.21) from the knowledge of the scattering coefficients for (1.1), which completes the proof of (c). The bound states The bound states for (1.1) correspond to square-integrable column vector solutions to (1.1). The existence and nature of the bound states are completely determined by the potentials q and r appearing in the coefficient matrix in (1.1). When the potentials q and r belong to the Schwartz class, the following are known [8] about the bound states for (1.1): (a) The bound states cannot occur at any real ζ value in (1.1). In particular, there is no bound state at ζ = 0. The bound states can only occur at a complex value of ζ at which the transmission coefficient T (ζ) has a pole in the first or third quadrants in the complex ζ-plane or at which the transmission coefficientT (ζ) has a pole in the second or the fourth quadrants. In fact, as indicated in Theorem 2.5 the parameter ζ appears as ζ 2 in the transmission coefficients T (ζ) andT (ζ), and hence the ζ-values corresponding to the bound states must be symmetrically located with respect to the origin in the complex ζ-plane. (c) The number of poles of T (ζ) in the upper-half complex λ-plane is finite and we use λ j to denote those poles and we use N to denote their number without taking into account their multiplicities. Similarly, the number of poles ofT (ζ) in the lower-half complex λ-plane is finite and we useλ j to denote those poles and we useN to denote their number without taking into account their multiplicities. The multiplicity of each of those poles is finite, and we use m j to denote the multiplicity of the pole at λ j and usem j to denote the multiplicity of the pole atλ j . We remark that the bound-state poles are not necessarily simple. In the literature [20,25], it is often unnecessarily assumed that the bound states are simple because the multiple poles may be difficult to deal with. However, we have an elegant method of handling bound states of any number and any multiplicities, and hence there is no reason to artificially assume the simplicity of bound states. (d) As indicated in the previous steps, the bound-state information for (1.1) contains the sets {λ j , m j } N j=1 and {λ j ,m j }N j=1 . Furthermore, for each bound state and multiplicity we must specify a norming constant. As the bound-state norming constants, we use the double-indexed quantities c jk for 1 ≤ j ≤ N and 0 ≤ k ≤ (m j − 1) and the double-indexed quantitiesc jk for 1 ≤ j ≤N and 0 ≤ k ≤ (m j − 1). The construction of the bound-state norming constants c jk from the transmission coefficient T (ζ) and the Jost solutions φ(ζ, x) and ψ(ζ, x) and the construction of the bound-state norming constantsc jk from the transmission coefficientT (ζ) and the Jost solutionsφ(ζ, x) andψ(ζ, x) are analogous to the constructions presented for the discrete version of (1.1), and we refer the reader to [9] for the details. 
Such a construction involves the determination of the double-indexed "residues" t jk with 1 ≤ j ≤ N and 1 ≤ k ≤ m j and the the double-indexed "residues"t jk with 1 ≤ j ≤N and 1 ≤ k ≤m j , respectively, by using the expansions of the transmission coefficients at the bound-state poles, which are given by Next, we construct the the double-indexed dependency constants γ jk with 1 ≤ j ≤ N and 0 ≤ k ≤ (m j − 1). The dependency constants γ jk appear in the coefficients when we express where k l denotes the binomial coefficient. Note that (3.3) is obtained as follows. From the first equality of (2.13), we have where we recall that the Wronskian is defined as in (2.12). Using (3.1) and the fact that ζ appears as ζ 2 in T (ζ), from (3.4) it follows that the λ-derivatives of order k for 0 ≤ k ≤ (m j −1) vanish when λ = λ j or equivalently when ζ = ζ j . We then recursively obtain (3.3). For the details of the procedure, we refer the reader to [9]. Similarly, the double-indexed dependency constantsγ jk with 1 ≤ j ≤N and 0 ≤ k ≤ (m j − 1) appear in the coefficients when we express at λ =λ j the value of each d kφ (ζ, We remark that (3.5) is derived with the help of the Wronskian relation which is obtained from the second equality of (2.13). Using (3.2) and the fact that ζ appears as ζ 2 inT (ζ), from (3.6) it follows that the λ-derivatives of order k for 0 ≤ k ≤ (m j − 1) vanish when λ =λ j or equivalently when ζ =ζ j . We then recursively obtain (3.5). The norming constants c jk are formed in an explicit manner by using the set of residues {t jk } m j k=1 and the set of dependency constants {γ jk } m j −1 k=0 , and this procedure is explained in the proof of Theorem 4.2 and it is similar to the procedure described in Theorem 15 of [9]. In a similar manner, the norming constantsc jk are formed by using the set of residues {t jk }m j k=1 and the set of dependency constants {γ jk }m j −1 k=0 . Thus, we obtain the bound-state information for (1.1) consisting of the sets In the first two examples in Section 6 we illustrate the relationships connecting the norming constants to the residues and the dependency constants. (e) Let us remark that it is extremely cumbersome to use the bound-state information in the format specified in (3.7) unless that information is organized in an efficient format. In fact, this is the primary reason why it is artificially assumed in the literature that the bound states are simple. The bound-state information given in (3.7) can be organized in an efficient and elegant manner by introducing a pair of matrix triplets (A, B, C) and (Ā,B,C) in such a way that the specification of the matrix triplet pair is equivalent to the specification of the bound-state information in (3.7). Furthermore, in the Marchenko method, the bound-state information is easily and in an elegant manner incorporated in the nonhomogeneous term and in the integral kernel in the corresponding Marchenko system when it is incorporated in the form of matrix triplets. The use of the matrix triplets enables us to deal with any number of bound states and any number of multiplicities in a simple and elegant manner, as if we only have one bound state of multiplicity one. Let us remark that the use of the matrix triplets is not confined to any particular linear system, but it can be used on any linear system for which a Marchenko method is available. In fact, this is one of the reasons why we are interested in establishing the Marchenko method for the linear system given in (1.1). 
(f) Without loss of any generality, the matrix triplets (A, B, C) and (Ā,B,C) can be chosen as the minimal special triplets described later in this section. We refer the reader to [6,14] for the description of the minimality. The minimality amounts to choosing each of the square matrices A andĀ with the smallest sizes by removing any zero columns or zero rows. By the special triplets, we mean choosing the matrices A andĀ in their Jordan canonical forms and choosing the column vectors B andB in the special forms consisting of zeros and ones, as described in (3.9), (3.11), (3.14), and (3.17). The choice of the special forms for the matrix triplets is unique up to the permutations of the corresponding Jordan blocks. We refer the reader to Theorem 3.1 of [6] for the details and for the proof why there is no loss of generality in using the matrix triplets in their minimal special forms. Next, we show how to convert the bound-state information given in (3.7) into the matrix triplet pair (A, B, C) and (Ā,B,C). Since there is no loss of generality in choosing the matrix triplets in their special forms, we only deal with those special forms. For simplicity and clarity, we outline the main steps of the procedure by omitting the details. We refer the reader to [9] where the details of the procedure are presented for the discrete version of (1.1). The steps presented in [9] are general enough to apply to (1.1) and other linear systems. Let us also remark that for linear systems for which the potentials appear in diagonal blocks in the corresponding coefficient matrix, only one matrix triplet (A, B, C) is needed. On the other hand, for linear systems for which the potentials appear in off-diagonal blocks in the corresponding coefficient matrix, a pair of matrix triplets (A, B, C) and (Ā,B,C) is used. The potentials q and r appear in the off-diagonal entries in the coefficient matrix in (1.1), and hence we convert the bound-state information into the format consisting of the triplets (A, B, C) and (Ā,B,C). For the use of matrix triplets for some other linear systems, we refer the reader to [5,6,7,12,15,16]. The conversion of the bound-state information from (3.7) to the matrix triplet pair (A, B, C) and (Ā,B,C) involves the following steps: (a) For each bound state at λ = λ j with 1 ≤ j ≤ N, we form the matrix subtriplet (A j , B j , C j ) as where A j is the m j × m j square matrix in the Jordan canonical form with λ j appearing in the diagonal entries, B j is the column vector with m j components that are all zero except for the last entry which is 1, and C j is the row vector with m j components containing all the norming constants in the indicated order. Note that if the bound state at λ = λ j is simple, then we have Similarly, for each bound state at λ =λ j with 1 ≤ j ≤N we form the matrix subtriplet whereĀ j is them j ×m j square matrix in the Jordan canonical form withλ j appearing in the diagonal entries,B j is the column vector withm j components that are all zero except for the last entry which is 1, andC j is the row vector withm j components containing all the norming constants in the indicated order. where N is defined as and it represents the number of bound-state poles in the upper-half complex λ-plane by including the multiplicities. We also form the column vector B with N components and the row vector C with N components as Similarly, we defineN asN which represents the number of bound-state poles in the lower-half complex λ-plane by including the multiplicities. 
We then useĀ j with 1 ≤ j ≤N in order to form theN ×N block-diagonal matrixĀ asĀ We also form the column vectorB withN components and the row vectorC withN components asB The Marchenko method In this section we develop the Marchenko method for (1.1) by deriving the corresponding Marchenko system of linear integral equations and also by showing how the Jost solutions and the potentials are recovered from the solution to that Marchenko system. We present the derivation of the Marchenko system in such a way that the method can be applied to other linear systems and to their discrete analogs. For the simplicity of the presentation, we first provide the derivation in the absence of bound states, and then we indicate the main modification needed to include the bound-state information in the Marchenko system. In the following we outline the basic steps in the development of our Marchenko method for (1.1) in order to show the similarities and differences with the development of the standard Marchenko method: (a) We start with the Riemann-Hilbert problem for (1.1) by expressing the two Jost solutions φ(ζ, x) andφ(ζ, x) as a linear combination of the Jost solutions ψ(ζ, x) andψ(ζ, x). This eventually yields the Marchenko system for (1.1) with x < y < +∞ as an analog of (1.4). Note that this is also the step used in the derivation of the standard Marchenko method. In order to derive the Marchenko system for (1.1) with −∞ < y < x as an analog of (1.5), we need to express the Jost solutions ψ(ζ, x) andψ(ζ, x) as a linear combination of the Jost solutions φ(ζ, x) andφ(ζ, x). However, we will only present the derivation of the former Marchenko system and hence only deal with the Riemann-Hilbert problem for the former case. We remark that the coefficients in the Riemann-Hilbert problem associated with the Marchenko system with x < y < +∞ are directly related to the scattering coefficients T (ζ), T (ζ), R(ζ), andR(ζ), and the coefficients in the Riemann-Hilbert problem associated with the Marchenko system with −∞ < y < x are directly related to the scattering coefficients T (ζ),T (ζ), L(ζ), andL(ζ). (b) Next, we combine the two column-vector equations arising in the formulation of the Riemann-Hilbert problem into a 2 × 2 matrix-valued system. This step is also used in the development of the standard Marchenko method. (c) We slightly modify our 2 × 2 matrix-valued system obtained in the previous step. This modification is not needed in the development of the standard Marchenko method. The modification involving the diagonal entries is carried out in order to take into account the large ζ-asymptotics of the Jost solutions. The modification involving the off-diagonal entries is carried out in order to formulate the 2 × 2 matrix-valued Riemann-Hilbert problem in the spectral parameter λ rather than in ζ, where λ and ζ are related to each other as in (2.11). (d) With the modification described in the previous step, we are able to take the Fourier transform from the λ-space to the y-space. This yields the 2 × 2 coupled Marchenko system. This step is also used in the development of the standard Marchenko method. (e) We uncouple the 2 × 2 matrix-valued Marchenko system and obtain the associated uncoupled scalar Marchenko integral equations. This is also the step used in the development of the standard Marchenko method. (f) With the help of the inverse Fourier transform, we show how the Jost solutions to (1.1) are constructed from the solution to the Marchenko system. 
This is also the step used in the development of the standard Marchenko method. (g) Finally, we describe how the potentials q and r appearing in (1.1) are recovered from the solution to our Marchenko system. This step is slightly more involved than the step used in the development of the standard Marchenko method. However, the formulas for the potentials are explicit in terms of the solution to our Marchenko system. In the next theorem we introduce the 2 × 2 matrix-valued Marchenko integral system for (1.1) in the absence of bound states. Tφ =Rψ + ψ, Using (2.9) and (2.10), we write (4.9) as (4.10) We first postmultiply (4.10) with the diagonal matrix diag{e iµ/2 E −1 , e −iµ/2 E} and then divide by ζ the off-diagonal entries in the resulting matrix-valued system. From the resulting 2 × 2 matrixvalued equation, we subtract the diagonal matrix diag{e −iλx , e iλx } from both sides, and we obtain where we have defined with the entries K 1 (x, y), K 2 (x, y), K 1 (x, y), and K 2 (x, y) are as in (4. with the matrix entries defined as are each equal to zero when x > y. Hence, using the inverse Fourier transform, from (4.3)-(4.6) we get Let us now show that each of the four entries of RHS defined in (4.15) is a convolution. By using the inverse Fourier transform, from (4.2) we have (4.28) Also, by taking the derivatives, from (4.2) we obtain (4.29) Using the inverse Fourier transform, from (4.29) we have Using (4.24) and the first equality of (4.30) on the right-hand side of (4.31), we get the convolution Proceeding in a similar manner, we write (4.23) as Using (4.27) and the second equality of (4.30) on the right-hand side of (4.33), we obtain the convolution RHS 22 = ∞ x dzK 2 (x, z)R (z + y). Hence, using (4.32), (4.35), (4.36), and (4.34) in (4.12), we see that RHS is equal to the sum of the second and third terms on the right-hand side of (4.1). Thus, in order to complete the derivation of (4.1), it is sufficient to show that LHS is the 2 × 2 zero matrix when x < y in the absence of bound states. This is proved as follows. When x < y, with the help of Theorems 2.2-2.5, we observe that the integrands in (4.16) and (4.18) are analytic in λ ∈ C + , continuous in λ ∈ C + , and uniformly O(1/λ) as λ → ∞ in C + . Hence, when x < y, using Jordan's lemma and the residue theorem we conclude that LHS 11 and LHS 21 are both zero. Similarly, when x < y, with the help of Theorems 2.2-2.5, we observe that the integrands in (4.17) and (4.19) are analytic in λ ∈ C − , continuous in λ ∈ C − , and uniformly O(1/λ) as λ → ∞ in C − . Hence, when x < y, using Jordan's lemma and the residue theorem we conclude that LHS 12 and LHS 22 are both zero. Thus, the proof is complete. The Marchenko integral system we have established in (4.1) is valid provided (1.1) has no bound states. When the bound states are present, the only modification needed in the proof of Theorem 4.1 is that the quantity LHS appearing in (4.12) and (4.14) is no longer equal to the zero matrix due to the fact that we must take into account the bound-state poles of the transmission coefficients in evaluating the integrals (4.16)- (4.19). It turns out that, using the matrix triplet pair and hence in (4.1) we also replaceR (y) andR (y) with Ω (y) andΩ (y), respectively. In fact, in the Marchenko equations for any linear system, the substitution R(y) →R(y) + C e iAy B,R(y) →R(y) +C e −iĀyB , (4.39) is all that is needed in order to take into consideration the effect of any number of bound states with any multiplicities. 
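As an illustration only (not part of the paper's analysis), the bookkeeping behind (4.39) is easy to mechanize: given hypothetical bound-state poles λ_j with multiplicities m_j and norming constants, one can assemble the special-form triplet (A, B, C) of Section 3 and evaluate the bound-state contribution C e^{iAy} B to the Marchenko kernel. A minimal Python sketch, with invented data, follows.

import numpy as np
from scipy.linalg import block_diag, expm

def triplet(bound_states):
    # Assemble (A, B, C) in the special form of Section 3: A is block diagonal
    # with one Jordan block per pole (here with ones on the superdiagonal, one
    # possible Jordan convention), B stacks unit vectors whose last entry is 1,
    # and C concatenates the norming constants.
    # bound_states: list of (lam_j, [c_{j,0}, ..., c_{j,m_j-1}]) with Im(lam_j) > 0.
    blocks, Bs, Cs = [], [], []
    for lam, cs in bound_states:
        m = len(cs)
        blocks.append(lam * np.eye(m) + np.diag(np.ones(m - 1), 1))
        b = np.zeros((m, 1), dtype=complex)
        b[-1, 0] = 1.0
        Bs.append(b)
        Cs.append(np.asarray(cs, dtype=complex))
    A = block_diag(*blocks)
    B = np.vstack(Bs)
    C = np.concatenate(Cs).reshape(1, -1)
    return A, B, C

def omega_bound(y, A, B, C):
    # Bound-state part C e^{iAy} B of the kernel, as in the substitution (4.39);
    # it decays as y -> +infinity because the eigenvalues of A lie in C+.
    return (C @ expm(1j * A * y) @ B)[0, 0]

# Invented example: a simple pole at 2i and a double pole at 1 + i.
A, B, C = triplet([(2j, [1.0]), (1 + 1j, [0.5, -0.3])])
print(A.shape, omega_bound(1.0, A, B, C))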
Certainly, for linear systems where the potentials appear in the diagonal blocks of the coefficient matrix rather than in the off-diagonal blocks, we only use one matrix triplet (A, B, C), and in that case (4.39) still holds with the understanding that the second matrix triplet (Ā, B̄, C̄) is absent. We remark that (4.39) is elegant for several reasons. When there is only one simple bound state, the eigenvalue of the matrix A becomes the same as the matrix itself. In that sense, there is an apparent correspondence between the factor e^{iλy} in (4.2) and e^{iAy} in (4.39), induced by λ ↔ A. The same is also true for the correspondence between the factor e^{−iλy} in (4.2) and e^{−iĀy} in (4.39), induced by λ ↔ Ā. The information containing any number of bound states with any multiplicities and with the corresponding bound-state norming constants is all embedded in (4.39) through the structure of the two matrix triplets there.

Proof. As indicated in the proof of Theorem 4.1, the quantity LHS in (4.14) is no longer the 2 × 2 zero matrix when the bound states are present. When x < y, the integrands in (4.16) and (4.18) are continuous in λ ∈ R, are O(1/λ) as λ → ∞ in C⁺, and are meromorphic in λ ∈ C⁺ with poles at λ = λ_j of multiplicity m_j for 1 ≤ j ≤ N, where those poles are the bound-state poles of T(ζ). Hence, when x < y those integrals can be evaluated by using the residue theorem. The resulting expressions contain the residues t_{jk} appearing in (3.1) and d^k φ(ζ_j, x)/dλ^k for 1 ≤ j ≤ N and 0 ≤ k ≤ (m_j − 1). Using (3.3) in the resulting expressions, we express those integrals in terms of the residues t_{jk} and the dependency constants γ_{jk} appearing in (3.3). In a similar manner, when x < y the integrands in (4.17) and (4.19) are continuous in λ ∈ R, are O(1/λ) as λ → ∞ in C⁻, and are meromorphic in λ ∈ C⁻ with poles at λ = λ̄_j of multiplicity m̄_j for 1 ≤ j ≤ N̄, where those poles are the bound-state poles of T̄(ζ). Thus, when x < y those integrals can be evaluated by using the residue theorem. The resulting expressions contain the residues t̄_{jk} appearing in (3.2) and d^k φ̄(ζ̄_j, x)/dλ^k for 1 ≤ j ≤ N̄ and 0 ≤ k ≤ (m̄_j − 1). Using (3.5) in the resulting expressions, we express those integrals in terms of the residues t̄_{jk} and the dependency constants γ̄_{jk} appearing in (3.5). We omit the details because the procedure is similar to that given in the proof of Theorem 15 of [9]. The only effect of the contribution from LHS to (4.12) amounts to the substitution specified in (4.39). Hence, with the help of (4.1), (4.37), and (4.38) we obtain (4.40), where the norming constants c_{jk} are explicitly expressed in terms of t_{jk}, γ_{jk}, and ζ_j, and the norming constants c̄_{jk} are explicitly expressed in terms of t̄_{jk}, γ̄_{jk}, and ζ̄_j.

Let us remark that the 2 × 2 matrix-valued coupled Marchenko system presented in (4.40) can readily be uncoupled; it is equivalent to the respective uncoupled scalar Marchenko integral equations for K_1(x, y) and K̄_2(x, y) given in (4.41) for x < y, with the auxiliary equations given in (4.42), also for x < y. Having established the Marchenko system for (1.1), our goal now is to recover the potentials q and r in (1.1) from the solution K(x, y) to the Marchenko system (4.40) or from the equivalent system of uncoupled equations given in (4.41) and (4.42). In preparation for this, in the next proposition we evaluate K(x, x) and K̄(x, x) from K(x, y) and K̄(x, y) by letting y → x⁺.

Proposition 4.3. Assume that the potentials q and r appearing in (1.1) belong to the Schwartz class.
Let K(x, y) be the solution to the Marchenko system (4.40), with the components K 1 (x, y), K 2 (x, y), K̄ 1 (x, y), K̄ 2 (x, y) as in (4.13). In the limit y → x + we obtain (4.43)-(4.46), where E(x), µ, and σ(x) are the quantities defined in (2.18), (2.20), and (2.37), respectively. Proof. Let us recall that ζ and λ are related to each other as in (2.11). We obtain the proof by establishing the large λ-asymptotics of the Jost solutions ψ(ζ, x) and ψ̄(ζ, x) expressed in terms of the Fourier transforms given in (4.24)-(4.27). The large ζ-asymptotics of ψ 1 (ζ, x) is given in the first component of (2.36), and we use it on the left-hand side of (4.49) and obtain (4.50). By comparing the first-order terms on both sides of (4.50), we get (4.43). We then establish (4.44)-(4.46) by proceeding in a similar manner, i.e. by using integration by parts in (4.25)-(4.27), obtaining the large λ-asymptotics in the resulting expressions with the help of the Riemann-Lebesgue lemma, then using the large ζ-asymptotics from (2.36) and (2.38) in the resulting equalities, and finally comparing the first-order terms in the corresponding asymptotic expressions. In the next theorem we show how to recover the relevant quantities for (1.1), including the potentials and the Jost solutions, from the solution to the corresponding Marchenko system (4.40); its parts (a)-(d) recover E(x), µ, the potentials q(x) and r(x), and the Jost solutions through the formulas (4.51)-(4.59). Proof. From (4.44) and (4.45), we see that the auxiliary scalar quantity Q(x) defined in (4.52) is related to the potentials q and r as in (4.60). Hence, from (2.18) and (4.60) we see that E(x) is recovered as in (4.51), which completes the proof of (a). Similarly, from (2.20) and (4.60) we observe that µ is recovered as in (4.53), and therefore the proof of (b) is also completed. Let us now prove (c). Having obtained E(x) and µ, we see that we can recover q(x) with the help of (4.43). Thus, using (4.51) and (4.53) in (4.43) we recover q(x) as in (4.54). Similarly, having E(x) and µ already recovered, we see that we can obtain r(x) from (4.46). Therefore, using (4.51) and (4.53) in (4.46) we recover r(x) as in (4.55). Let us now prove (d). Having E(x) and µ at hand, we use (2.11), (4.51), and (4.53) in (4.24)-(4.27), respectively, and get (4.56)-(4.59). Hence, the whole proof is complete. As in any inverse problem, the inverse problem for (1.1) has four aspects: existence, uniqueness, reconstruction, and characterization. The existence deals with the question whether there exists at least one pair of potentials q(x) and r(x) in some class corresponding to a given set of scattering data in a particular class. Once the existence problem is solved, the uniqueness deals with the question whether there is only one pair of potentials for that scattering data set or there are more such pairs. The reconstruction is concerned with the recovery of the potentials from the scattering data set. Finally, the characterization deals with the specification of the class of potentials and the class of scattering data sets so that there is a one-to-one correspondence between the elements of the class of potentials and the class of scattering data sets. It is clear that in this paper we only deal with the reconstruction aspect of the inverse problem for (1.1). The remaining three aspects are challenging and need to be investigated. Since the linear differential operator related to (1.1) is not self-adjoint, the analysis of the inverse problem for (1.1) is naturally complicated.
We anticipate that the development of the Marchenko method in this paper will provide a motivation for the scientific community to analyze the other three aspects of the corresponding inverse problem.
Solution formulas with reflectionless scattering data
In this section we provide the solution to the Marchenko system (4.40) when the reflection coefficients in the input scattering data set are zero. Using the results of Section 4, we then obtain the corresponding potentials and Jost solutions explicitly expressed in terms of the matrix triplets (A, B, C) and (Ā, B̄, C̄) with the triplet sizes N and N̄, respectively. We recall that N and N̄ are the integers appearing in (3.13) and (3.15), respectively. Thus, with R(ζ) ≡ 0 and R̄(ζ) ≡ 0, from (4.37) and (4.38) we get Ω(y) = C e iAy B, Ω̄(y) = C̄ e −iĀy B̄, (5.1) With the input from (5.1) and (5.2), the Marchenko system (4.40) or the equivalent uncoupled Marchenko system given in (4.41) and (4.42) is explicitly solvable by the methods of linear algebra because the corresponding integral kernels are separable. Consequently, we obtain the closed-form formulas for the potentials and Jost solutions for (1.1) corresponding to all reflectionless scattering data, where the formulas are explicitly expressed in terms of the two matrix triplets. We present the relevant formulas when the matrix triplet sizes N and N̄ are arbitrary. We then prove that, if the potentials q and r in (1.1) belong to the Schwartz class, in the reflectionless case we must have N = N̄. In the next theorem we present the solution to the Marchenko system with the input from (5.1) and (5.2), which are uniquely determined by the matrix triplets (A, B, C) and (Ā, B̄, C̄). Theorem 5.1. When the scattering data set in (5.1) is used as input, the Marchenko system (4.40) corresponding to (1.1) has the solution expressed in closed form in (5.3)-(5.6), with I denoting an identity matrix whose size is not necessarily the same in different appearances. Proof. Since the Marchenko system (4.40) is equivalent to the uncoupled system given in (4.41) and (4.42), we use (5.1) and (5.2) as input to that uncoupled system. The first line of (4.41) then yields an equation whose solution has the form (5.10), with H 1 (x) satisfying (5.11). The matrix in the brackets in (5.11) is equal to Γ̄(x) defined in (5.8), and this can be seen by observing (5.12) and (5.13), where M and M̄ are the constant matrices defined in (5.9). When the eigenvalues of A are located in C + and the eigenvalues of Ā are in C − , we see that the two integrals in (5.9) are well defined. From (5.9) we also see that the matrices M and M̄ can alternatively be obtained from the matrix triplets (A, B, C) and (Ā, B̄, C̄) by solving the respective linear systems given in (5.14). Hence, using (5.14) in (5.10) we get (5.3). We obtain (5.4) in a similar manner, by using (5.1) and (5.2) as input in the second line of (4.41). We then have an integral equation whose kernel contains the term ∫ x ∞ ds K̄ 2 (x, z) C̄ Ā e −iĀz−iĀs B̄ C e iAs+iAy B, and whose solution has the form K̄ 2 (x, y) = H 2 (x) e iAy B, (5.15) with H 2 (x) satisfying (5.16). With the help of (5.12) and (5.13) we observe that the matrix in the brackets in (5.16) is equal to the matrix Γ(x) defined in (5.7), and hence (5.16) yields (5.17). Using (5.17) in (5.15) we obtain (5.4). Finally, using (5.3) and (5.4) as input to (4.42), with the help of (5.9) we get (5.5) and (5.6).
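Everything in Theorem 5.1 reduces to finite-dimensional linear algebra once M and M̄ are known. The display (5.9) is not reproduced above, so the sketch below assumes the common triplet-pair form M = ∫ 0 ∞ e iAs B C̄ e −iĀs ds (and its barred analogue); under the stated eigenvalue conditions the integral converges and, after integration by parts, satisfies the Sylvester equation AM − MĀ = iB C̄. That is one plausible concrete reading of the linear systems mentioned in connection with (5.14). All triplet values below are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm, solve_sylvester
from scipy.integrate import trapezoid

# Illustrative triplet pair (NOT from the paper): eig(A) must lie in C+
# and eig(Abar) in C- so that the integrals defining M, Mbar converge.
A, B, C = np.array([[2j]]), np.array([[1.0]]), np.array([[1.0 + 0.5j]])
Abar, Bbar, Cbar = np.array([[-1j]]), np.array([[1.0]]), np.array([[0.3 - 0.2j]])

# Assumed form: M = int_0^inf e^{iAs} B Cbar e^{-i Abar s} ds.
# Integration by parts turns this into the Sylvester equations
#   A M - M Abar = i B Cbar   and   Abar Mbar - Mbar A = -i Bbar C,
# which scipy solves in the convention  a X + X b = q.
M = solve_sylvester(A, -Abar, 1j * (B @ Cbar))
Mbar = solve_sylvester(Abar, -A, -1j * (Bbar @ C))

# Sanity check of M against brute-force quadrature of the assumed integral.
s = np.linspace(0.0, 40.0, 20001)
vals = np.array([expm(1j * A * t) @ B @ Cbar @ expm(-1j * Abar * t) for t in s])
M_quad = trapezoid(vals, x=s, axis=0)
print(np.allclose(M, M_quad, atol=1e-3))   # True
```

With M and M̄ in hand, Γ(x) and Γ̄(x) from (5.7)-(5.8), and hence the closed-form kernels (5.3)-(5.6), are obtained from matrix exponentials of A and Ā alone.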
For the reflectionless scattering data set specified in (5.1), in Theorem 5.1 we have determined the corresponding solution to the Marchenko system (4.40), in Theorem 5.2 we have provided the corresponding potentials, and in Theorem 5.3 we have obtained the corresponding Jost solutions. In the next theorem, for that same data set we express the corresponding value of the constant µ defined in (2.20). (a) The value of µ is given in (5.35), and hence the value of e iµ/2 is determined by the matrix triplet pair as in (5.36), where, as seen from (5.22) and (5.23), the quantities g 1 (z) and g 2 (z) are explicitly determined by our matrix triplet pair with the help of (5.7)-(5.9). (b) The transmission coefficients T (ζ) and T̄ (ζ) corresponding to the reflectionless scattering data set specified in (5.1) are explicitly determined by the matrix triplet pair (A, B, C) and (Ā, B̄, C̄) as in (5.37) and (5.38), where, as seen from (5.29) and (5.30), the quantities g 4 (ζ, x) and g 5 (ζ, x) are explicitly determined by our pair of matrix triplets with the help of (5.7)-(5.9). Proof. We obtain (5.35) directly from (4.53) and the last equality in (5.24). Then, (5.36) is a direct consequence of (5.35). Alternatively, as seen from the second equality in (2.19), we get (5.36) from (5.18) by letting x → +∞ there. Hence, the proof of (a) is complete. Note that (5.37) follows from the second component of (2.5) with the help of (5.21) and (5.26). Similarly, (5.38) is obtained by using the first component of (2.6) with the help of (5.21) and (5.27). As indicated at the end of Section 4, in this paper we only deal with the reconstruction aspect of the inverse problem for (1.1). Hence, the results presented in this section should be interpreted in the sense of the reconstruction. The potentials and the corresponding Jost solutions are reconstructed explicitly in Theorems 5.2 and 5.3, respectively, from their reflectionless scattering data expressed in terms of a pair of matrix triplets. When the potentials q and r belong to the Schwartz class, there are additional restrictions on the two matrix triplets used in Theorem 5.2. As seen from (5.19) and (5.20), those restrictions amount to the following: The determinants of the matrices Γ(x) and Γ̄(x) defined in (5.7) and (5.8) should not vanish for any x ∈ R, and the exponential terms in (5.19) and (5.20) should not cause an exponential increase and in fact should not yield a nonzero asymptotic value as x → ±∞. In the next section we will illustrate this issue with some explicit examples. When the potentials q and r in (1.1) belong to the Schwartz class, in the reflectionless case we present an important restriction on the number of bound states for (1.1), and we now elaborate on this issue. Recall that the nonnegative integer N defined in (3.13) corresponds to the number of bound states, including the multiplicities, associated with the bound-state poles of the transmission coefficient T (ζ) in the first quadrant in the complex ζ-plane. Similarly, the nonnegative integer N̄ defined in (3.15) corresponds to the number of bound states, including the multiplicities, associated with the bound-state poles of the transmission coefficient T̄ (ζ) in the second quadrant in the complex ζ-plane. In general, N and N̄ do not have to be equal to each other. However, in the reflectionless case, when q and r belong to the Schwartz class, we will prove that we must have N = N̄. In fact, we will prove that this is also true for the AKNS system (1.7), i.e.
when the potentials u and v belong to the Schwartz class, in the reflectionless case the number of bound-state poles, including the multiplicities, of the transmission coefficient T (u,v) (λ) in C + must be equal to the number of bound-state poles, including the multiplicities, of the transmission coefficient T̄ (u,v) (λ) in C − . Thus, in the explicit solution formulas presented in Theorems 5.1-5.4, unless we choose the sizes of the matrices A and Ā equal to each other, the corresponding potentials q and r cannot both belong to the Schwartz class. We will illustrate this in Example 6.7 in the next section. The next theorem indicates the restriction N = N̄ in the reflectionless case when the potentials in (1.1) belong to the Schwartz class. Theorem 5.5. Let the potentials q and r in (1.1) belong to the Schwartz class, and assume that the corresponding reflection coefficients R(ζ) and R̄(ζ) appearing in (2.7) and (2.8), respectively, are zero. Then, we have the following: (a) N = N̄; (b) the matrix triplets (A, B, C) and (Ā, B̄, C̄) have equal sizes. Proof. Recall that the spectral parameter ζ is related to the parameter λ as in (2.11). Based on the bound-state information provided in (3.7), we know that the transmission coefficient T (ζ) appearing in (2.5) is a meromorphic function of λ in C + with the poles at λ = λ j , each with multiplicity m j for 1 ≤ j ≤ N. Using Theorem 2.5 we conclude that the quantity 1/T (ζ) is analytic in λ ∈ C + , is continuous in λ ∈ C̄ + , vanishes only at λ = λ j for 1 ≤ j ≤ N, and has the large λ-asymptotics described in (2.39). Similarly, from Theorem 2.5 we conclude that the transmission coefficient T̄ (ζ) appearing in (2.6) is analytic in λ ∈ C − , is continuous in λ ∈ C̄ − , vanishes only at λ = λ̄ k for 1 ≤ k ≤ N̄, and has the large λ-asymptotics described in (2.40). Let us write T (ζ) and T̄ (ζ), respectively, as in (5.40) and (5.41), where 1/T 0 (ζ) is analytic in λ ∈ C + , is continuous in λ ∈ C̄ + , does not vanish in C̄ + , and has the large λ-asymptotics given in (5.42), and 1/T̄ 0 (ζ) is analytic in λ ∈ C − , is continuous in λ ∈ C̄ − , does not vanish in C̄ − , and has the large λ-asymptotics given in (5.43). Note that we use an asterisk to denote complex conjugation. It is known [2,18] that the scattering coefficients for the AKNS system (1.7) satisfy (5.44). Using the first equalities of (2.43)-(2.46) in (5.44) we obtain (5.45), and hence, in the reflectionless case, from (5.45) we get (5.46), where we recall that T (ζ) and T̄ (ζ) each contain ζ as ζ 2 and thus (5.46) is valid for λ ∈ R. Using (5.40) and (5.41) in (5.46) we obtain (5.47). Let us rewrite (5.47) so that the left-hand side is analytic in λ ∈ C + and the right-hand side is analytic in λ ∈ C − . For λ ∈ R, we then get (5.48). We must have either N ≥ N̄ or N ≤ N̄. We will prove that either of those two inequalities can hold only in the case of an equality. The proof for the former is as follows. When N ≥ N̄, with the help of (5.42) we conclude that the left-hand side of (5.48) has an extension from λ ∈ R to C + in such a way that that extension is analytic in λ ∈ C + , continuous in λ ∈ C̄ + , and asymptotic to a monic polynomial P (λ) of degree N − N̄ as λ → ∞ in C̄ + . Similarly, with the help of (5.43) we conclude that the right-hand side of (5.48) is analytic in λ ∈ C − , continuous in λ ∈ C̄ − , and asymptotic to P (λ) as λ → ∞ in C̄ − . Thus, both sides of (5.48) must have an analytic extension to the entire complex λ-plane and be equal to a monic polynomial of degree N − N̄, i.e. we must have (5.49). From (5.46) we see that neither T (ζ) nor T̄ (ζ) can have any zeros or any poles when λ ∈ R.
Hence, from (5.41) we can conclude that T̄ 0 (ζ) does not have any poles when λ ∈ R. Consequently, from (5.50) we conclude that P (λ) cannot have any zeros when λ ∈ R. From the right-hand side of (5.50), we also see that P (λ) cannot have any zeros when λ ∈ C − , and as a result any zero of P (λ) can only occur when λ ∈ C + . Consequently, any pole of 1/P (λ) can only occur when λ ∈ C + . Let us write (5.49) as (5.51). From the right-hand side of (5.51) we see that 1/P (λ) cannot have any poles when λ ∈ C + . Therefore, we conclude that the monic polynomial P (λ) cannot have any zeros at all when λ ∈ C. Hence, we must have P (λ) ≡ 1, which yields N = N̄. A similar argument shows that the case N ≤ N̄ can occur only when N = N̄. Thus, the proof of (a) is complete. The proof of (b) is a direct consequence of (a) because the matrix triplet (A, B, C) has size N and the matrix triplet (Ā, B̄, C̄) has size N̄. Let us remark that the result presented in Theorem 5.5 for (1.1) holds also for the AKNS system given in (1.7). Next, we present that result as a corollary because its proof follows by essentially repeating the proof given for Theorem 5.5. Corollary 5.6. Let the potentials u and v in the AKNS system (1.7) belong to the Schwartz class. Let us also assume that the corresponding reflection coefficients R (u,v) (λ) and R̄ (u,v) (λ) are zero. Then, the number of bound-state poles, including the multiplicities, of the transmission coefficient T (u,v) (λ) in C + must be equal to the number of bound-state poles, including the multiplicities, of the transmission coefficient T̄ (u,v) (λ) in C − .
Explicit examples
In this section we elaborate on the results from the previous sections with some illustrative and explicit examples. As indicated in Section 3, for the linear system (1.1) one can construct the norming constants c jk appearing in (3.7) explicitly in terms of the set of residues {t jk : 1 ≤ k ≤ m j } and the dependency constants {γ jk : 0 ≤ k ≤ m j − 1}. Similarly, one can construct the norming constants c̄ jk appearing in (3.7) explicitly in terms of the set of residues {t̄ jk : 1 ≤ k ≤ m̄ j } and the dependency constants {γ̄ jk : 0 ≤ k ≤ m̄ j − 1}. In the first two examples, we illustrate that construction and observe that, especially in the case of bound states with multiplicities, it is cumbersome to deal with the individual norming constants, and it is better to use the bound-state information not in the form given in (3.7) but rather in the form of the matrix triplet pair (A, B, C) and (Ā, B̄, C̄). The first example considers the norming constants for simple bound states. Example 6.1. Consider the linear system (1.1) with the potentials q and r in the Schwartz class. We elaborate on step (d) appearing in the beginning of Section 3. If the bound state at λ = λ j is simple, then we have m j = 1 and hence there is only one norming constant c j0 . By proceeding as in [9] we obtain c j0 explicitly, where ζ j is the complex number in the first quadrant in C for which we have λ j = ζ j ², the complex constant t j1 corresponds to the residue in (3.1) in the expansion of the transmission coefficient T (ζ), and γ j0 is the dependency constant appearing in (3.3), i.e. φ(ζ j , x) = γ j0 ψ(ζ j , x), with ψ(ζ, x) and φ(ζ, x) being the Jost solutions appearing in (2.1) and (2.3), respectively.
If the bound state at λ = λ̄ j is simple, we have m̄ j = 1 and hence there is only one norming constant c̄ j0 , which is expressed analogously, where ζ̄ j is the complex number in the fourth quadrant in C for which we have λ̄ j = ζ̄ j ², the complex constant t̄ j1 corresponds to the residue in (3.2) in the expansion of the transmission coefficient T̄ (ζ), and γ̄ j0 is the dependency constant appearing in (3.5), i.e. φ̄(ζ̄ j , x) = γ̄ j0 ψ̄(ζ̄ j , x). The next example considers the norming constants for bound states with multiplicities. Example 6.2. We consider the linear system (1.1) with the potentials q and r in the Schwartz class, and we elaborate on step (d) appearing in the beginning of Section 3. If the bound state at λ = λ j is double, we have m j = 2 and there are only two norming constants c j0 and c j1 , which are expressed in terms of the residues t j1 and t j2 and the dependency constants γ j0 and γ j1 as in (6.4), where we recall that ζ j is the complex constant in the first quadrant in C for which we have λ j = ζ j ². If the bound state at λ = λ̄ j is double, we have m̄ j = 2 and there are only two norming constants c̄ j0 and c̄ j1 , which are obtained from (6.4) by using the transformations given in (6.3). For a triple bound state at λ = λ j , we have m j = 3 and the three norming constants are expressed in terms of the residues t j1 , t j2 , t j3 and the dependency constants γ j0 , γ j1 , γ j2 as in (6.5). For a bound state at λ = λ̄ j of multiplicity three, we can obtain the norming constants c̄ j0 , c̄ j1 , c̄ j2 by using the transformations in (6.3) on (6.5). For bound states with higher multiplicities, the norming constants can be explicitly constructed by using the corresponding residues and the dependency constants. However, as already mentioned, the use of the matrix triplet pair (A, B, C) and (Ā, B̄, C̄) is the simplest and most elegant way to represent the bound-state information without having to deal with any cumbersome formulas involving the individual norming constants. The formulas presented in Theorems 5.1, 5.2, and 5.3 express all the relevant quantities in a compact form with the help of matrix exponentials. We have prepared a Mathematica notebook using the matrix triplets (A, B, C) and (Ā, B̄, C̄) as input and evaluating all the relevant quantities by unpacking the matrix exponentials and displaying all those relevant quantities in terms of elementary functions. In particular, our Mathematica notebook provides, in terms of elementary functions, the solution to the Marchenko system as indicated in Theorem 5.1, the potentials q and r given in Theorem 5.2, the Jost solutions given in Theorem 5.3, and the corresponding auxiliary quantities E(x) and µ given in (5.18) and (5.35), respectively. It also verifies that (1.1) is satisfied when those expressions for the potentials and the Jost solutions are used in (1.1). As the matrix sizes in the triplets get large, contrary to the compact expressions involving the matrix exponentials, the equivalent expressions presented in terms of elementary functions become lengthy. In the next example, we illustrate Theorems 5.2 and 5.3 by using a pair of matrix triplets corresponding to two simple bound states. In this example, we obtain the constant µ defined in (2.20) and the transmission coefficients explicitly, where we recall that λ = ζ². As seen from (6.23), T (ζ) has a double pole at λ = i and T̄ (ζ) has two simple poles at λ = −i and λ = −2i, respectively. Hence, we have one bound state of multiplicity two and two simple bound states.
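The double pole just mentioned corresponds to a 2 × 2 Jordan block in the matrix A, and unpacking the matrix exponential is exactly what turns the compact triplet formulas into elementary functions. The sketch below is a Python stand-in for the Mathematica unpacking described above; the vectors B and C are illustrative choices, not the values used in the example. It verifies the elementary form for a multiplicity-two bound state at λ 1 = i.

```python
import numpy as np
from scipy.linalg import expm

# One bound state of multiplicity two at lambda_1 = i: a 2x2 Jordan block.
lam1 = 1j
A = np.array([[lam1, 1.0],
              [0.0,  lam1]])
B = np.array([[0.0], [1.0]])           # illustrative
C = np.array([[0.7 - 0.1j, 0.4]])      # illustrative

def omega(y):
    """Kernel contribution Omega(y) = C e^{iAy} B built from the triplet."""
    return (C @ expm(1j * A * y) @ B)[0, 0]

# Writing A = lam1*I + Nmat with Nmat nilpotent gives
# e^{iAy} = e^{i lam1 y} (I + i y Nmat), so Omega(y) = (c0 + c1*y) e^{i lam1 y}.
Nmat = A - lam1 * np.eye(2)
c0 = (C @ B)[0, 0]
c1 = 1j * (C @ Nmat @ B)[0, 0]
for y in (0.3, 1.0, 2.5):
    print(np.isclose(omega(y), (c0 + c1 * y) * np.exp(1j * lam1 * y)))  # True
```

Since Im λ 1 > 0, the factor e iλ 1 y tames the linear growth coming from the Jordan block, so the resulting Marchenko kernels remain integrable; higher multiplicities produce higher-degree polynomial factors in the same way.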
In Figure 4 we present the plots of the absolute values of the potentials q and r given in (6.20) and (6.21), respectively. In Figure 5 we present the plots of the absolute values of the potentials q and r given in (6.23) and (6.24), respectively. From (6.23) we observe that q belongs to the Schwartz class. On the other hand, from the graph in Figure 5 it is clear that r cannot belong to the Schwartz class because |r(x)| becomes unbounded as x → −∞. In this example, as x → −∞ we have
2022-03-08T06:47:31.404Z
2022-03-05T00:00:00.000
{ "year": 2022, "sha1": "6ce5f2071af0e3e3b4737f8d7657b431bb60b31b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "7f8d79c24d488f5190e387f671670a1023e17404", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
215779846
pes2o/s2orc
v3-fos-license
The Smartphone Brain Scanner: A Portable Real-Time Neuroimaging System Combining low-cost wireless EEG sensors with smartphones offers novel opportunities for mobile brain imaging in an everyday context. Here we present the technical details and validation of a framework for building multi-platform, portable EEG applications with real-time 3D source reconstruction. The system – Smartphone Brain Scanner – combines an off-the-shelf neuroheadset or EEG cap with a smartphone or tablet, and as such represents the first fully portable system for real-time 3D EEG imaging. We discuss the benefits and challenges, including technical limitations as well as details of real-time reconstruction of 3D images of brain activity. We present examples of brain activity captured in a simple experiment involving imagined finger tapping, which shows that the acquired signal in a relevant brain region is similar to that obtained with standard EEG lab equipment. Although the quality of the signal in a mobile solution using an off-the-shelf consumer neuroheadset is lower than the signal obtained using high-density standard EEG equipment, we propose that mobile application development may offset the disadvantages and provide completely new opportunities for neuroimaging in natural settings.
Introduction
In the last few years, the research communities that study human behavior have gained access to unprecedented computational and sensing power that basically "fits into a pocket". This has happened both for specialized equipment used for building research tools, such as Reality Mining Badges [1] or accelerometer sensors [2], and for consumer-grade, off-the-shelf devices. Smartphones and tablets are capable of sensing, processing, transmitting, and presenting information. This has already had a significant impact on many research domains, for example social science [3], human-computer interaction [4], or mobile sensing [5,6]. In neuroscience there is a widely recognized need for mobility, i.e., for devices that support quantitative measurements in natural settings [7][8][9]. Here we present our work on the Smartphone Brain Scanner, investigating the feasibility of off-the-shelf, consumer-grade equipment in a neuroscience context and building a mobile real-time platform for stimulus delivery, data acquisition, and processing with a focus on real-time imaging of brain activity.
Consumer-grade neuroheadsets, capable of recording the brain activity generated by post-synaptic potentials of firing neurons, captured through electrodes placed on the scalp using electroencephalography (EEG), have only recently made mobile brain monitoring feasible. Seen from a mental state decoding perspective, even a single-channel EEG recording measuring the changes in electrical potentials, based on a passive dry electrode positioned at the forehead and a reference typically placed on the earlobe, allows for measuring mental concentration and drowsiness by assessing the relative distribution of frequencies in brain wave patterns throughout the day. Or, simply measuring the dynamic variability of brain wave frequency components in a mobile scenario may be translated into neural signatures, e.g. reflecting whether a user is on the phone while driving a car [10]. Similarly, positioning a single-electrode EEG headband over the temple may provide the foundation for a brain-computer interface (BCI) utilizing the ability to capture steady state visual evoked potentials (SSVEP) from the visual cortex when looking at flashing light patterns; such a BCI can predict, with high accuracy and no previous training, which specific area of a screen a disabled user is focusing on, based on the time-locked EEG traces automatically generated as multiples of the particular flashing light frequencies [11]. As an example of the underlying technology used in several consumer products, the ThinkGear module manufactured by NeuroSky integrates a single dry electrode, reference, and ground, attached to a headband. Essentially a system on a chip, it provides A/D conversion and amplification of one EEG channel, capable of capturing brain wave patterns in the 3-100 Hz frequency range, recorded at a 512Hz sampling rate. Consumer neuroheadsets such as those manufactured by Emotiv provide low-density neuroimaging based on 16 electrodes and typically support real-time signal processing in order to complement standard EEG measures with aggregate signals that provide additional information on changes in mental state, or facilitate control of peripheral devices related to games. Their portability and built-in wireless transmission make them suitable for the development of fully mobile systems that allow for running EEG experiments in natural settings. The improved comfort of these mobile solutions also allows for extending neuroimaging experiments over several hours. Furthermore, the relatively low cost of the neuroheadsets and mobile devices potentially opens new opportunities for conducting novel types of social neuroscience experiments, where multiple subjects are monitored while they interact [12,13].
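A "relative distribution of frequencies" of the kind used above for concentration and drowsiness estimates is usually computed as relative band power from the power spectral density. Below is a minimal sketch for a single-channel recording; the 512Hz rate matches the ThinkGear module described above, while the synthetic signal and the exact band limits are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 512                            # single-channel sampling rate, as above
t = np.arange(0, 10, 1 / fs)        # 10 s of synthetic data (stand-in for EEG)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

# Power spectral density via Welch's method with 2-second windows.
f, psd = welch(eeg, fs=fs, nperseg=2 * fs)

def band_power(f_lo, f_hi):
    mask = (f >= f_lo) & (f < f_hi)
    return trapezoid(psd[mask], f[mask])

total = band_power(3, 100)          # the module's quoted 3-100 Hz range
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
print({name: band_power(lo, hi) / total for name, (lo, hi) in bands.items()})
# The 10 Hz component makes the relative alpha power dominate here.
```

Concentration and drowsiness indices are then typically ratios of such relative powers tracked over time; the exact features used by commercial headsets are proprietary, so this is only the generic textbook computation.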
However, such 'low-fi' mobile systems present a number of challenges. In real-time applications, requiring signal processing to be performed with the lowest possible delay in order to present feedback to the user, the limited computational power of mobile devices may be a constraint. A solution might be to offload parts of the processing to an external server and retrieve the processed results over the network. Consumer-grade mobile devices also present technical challenges for writing high-quality software: the devices operate on non-real-time operating systems ill-suited for time-sensitive tasks. These limitations might also affect the timing of visual or auditory stimuli presentation, as well as synchronization with other sensors. From a neuroscience perspective, both the low-resolution recordings and the artifacts induced in a mobile setup present significant challenges. Noise and confounds are introduced by movement of the subject and electrical discharges, while the positioning of the electrodes might be less than ideal when compared to a standard EEG setup [14,15]. Nevertheless, we hold that these drawbacks are clearly offset by the advantages of being able to conduct studies incorporating larger groups of subjects over extended periods of time in more natural settings. We suggest that mobile EEG systems can be considered from two viewpoints: as stand-alone portable low-fi neuroimaging solutions, or alternatively as an add-on for retrieving neuroimaging data under natural conditions, complementary to standard neuroimaging lab environments. In terms of software programming, creating a framework for applications in C++ rather than in prevalent environments such as MATLAB, while approaching the problem as a smartphone sensing challenge, might enable new types of contributions to neuroscience. The Human Computer Interaction (HCI) community is already starting to apply consumer-grade headsets to extend existing paradigms [16], thus incorporating neuroscience as a means to enhance data processing. Similarly, the availability of low-cost equipment means that even general audiences of hackers and tinkerers gain interest in using neuroscience tools. We see great value in the emerging potential of entirely new groups of researchers and developers getting interested in neuroscience and obtaining tools allowing them to develop new kinds of applications.
Related Work
Our real-time imaging EEG setup mediates between two hitherto disparate fields in sensorics, being on one hand a down-sized neuroimaging device and on the other hand a sophisticated smartphone sensor system for cognitive monitoring in natural conditions. We therefore briefly review the state of the art in both domains.
Neuroimaging
Several software packages for off-line and on-line analysis of biomedical and EEG signals are available. The most popular packages for off-line analysis are EEGLAB and FieldTrip; for building real-time BCI-oriented applications, notable frameworks are BCILAB, OpenViBE, and BCI2000.
EEGLAB is a toolbox for the MATLAB environment for processing collections of single-trial or averaged EEG data [17]. Functions available in this framework include data importing, preprocessing (artifact rejection, filtering), independent component analysis (ICA), and others. The framework can be used via a graphical interface or by directly manipulating MATLAB functions. The toolbox is available as open source (GNU license) and can be extended to incorporate various EEG data formats coming from different hardware. Similarly, FieldTrip is an open source (GNU license) MATLAB toolbox for the analysis of MEG, EEG, and other electrophysiological data [18]. Among others, FieldTrip has pioneered high-quality source reconstruction methods for EEG imaging. FieldTrip has support for real-time processing of data based on a buffer construction that allows chunking of data for further processing in the MATLAB environment. BCILAB is a toolbox for building online brain-computer interface (BCI) models from available data [19]. It is a plugin for EEGLAB running in MATLAB, providing functionalities for the design, learning, use, and evaluation of real-time predictive models. BCILAB is focused on operating in real time for detecting and classifying cognitive state. The classifier output from BCILAB can be streamed to a real-time application to effect stimulus or prosthetic control, or may be derived post-hoc from recorded data. The framework is extensible in various layers: additional EEG hardware as well as data processing steps (e.g. filters and classifiers) can be added. But as these toolboxes are developed within the MATLAB environment, neither FieldTrip's real-time buffer nor BCILAB is suitable for mobile application development. OpenViBE is a software framework for designing, testing, and using brain-computer interfaces [20]. The main application fields of OpenViBE are medical (i.e. assistive technologies, bio- and neurofeedback) as well as virtual reality multimedia applications. OpenViBE is open source (LGPL 2.1) and targets an audience focused on building real-time applications for the Windows and Linux operating systems; it does not specifically support lightweight mobile platforms. A similar C++ based framework for building real-time BCI applications is BCI2000 [21]. A comprehensive review of the BCI frameworks can be found in [22]. Some of the consumer EEG systems also include Software Development Kits (SDKs) allowing for data acquisition, processing, and building applications. The Emotiv SDK, available with the Research Edition of the Emotiv system, is multi-platform, currently running on Windows and OSX, with Linux support in beta. The SDK allows for building applications using either raw EEG data or extracted features, including affective state and recognition of facial expressions based on eye movements. The extracted features can be integrated into a C++/C# application through a set of dynamically linked libraries. Although such SDK frameworks can greatly speed up the process of building BCI applications, they are mostly targeted towards scenarios where immediate feedback is available, such as gaming, and it remains a challenge to validate or tweak code for custom needs. To sum up, none of the aforementioned software platforms can easily be adapted to support mobile and embedded devices.
Cognitive Monitoring Systems
Mobile brain imaging might also be viewed as yet another sensor extension to self-tracking applications, which have become prevalent with smartphones and the emergence of low-cost wearable devices lowering the barriers for people to engage in life-logging activities [23]. With the availability of multiple embedded sensors, modern smartphones have become a platform for out-of-the-box data acquisition of mobility (GPS, cellular network, WiFi), activity level (accelerometer), social interaction (Bluetooth, call, and text logs), and environmental context (microphone, camera, light sensor) [3]. Recently, non-invasive recording of brain activity has become common as several low-cost commercial EEG neuroheadset and headband systems have been made available, including, apart from the previously mentioned Emotiv EPOC and NeuroSky, the InteraXon Muse, Axio, and Zeo. These sensors support applications ranging from BCI, game control, stress reduction, and cognitive training to sleep monitoring. These neuroheadsets feature up to 16 electrodes, but ongoing developments promise next-generation low-cost EEG devices with a significantly higher number of electrodes, better quality signals, and improved comfort. The Smartphone Brain Scanner framework described in this paper can be used with mobile EEG devices with various numbers of electrodes to allow for capture of neuroimaging data over several hours. Battery tests on a Samsung Galaxy Note with all wireless radios and the screen turned off resulted in 11 hours of uninterrupted recording and storage of data from an Emotiv EPOC headset. However, in reality, current-generation neuroheadsets are limited by their solution-based electrodes, which dry out, and more comfortable designs [24,25] may be required for continuous mobile neuroimaging throughout the day. Beyond EEG, multiple biosignals and physiological parameters can contribute to cognitive state monitoring, such as respiratory rate [26], heart rate variability, galvanic skin response [27], blood pressure, oxygen saturation, body/skin temperature, ECG, EMG, and body movements [28]. A webcam or a camera embedded in a smartphone allows measurements of heart rate, its variability, and respiratory rate by analyzing the color channels in the video signal [29]. Continuous monitoring of heart rate is enabled by pulse watches and recently by the Basis Band wrist-worn sensor, which allow 24/7 recording under a subset of conditions (non-workout situations), allowing user mobility and measurements in natural conditions. The Q Sensor from Affectiva is an example of a system for monitoring galvanic skin response (GSR) together with accelerometer and temperature data from a wrist-worn device. FitBit is an example of a wearable pedometer, monitoring the number of steps taken, distance traveled, calories burned, and floors climbed.
Methods: The Smartphone Brain Scanner
The Smartphone Brain Scanner (SBS2) is a software platform for building research and end-user oriented multi-platform EEG applications. The focus of the framework is on mobile devices (smartphones, tablets) and on consumer-grade (low-density and low-cost) mobile neurosystems (see Fig. 2). The SBS2 is freely available under the MIT License on GitHub at https://github.com/SmartphoneBrainScanner. The SBS2 framework is divided into three layers: low-level data acquisition, data processing, and applications. The first two layers constitute the core of the system and include common elements used by various applications. An overview of the architecture is shown in Fig. 1.
Key Features
With its focus on mobile devices, SBS2 is a multi-platform framework. The underlying technology, Qt, is an extension of C++ and is currently supported on the main desktop operating systems (Linux, OSX, Windows) as well as on mobile devices (Android, BB10, and partially iOS). We have aimed for a modular framework, allowing for adding and modifying data acquisition and processing blocks. The modules are created as C++ classes and integrate directly with the core of the framework. The framework supports building real-time applications; data can be recorded for subsequent off-line analysis, but most of the implemented data processing blocks aim to provide real-time functionality for working with the EEG signal. The applications developed with SBS2 can be installed on desktop and mobile devices, started by the user in the usual way, and distributed via regular channels, such as repositories and application stores.
Data Acquisition
The Data Acquisition layer is responsible for setting up communication with an EEG device, acquiring the raw data, and forming packets. Three primary objects are used: Sbs2Mounter, Sbs2DataReader, and Sbs2Packet, thereby abstracting all the specificities of the EEG systems (hardware) and of the OS + device running the software (platform). Different embedded devices, even with the same OS, may require specific code for certain low-level functionalities, for example to access the USB port. A more detailed architecture is shown in Figure 3. The EEG hardware is set up by a specialized Sbs2Mounter object. The information about the hardware (e.g. mounting point, serial number) is passed to a Sbs2DataReader object. This object subsequently begins reading the raw data from the hardware. The raw data are passed to a Sbs2Packet object to create a proper encapsulation, setting the values for all the EEG channels and metadata. Once formed, the packet is pushed to the Data Processing layer via a Sbs2Callback object. The Data Acquisition layer of the SBS2 was originally designed to support the Emotiv EPOC headset. It has been extended to support additional hardware by implementing additional classes for the hardware mounter, data reader, and packet creator. For the Emotiv headset, this layer also contains the data decryption module, as the stream coming from the device is encrypted. Mounting the EEG hardware on desktop and embedded devices requires drivers, either standard kernel modules or proprietary drivers created by the vendor. The Emotiv EPOC USB receiver is mounted as /dev/hidraw in Linux (desktop and Android), provided that the device and kernel support USB host mode and have the HIDRAW module enabled. Most desktop Linux flavors have both by default, but currently most Android mobile devices support only USB host mode out-of-the-box. In the current implementation a custom kernel needs to be compiled with the HIDRAW module enabled. Reading the data directly from the /dev/hidraw device requires 'root' privileges, which must be enabled on Android devices to acquire data from the Emotiv EPOC receiver. This is possible for most recent Android devices, e.g. for the Nexus (developer) line of devices. We can expect that the next generation of mobile neuroheadsets will use standardized Bluetooth low-energy protocols and that Android devices will be able to support them by default. This will likely have a significant impact on the adoption of neuroimaging outside lab environments.
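To make the mounter/reader/packet hand-off concrete, here is a minimal sketch of the low-level read loop on Linux. The actual framework is C++/Qt; this Python stand-in, the device path, and the 32-byte report size are all illustrative assumptions, and the decryption step mentioned above is deliberately left out.

```python
import os

DEV = "/dev/hidraw0"     # assumed mount point; reading it requires root, as noted
REPORT_SIZE = 32         # assumed size of one raw HID report from the USB receiver

def read_reports(path=DEV):
    """Yield raw reports, mimicking the Sbs2DataReader -> Sbs2Packet hand-off."""
    fd = os.open(path, os.O_RDONLY)
    try:
        while True:
            raw = os.read(fd, REPORT_SIZE)
            if len(raw) != REPORT_SIZE:
                continue         # skip short reads
            yield raw            # a real reader would decrypt and unpack here
    finally:
        os.close(fd)

# Downstream, each report would be wrapped into a packet object and handed
# to a callback, the equivalent of Sbs2Callback.getData(Sbs2Packet*):
# for raw in read_reports():
#     packet = parse(raw)       # hypothetical unpacking into channel values
#     callback(packet)
```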
Data Processing
Well-formed EEG packet objects are used for data processing. The functionality of this layer is hardware agnostic and depends only on the packet content, i.e. the data for the EEG channels, reflecting a particular sensor configuration, and the sampling frequency. Single packets are dispatched to different processing objects and methods, including recording, filtering, 3D reconstruction, etc. Some operations need to collect data into frames and run asynchronously (in a separate thread), pushing the results back to the callback object once the results are ready. Sbs2Callback is an object implementing the getData(Sbs2Packet*) method, to which single packets are always passed; they can then be dispatched to the Sbs2DataHandler or pushed to the Application layer. Sbs2DataHandler is an object providing methods for data processing, by delegating them to specialized objects, including Sbs2FileHandler and Sbs2Filter. The framework for data processing is extensible and new modules can be added to the core; the data handler prepares the data in a format expected by the processing block (e.g. collecting packets into larger frames) and runs the processing method. The currently implemented blocks allow for a variety of processing operations. The raw EEG data can be recorded, including timestamped events (stimuli onsets, user responses, etc.). Raw packets as well as extracted features and arbitrary values can be streamed over the network, either for data processing or for interconnection between devices, for example for multiplayer gaming. Other methods for data processing, including a filter, FFT, a spatial filter (CSP), and a classifier (LDA), are also implemented and can be used for building processing pipelines.
3D Imaging
The most advanced data processing block of the Smartphone Brain Scanner is the source reconstruction aimed at real-time 3D imaging. Source reconstruction estimates the current sources within the brain that are most likely to have generated the observed EEG signal at the scalp level. As the number of possible source locations far exceeds the number of channels, this is known to be an extremely ill-posed inverse problem. A unique solution is obtained by imposing prior information in correspondence with e.g. anatomical, physiological, or mathematical properties [30][31][32]. The inverse methods implemented in the SBS2 cover Bayesian formulations of the widely used minimum-norm method (MN) [32] and low resolution electromagnetic tomography (LORETA) [33]. The Bayesian formulation used in the SBS2 framework allows adaptation of hyper-parameters to different noise environments in real-time. This is an improvement over previous real-time source reconstruction approaches [34][35][36] that apply heuristics to estimate the parameters involved in the inverse method. The current source reconstruction is based on an assumed forward model matrix, A, connecting scalp sensor signals Y (channel by time) and current sources S (cortical locations by time) [37]: Y = AS + E. (1) The term E accounts for noise not modeled by the linear generative model. When estimating the forward model a number of issues are taken into consideration, such as sensor positions, the geometry of the head model (spherical or 'realistic' geometry), and tissue conductivity values [38][39][40]. With the forward model A given and the linear relation in Eq.
(1), the source generators can be estimated. We assume the noise term to be normally distributed, uncorrelated, and time independent, leading to the probabilistic formulation in which p(S) is the prior distribution over S, with L given as a graph Laplacian ensuring spatial coherence between sources, and β −1 as the noise variance. Using Bayes' rule, the posterior distribution over the sources is maximized by the corresponding maximum a posteriori (MAP) estimate; writing the prior precision as αL, this is the regularized minimum-norm solution Ŝ = (AᵀA + (α/β) L)⁻¹ AᵀY. Here, L denotes a spatial coherence matrix, which in the current form takes advantage of a graph Laplacian with a fixed smoothness parameter (0.2).
Methods: Experimental Designs
In this section we briefly describe the design of experiments that demonstrate and validate the potential of the SBS2 framework, the specific hardware, and the mobile approach in general.
Timing and Data Quality
First, we analyze the data and timing quality. Many neuroscience paradigms rely heavily on accurate synchronization between the EEG signal and stimuli, user responses, or data from other sensors (e.g. P300, steady state visual evoked potentials). However, we can also envision applications in which the present 'low-cost' mobile setup will be used to collect data from many subjects over extended periods, where precise synchronization is less important.
Emotiv EEG sampling
The measurements are all based on the Emotiv EEG neuroheadset. The nominal sampling frequency of this neuroheadset is 128Hz (downsampled from an internal 2048Hz). For validation purposes we test the actual sampling rate obtained from 3 randomly picked Emotiv devices (10 × 10 min measurements for each).
Data Quality
The Emotiv hardware adds a modulo 129 counter (0 − 128) to every packet transmitted from the device. This allows for data quality control (dropped packets) with an accuracy of modulo 129. It is possible to obtain long recordings (over one hour) using this neuroheadset and SBS2. The battery in the Emotiv hardware is rated at 12h of continuous operation; in a recording-only setup a mobile device such as the Galaxy Note (offline mode, screen off, only decrypting and recording) lasts for around 10h. Provided that good visibility between the Emotiv EEG neuroheadset transmitter (located in the back part of the headset) and the USB receiver is maintained, we were able to achieve zero packet loss in the full rundown recording. In order to acquire an EEG signal of good quality, the impedance between the electrodes and the scalp should be kept under 5kΩ. The Emotiv headset embeds the channel quality information in the signal directly (2Hz per channel, multiplexed into the signal). The values are unscaled, and come from applying a square wave of 128Hz to the DRL feedback circuit and extracting the amplitude of the inherent square wave using phase-locked detection on each channel. In principle the obtained values can be calibrated using a known impedance. For regular usage, however, the hardware manufacturer assures that the green color of the indicator (channel quality value greater than 407) corresponds to sufficiently low impedance of the electrode. From our experience with the system this appears correct.
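The modulo-129 counter just described makes dropped-packet detection a one-line computation per packet. A minimal sketch (extraction of the counter byte from the raw packet is abstracted away; the input is the assumed per-packet counter value):

```python
def count_drops(counters):
    """Count packets lost between consecutive modulo-129 counter values.

    Gaps are only resolvable modulo 129, matching the stated accuracy:
    losing exactly 129 consecutive packets is indistinguishable from
    losing none.
    """
    drops = 0
    prev = None
    for c in counters:          # c in 0..128, one per received packet
        if prev is not None:
            drops += (c - prev - 1) % 129
        prev = c
    return drops

print(count_drops([126, 127, 128, 0, 1]))   # 0: wrap-around, nothing lost
print(count_drops([5, 8]))                  # 2: packets 6 and 7 were dropped
```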
Timing
In order to measure the total delay in the system, we use the setup depicted in Fig. 4. A sinusoidal audio tone of 10Hz, with leading and trailing periods of silence, is generated and amplified so it can be detected by the EEG hardware, and is split between the oscilloscope and the EEG hardware. The software on the device performs peak detection on the signal and visualizes the peaks by changing the screen color from black to white. This change is detected by a photocell connected to the second channel of the oscilloscope. We can then calculate dt1 = t2 − t1, indicating the total delay of the system from the physical signal reaching the EEG hardware to it being visualized on the screen (without any additional processing); see Figure 6. We also look at the jitter dt2, the difference between the min and max values of dt1. The observed delay depends on the EEG sampling rate (here 128Hz), the processing power of the device, and the screen refresh rate (60Hz for all tested devices).
Imagined Finger Tapping
One of the best-known and most examined experiments from the BCI literature is a task in which a subject is instructed to select between two or more different imagined movements [41][42][43][44]. Such experiments are rooted in a central aim of many BCI systems, namely being able to assist patients with severe motor disabilities to communicate by 'thought'. In this contribution we replicate a classical experiment with imagined finger tapping (left vs. right) inspired by [44]. The setup consisted of a set of three different images with instructions: Relax, Left, and Right. In order to minimize the effect of eye movements, the subject was instructed to focus on the center of the screen, where the instructions also appeared (3.5 inch display size, 800 × 480 pixel resolution, at a distance of 0.5 m). The instructions Left and Right appeared in random order. A total of 200 trials were conducted for a single subject.
Results and Discussion
In this section we present and discuss the results of the experiments, validating the performance of the software, the platforms used, and the EEG hardware.
Emotiv EEG sampling
From Fig. 5 we can see that the Emotiv EPOC hardware a) has an actual sampling rate close to 127.88Hz and b) keeps this sampling rate in a fairly consistent manner. Depending on the analysis performed on the data, one can assume 128Hz or 127.88Hz, or measure the actual sampling rate for every Emotiv EPOC hardware device individually.
Timing
The results of the timing measurements (20 per device) are depicted in Fig. 6. We can see in the results that for all devices there is a significant delay between the signal reaching the EEG hardware and being fully processed in the software (80 − 125ms). This delay, however, although significant, is fairly stable (16 − 26ms jitter) and thus can be corrected for. In the second set of measurements, we test the stability of the timing of the packets as they appear in the system. To measure this, we collect the packets from the Emotiv EPOC device and change the screen color every 4 packets (limited by the screen refresh rate, 60Hz). This change is then measured by a photocell and fed into the oscilloscope, and the distance between the 4-packet packages is calculated. Fig. 7 shows these measurements. In summary, the stability and quality of the acquired signal is excellent. Most of the variations, including the imperfect sampling rate or timing jitter, are constant and can be largely accounted for in the data analysis if necessary.
3D Source Reconstruction
On-Device Performance Source imaging was obtained using the Bayesian inverse solver for the linear model in Eq.
(1). The forward matrix A and the cortical source mesh were based on a coarse resolution (5124 vertices) of the SPM8 template brain [45], further reduced to 1028 vertices using MATLAB's reducepatch function. We tested the performance of the 3D reconstruction and the hyper-parameter calculation on 1s of signal: MacBookPro8,2 (Intel Core i7 Sandy Bridge 2.2GHz): 2ms/2s, Nexus 7: 8ms/1s, Galaxy Note: 8ms/11s, and Acer Iconia: 14ms/13s. Imagined Finger Tapping - Online Source Reconstruction In order to demonstrate the applicability of discriminating a simple task such as left vs. right imagined finger tapping on the cortical source level in an online framework, the EEG data were acquired with the Emotiv EPOC neuroheadset and compared with EEG recordings acquired with a 64-channel Biosemi Active-II device. The 64 channels were subsampled to represent the same channel locations as the Emotiv device. Imagined finger tapping is known to lead to a suppression of the alpha (8-13 Hz) activity over the premotor/motor regions, with the contralateral areas normally being more desynchronized [46]. Thus, imagined right finger tapping should lead to the alpha activity being suppressed in both the left and the right pre-motor region, with the left as the dominant one. This is confirmed in Fig. 8a and Fig. 8b, which demonstrate the ability of the SBS2 framework to reconstruct meaningful current sources within the brain online. Fig. 8a shows how alpha power (8-13 Hz) is suppressed over time in the two regions of interest, the Left and Right Precentral AAL regions. The responses are calculated as the averaged response over 87 Right-cued trials. Note that even though this result is an average over runs, the source localization was carried out in online mode with the model parameters (α and β) and current sources (S) estimated online. By collecting these source estimates over time we have simply presented the averaged response at the end of the experiment. Similarly, Fig. 8b demonstrates the averaged power response across 79 Right-cued trials. Interestingly, the suppression of the alpha power in the Left and Right Precentral AAL regions for right imagined finger tapping trials looks quite similar for both devices (Emotiv EPOC and Biosemi), with the contralateral frontal regions (Left) being the most suppressed.
Conclusions
We have presented the design, implementation, and evaluation of the first fully mobile 3D EEG imaging system: the Smartphone Brain Scanner. The open source software allows real-time EEG data acquisition and source imaging on standard off-the-shelf Android mobile smartphones and tablets with a good spatial resolution and frame rates in excess of 40 fps. In particular, we have implemented a real-time solver for the ill-posed inverse problem with online Bayesian optimization of hyper-parameters (noise level and regularization). The evaluation showed that the combined system provides a stable imaging pipeline with a delay of 80-120ms. We showed results of a cued imagined finger tapping experiment, comparing the Smartphone Brain Scanner's average power in the alpha band in a relevant motor area, and we found that these aggregate signals compare favorably with those obtained with standard laboratory equipment. Both show the expected de-synchronization on initiation of imagined motor actions.
We suggest that the mobility and simplified application development may enable completely new research directions for imaging neuroscience, and may thus offset the expected reduced signal quality of a mobile off-the-shelf, low-density neuroheadset relative to more conventional and controlled high-density laboratory equipment.
Figure 1. Overview of the layered architecture of the SBS2 framework. Data from the connected EEG hardware are acquired and extracted by specific adapters, and all subsequent processing is hardware agnostic. The empty boxes indicate the extensibility of the architecture, allowing additional hardware devices for data acquisition and additional processing methods.
Figure 3. The Smartphone Brain Scanner architecture. Data are acquired in the first layer from the EEG hardware and passed to the Data Processing layer; extracted features as well as raw values are then available for applications.
Figure 5. Measured sampling frequency, including measurement resolution, for 3 random Emotiv EPOC devices, 10 × 10min recordings for each. All measured rates, including uncertainty, are between 127.8828Hz and 127.8841Hz, corresponding to 0.99908 and 0.99909 of the nominal 128Hz. The measurements were performed with 1ms resolution (2ms accuracy) on 76800 EEG packets. All tests were performed at normal temperature on a single day.
Figure 7. Distances between 4-sample frames. The red line indicates the expected distance of 4/127.88 ≈ 0.0313s. The bars indicate the observed distance. We can see that the Emotiv system compensates every 8 × 4 = 32 samples to keep the average (black line) at the correct level.
Figure 8. Mean (solid lines) and standard deviation (dashed lines) of reconstructed current source power in the left (L) and right (R) Precentral AAL regions, calculated across Right-cued imagined finger tapping conditions. Online estimation of the α and β parameters. Minimum Norm Solution.
2013-04-01T06:51:52.000Z
2013-04-01T00:00:00.000
{ "year": 2014, "sha1": "240fa6699701f7769def7b1f9bcbb76fdb8a2d02", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0086733&type=printable", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "2bbe00742678a355e0adce9febec8dc5d27e7153", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
205308491
pes2o/s2orc
v3-fos-license
Structure-Function Analysis of Core STRIPAK Proteins Cerebral cavernous malformations (CCMs) are alterations in brain capillary architecture that can result in neurological deficits, seizures, or stroke. We recently demonstrated that CCM3, a protein mutated in familial CCMs, resides predominantly within the STRIPAK complex (striatin interacting phosphatase and kinase). Along with CCM3, STRIPAK contains the Ser/Thr phosphatase PP2A. The PP2A holoenzyme consists of a core catalytic subunit along with variable scaffolding and regulatory subunits. Within STRIPAK, striatin family members act as PP2A regulatory subunits. STRIPAK also contains all three members of a subfamily of Sterile 20 kinases called the GCKIII proteins (MST4, STK24, and STK25). Here, we report that striatins and CCM3 bridge the phosphatase and kinase components of STRIPAK and map the interacting regions on each protein. We show that striatins and CCM3 regulate the Golgi localization of MST4 in an opposite manner. Consistent with a previously described function for MST4 and CCM3 in Golgi positioning, depletion of CCM3 or striatins affects Golgi polarization, also in an opposite manner. We propose that STRIPAK regulates the balance between MST4 localization at the Golgi and in the cytosol to control Golgi positioning. PP2A 4 is an essential serine threonine phosphatase involved in many aspects of cell function (1,2). PP2A acquires substrate and subcellular localization specificity via association with various scaffolding and regulatory subunits to form a number of different holoenzymes, most of which are trimers. In previous studies using affinity purification coupled to mass spectrometry, a portion of PP2A was also found in a higher order complex that we termed STRIPAK (striatin interacting phosphatase and kinase) (3,4). In addition to the catalytic subunit PP2A cat , its scaffolding subunit PP2A A and members of the striatin family of regulatory subunits (5), the core STRIPAK complex contains the striatin interactor Mob3 (6), the uncharacterized protein STRIP1, members of the germinal center kinase III (GCKIII) group (STK24, STK25, and MST4; Ref. 7), and the small molecular weight protein CCM3 (Fig. 1A). Additional proteins can associate with this core STRIPAK complex in a mutually exclusive manner (4). CCM3 is encoded by one of the three genes mutated in familial cerebral cavernous malformations (CCMs; Ref. 8) and was identified previously as an interactor for the GCKIII proteins (9,10). CCMs are vascular lesions of the brain characterized by enlarged capillaries that lack structural integrity and that form caverns that tend to bleed, leading to symptoms ranging from headaches and dizziness to severe strokes and death (reviewed in Ref. 11). Recent studies have implicated defective Rho signaling as one of the consequences of depletion (or overexpression) of the CCM1, CCM2, and CCM3 proteins (12)(13)(14). Further links between CCM3 and its kinase partners and cytoskeletal dynamics via the Golgi were also uncovered. The Ser/Thr kinases STK25 and MST4 were found to localize to the Golgi apparatus via an association with the Golgi resident protein GM130 (15). Mislocalization of these kinases results in defects in Golgi positioning and cell migration (15). Recently, CCM3 was shown to participate in this effect by stabilizing the GCKIII proteins to promote Golgi orientation and assembly and proper cell orientation (16). 
Here, we define the structural organization of the STRIPAK complex, identifying direct interactions and interacting regions within the complex. Specifically, we demonstrate that the striatins and CCM3 act as adapter molecules to bridge the kinase and phosphatase catalytic activities (an accompanying publication by Ceccarelli et al. characterizes interactions between the GCKIII proteins and CCM3; 49). We also report the surprising finding that CCM3 and striatins exhibit opposing functions in the targeting of MST4 to the Golgi and in Golgi positioning.

EXPERIMENTAL PROCEDURES

Plasmids-pcDNA5-FRT-FLAG was engineered to inducibly express fusion proteins with a single N-terminal FLAG epitope and was constructed from the parent vector pcDNA5-FRT-TO (Invitrogen) and the vector pcDNA3-FLAG (17) as follows. A HindIII/XhoI cassette from pcDNA3-FLAG (containing the FLAG and the multiple cloning site) was subcloned into the pcDNA5-FRT-TO vector also digested with HindIII/XhoI. An internal EcoRI site was subsequently destroyed by mutagenesis. pcDNA5-FRT-eGFP was constructed by subcloning the HindIII/AscI cassette from pcDNA3-eGFP into pcDNA5-FRT-TO. The complete sequences of the cloning vectors are available at the Gingras Laboratory website (Samuel Lunenfeld Research Institute). FLAG-tagged mammalian expression constructs for full-length STRIPAK proteins are described in Ref. 4. Truncations of STRN3 (amino acids 1-169, 1-338, and 220-338) and mouse Strn (amino acids 46-781 and 91-781) were cloned into pcDNA3-FLAG for mammalian expression (supplemental Fig. 1). All point mutations were generated by overlap extension PCR; CCM3 point mutants were subcloned into the pcDNA5-FRT-GFP vector. N-mut is L44D,A47D,I66D,L67D; C-mut (4A) is K132A,K139A,K172A,K179A; and the N-mut/C-mut construct contains both sets of mutations. Inserts were fully sequenced. The full-length protein and several truncations of human STRN3 (amino acids 1-57, 1-169, 220-713, 58-713, 58-169, and 220-338; supplemental Fig. 1), as well as full-length mouse Strn, mouse Mst4 D162A, and MOB3, were cloned into the GST-tagged expression vector pGEX-2T-TEV HTa for bacterial expression and purification. pGEX-2T-TEV HTa (which expresses a tobacco etch virus-cleavable GST protein) was described previously (18). Wild-type CCM3, CCM3 C-mut (4A) (generated by overlap extension PCR), and PP2A A were inserted into the His-tagged expression vector pProEx-HTa for bacterial expression and purification. The coding sequence of GOLGA2 (encoding the protein GM130) was amplified by PCR from the cDNA clone from the mammalian gene collection BC069268. The full-length sequence and the minimal kinase interaction region at amino acids 72-271 (15) were cloned into pcDNA5-FRT-FLAG.

Recombinant Protein Purification, Gel Filtration, and in Vitro Binding Assays-His- or GST-tagged fusion proteins were purified as described (20), using lysis buffer with 20 mM Hepes, pH 7.5, 500 mM NaCl, and 5 mM β-mercaptoethanol at 4°C. Gel filtration was performed to purify proteins and complexes based on size, in buffer containing 100 or 150 mM NaCl. For the PP2A A:STRN3(58-169) complex, a 1:2 molar ratio of proteins (purified by gel filtration) was mixed before loading onto a Superdex 200 column. For the PP2A A:STRN3(1-338):CCM3 complex, a 1:2:2 molar ratio of proteins was mixed before loading onto a Superdex 200 column. Fractions encompassing the elution of these protein complexes (as detected by UV) were run on an SDS-PAGE gel and Coomassie stained.
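Mixing proteins at a defined molar ratio, as above, comes down to converting molar parts into masses using each protein's molecular weight. The sketch below shows that arithmetic only; the molecular weights and amounts are rough illustrative assumptions, not values reported here.

```python
# Hypothetical helper for preparing a complex at a defined molar ratio.
# Molecular weights (kDa) are illustrative assumptions only.
MW_KDA = {"PP2A_A": 65.0, "STRN3_1_338": 38.0, "CCM3": 25.0}

def micrograms_for_ratio(ratio, base_nmol=10.0):
    """Micrograms of each protein for a molar ratio such as 1:2:2.

    ratio: dict mapping protein name -> relative molar parts
    base_nmol: nanomoles corresponding to one molar part
    Since 1 kDa = 1 ug/nmol, micrograms = nmol * MW(kDa).
    """
    return {name: base_nmol * parts * MW_KDA[name]
            for name, parts in ratio.items()}

# A 1:2:2 mix, as used for the trimeric complex above (amounts illustrative).
print(micrograms_for_ratio({"PP2A_A": 1, "STRN3_1_338": 2, "CCM3": 2}))
```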
In vitro binding assays (GST pulldowns) were performed essentially as described (18), with the following modifications: GST-tagged proteins were purified as described above, without cleavage of proteins from the GST resin. Untagged or His-tagged proteins were incubated with GST-tagged proteins on resin in 150 µl of binding buffer (20 mM Hepes, pH 7.5, 100 mM NaCl, and 5 mM β-mercaptoethanol). Glutathione resin was washed rapidly three times in 500 µl of binding buffer, and elution was performed by boiling in Laemmli sample buffer.

Peptide Overlay Assay-Peptide libraries were produced by automatic SPOT synthesis and probed as described previously (21). They were synthesized on continuous cellulose membrane supports on Whatman 50 cellulose membranes using Fmoc (N-(9-fluorenyl)methoxycarbonyl) chemistry with the Auto-Spot-Robot ASS 222 (Intavis Bioanalytical Instruments AG, Köln, Germany). The interaction of spotted peptides with purified, recombinant GST and GST-CCM3 fusion proteins was determined by overlaying the cellulose membranes with 10 µg/ml recombinant protein. Bound recombinant proteins were detected, following wash steps, with rabbit anti-GST and a secondary anti-rabbit horseradish peroxidase-coupled antibody.

Fluorescence Polarization Peptide Binding Assay-A 25-mer peptide (residues 286-310, EGLAADLTDDPDTEEALKEFDFLVT) from human STRN3 was synthesized by Biomatik Corporation (Wilmington, DE) and used for fluorescence polarization binding studies with purified CCM3 proteins as described (22). Equilibrium binding constant determination was carried out on a Beacon fluorescence polarization system (Pan Vera, WI), and data were analyzed using the GraphPad Prism software (GraphPad Software, Inc.).

Mammalian Cell Culture, Immunoprecipitation, and Mass Spectrometry-Transient transfection and immunoprecipitation followed by immunoblotting and/or mass spectrometry were performed essentially as described (4), using either pools of stable cells (FLAG-Strn, FLAG-STRN3, FLAG-Mst4, and their deletion constructs) or a stable inducible clone (FLAG-GM130). Samples were analyzed on a ThermoFinnigan LTQ or an AB-SCIEX 5600 TripleTOF instrument, as described below.

RNA Silencing-siRNAs for all striatin paralogs were pooled, and 40 pmol total was used. RNA silencing experiments were performed for 72 h before harvesting or imaging cells. Knockdown was assessed using immunoblotting and/or RT-PCR.

RT-PCR Procedure and Primers-RT-PCR was performed as follows: RNA was purified from cells using the RNeasy kit (Qiagen 74104). Cells were lysed in 600 µl, using a 20-gauge needle for homogenization. The final product was eluted in 30 µl of water, and 200 ng of RNA was run on an agarose gel to check quality. Reverse transcription of RNA into cDNA was performed following the instructions from the Invitrogen SuperScript III reverse transcriptase guide (Invitrogen 18080-093). cDNA was amplified by PCR and analyzed on an agarose gel. Primers were GGATGACAATGGAAGAGATGAAG and GACAGATTTACTCGTTCTAGCTC for PDCD10 (encoding CCM3) and TGAATGACACGAGACTTTACC and TGAAGAGGGAAGGTGGAAC for TIPRL (encoding an unrelated protein used as a loading control).

Immunofluorescence-Immunofluorescence was performed on HeLa cells as described previously (24) with the following modifications: cells were permeabilized with 0.1% Triton X-100, incubated with primary antibodies for 2 h at room temperature, and mounted with ProLong Gold (Invitrogen, P36930). Images were acquired on a DeltaVision at 60× magnification (with a 2× digital zoom for Fig. 5A, essentially as described (25)).
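Returning briefly to the fluorescence polarization assay described above: the underlying one-site binding model can also be fit outside of Prism. The sketch below is a minimal illustration under assumed data; the concentrations, polarization values, and use of scipy are ours for demonstration, not the authors' actual workflow or measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site(p_conc, kd, fp_free, fp_bound):
    """One-site binding isotherm for fluorescence polarization:
    polarization rises from fp_free to fp_bound as the labeled
    peptide is saturated by protein at concentration p_conc
    (same units as kd)."""
    frac_bound = p_conc / (kd + p_conc)
    return fp_free + (fp_bound - fp_free) * frac_bound

# Hypothetical example data (nM protein vs. polarization, mP)
conc = np.array([10, 30, 100, 300, 1000, 3000], dtype=float)
fp = np.array([65, 80, 120, 170, 210, 225], dtype=float)

(kd, fp_free, fp_bound), _ = curve_fit(
    one_site, conc, fp, p0=[100.0, fp.min(), fp.max()])
print(f"apparent Kd ~ {kd:.0f} nM")
```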
Fig. 5E and supplemental Figs. 5, 6 (C and D), and 7 were acquired on an Olympus epifluorescence microscope at 40× magnification. Fig. 6A was acquired on an Olympus epifluorescence microscope at 20× magnification.

Wound Healing and Quantification of Golgi Polarization-Assays were performed as described previously (16) with the following modifications: Cells were plated to confluency on fibronectin-coated coverslips. After 6-8 h, cells were serum-starved in DMEM with 0.1% FBS overnight, and the monolayer was wounded. Cells were incubated in DMEM with 10% FBS for 90 min and fixed with ice-cold 4% paraformaldehyde. Golgi staining was performed using anti-GM130 or anti-giantin, and the first row of cells was counted. A minimum of 150 cells were counted per treatment per experiment; experiments were performed in quadruplicate. Treatments were labeled in code, and polarization was assessed independently by two people. The Golgi of cells on the wound edge were counted as polarized when the majority of the stained Golgi was located within a 90° angle facing the wound (26).

Mass Spectrometric Analysis-Acquired RAW files were converted to mgf format and searched with the Mascot search engine (Matrix Sciences, London, UK) against the human RefSeq database (release 37) with a precursor ion mass tolerance of 3.0 and a fragment ion mass tolerance of 0.6. Methionine oxidation and asparagine deamidation were allowed as variable modifications, and trypsin specificity (with one missed cleavage allowed) was selected. The data were analyzed in the "Analyst" module of ProHits (27) and exported into Excel files for spectral normalization and manual curation. For the STRIPAK pulldowns, only interaction partners previously reported and confirmed (4) are reported (supplemental Table 1). For GM130 pulldowns, database searches were performed as above, and the results were analyzed using SAINT (version 2.0) (28,29), using eight negative control runs as part of the modeling. Hits detected with SAINT AvgP ≥ 0.7 and with a minimum of 10 spectra in at least one of the replicates are reported (supplemental Table 3; detailed mass spectrometry data are presented in supplemental Table 4); interactions with the wild-type protein were deposited to the BioGRID database. For quantitative analysis (Fig. 5C), immunoprecipitation of FLAG-Mst4 was performed after depletion of CCM3 or all striatins by esiRNA from HEK293 cells stably expressing FLAG-Mst4. Data were acquired on an AB-SCIEX 5600 TripleTOF instrument using an Eksigent Ultra nanoLC with NanoFlex cHiPLC columns. The samples were loaded onto a C18 trap chip at 500 nl/min and separated over a C18 column chip at 250 nl/min (120-min gradient). Data acquisition consisted of one high-resolution MS scan followed by 20 high-resolution MS/MS scans. The resulting data were searched using ProteinPilot (version 4.0) against human proteins in Uniprot (release 8.8) (spectral counts are presented in supplemental Table 5). PeakView was used to extract peak areas for all peptides identified for target proteins (STRIPAK core components and GM130). The total number of peptides used for the quantification of each protein is shown in supplemental Table 5 (each was manually inspected). Total sum areas for proteins were determined and exported to Markerview, where values were normalized to Mst4 across all samples. Further normalization of each protein in the esiCCM3 or esiSTRNs sample to the esiLuc control was then performed.
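The two-step normalization just described (peak areas first scaled to the Mst4 bait within each sample, then each knockdown sample expressed relative to the esiLuc control) can be sketched in a few lines. The table below is hypothetical, and the column and sample names are assumptions for illustration, not the authors' data layout.

```python
import pandas as pd

# Hypothetical peak-area table: rows = proteins, columns = samples.
areas = pd.DataFrame(
    {"esiLuc": [1000.0, 400.0, 250.0],
     "esiCCM3": [900.0, 90.0, 300.0],
     "esiSTRNs": [950.0, 380.0, 60.0]},
    index=["MST4", "CCM3", "STRN3"])

# Step 1: normalize every protein to the Mst4 bait within each sample
# (divides each column by that sample's MST4 peak area).
bait_norm = areas / areas.loc["MST4"]

# Step 2: express each knockdown sample relative to the esiLuc control
# (divides each row by its esiLuc value, so esiLuc becomes 1.0).
rel_to_control = bait_norm.div(bait_norm["esiLuc"], axis=0)
print(rel_to_control)
```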
Structure Modeling-The crystal structure of human CCM3 (Protein Data Bank 3L8I (30)) was superimposed on the focal adhesion targeting (FAT) domain of focal adhesion kinase in complex with a peptide derived from paxillin (PDB 1OW7 (31)). Residues of the CCM3 FAT domain (Lys-132, Lys-139, Lys-172, and Lys-179) analogous to the focal adhesion kinase residues lining the paxillin binding groove were mutated to alanines for the purpose of interaction studies (see Fig. 3).

RESULTS

Striatins Act as Molecular Scaffolds within STRIPAK-The STRIPAK complex contains members of the evolutionarily conserved striatin family of PP2A regulatory subunits. To investigate the binding topology of the STRIPAK complex with respect to striatin, full-length and truncation mutants of striatin molecules (Fig. 1B) were stably expressed in HEK293 cells and subjected to affinity purification coupled to mass spectrometry (as described in Ref. 4). As expected (Fig. 1C; supplemental Table 1), full-length striatins recovered all core STRIPAK components, as well as members of the Chaperone containing TCP complex. Although deletion of the first 45 amino acids of Strn did not affect any of the interactions, truncation of the amino-terminal 90 amino acids completely abolished the interactions with most STRIPAK components, including PP2A cat and PP2A A (shown as red circles in Fig. 1C). (Note that only "core" STRIPAK components as defined in Fig. 1A are shown in this figure; alternative STRIPAK components, including SLMAP, SIKE, and CTTNBP2NL, are also unable to associate with this truncated striatin molecule; see supplemental Table 1.) Importantly, however, interactions with components of the Chaperone containing TCP complex, Mob3, CCM3, and the GCKIII proteins were not abrogated by this truncation. Further mapping with STRN3 deletion mutants extended these observations (Fig. 1C), indicating that the N-terminal portion of the molecule is essential for mediating interactions with PP2A and various STRIPAK components, whereas the C-terminal region (amino acids 220-713 in STRN3) is sufficient for interactions with Mob3, CCM3, GCKIII, and the Chaperone containing TCP complex. Interestingly, STRN3 fragments encompassing amino acids 1-169 and 1-338 interacted with all of the STRIPAK proteins (Fig. 1C). We attributed these observations to the fact that a coiled-coil element (amino acids 85-130; Ref. 32) is located within this region and likely mediates homo- and hetero-oligomerization (33) of these deletion constructs with endogenous full-length striatin molecules.

STRN3 Binds Directly to PP2A A and CCM3-Yeast two-hybrid interactions between PP2A A and striatins were detected previously in a high-throughput experiment, suggesting a direct association between these molecules (AfCS yeast two-hybrid screen). To demonstrate that PP2A A and striatin did interact directly in the absence of bridging proteins, an in vitro binding assay was performed using bacterially expressed and purified proteins. Soluble His-PP2A A was incubated with immobilized GST-STRN3 in a pulldown experiment followed by SDS-PAGE and Coomassie staining. His-PP2A A was efficiently captured by full-length GST-STRN3 (Fig. 2A, top panel, compare lanes 1 and 2). To delimit the STRN3 region responsible for interacting with PP2A A, a series of STRN3 truncation mutants was assayed.
This analysis revealed that amino acids 1-57 and 169-713 were dispensable for PP2A A binding activity, whereas further truncations inside this region prevented interaction (Fig. 2A). To test whether residues 58-169 were sufficient to mediate the interaction, GST-STRN3(58-169) and a negative control, GST-STRN3(1-57), were incubated with His-PP2A A. Only the GST-STRN3(58-169) protein efficiently pulled down PP2A A (Fig. 2B, left panel).

Our mass spectrometry results indicated that Mob3, CCM3, and the GCKIII proteins interact with a portion of the C terminus of both Strn and STRN3 (from amino acids 220-713 in STRN3). GST pulldown assays were conducted to uncover possible direct interactions. Full-length GST-STRN3 efficiently pulled down CCM3 in vitro (Fig. 2A, bottom panel). To map the interaction region(s), truncation mutants of STRN3 were tested for CCM3 binding, as above. Deletion of the first 219 amino acids or of residues 338-713 of STRN3 did not prevent association of CCM3 (Fig. 2A). We next demonstrated that STRN3(220-338) was sufficient to mediate the interaction with CCM3 (Fig. 2B, right panel). To further examine the interactions between PP2A A, STRN3, and CCM3, the recombinant proteins were mixed and analyzed by gel filtration chromatography followed by Coomassie staining. PP2A A and STRN3(58-169) form a complex that is stable throughout chromatographic fractionation (supplemental Fig. 4). Similarly, a trimeric complex formed of PP2A A, STRN3(1-338), and CCM3(2-212) also eluted as a stable complex from gel filtration experiments (Fig. 2C).

To further refine the location of the CCM3 binding site on STRN3, overlay assays of peptides derived from STRN3(220-338), using full-length GST-CCM3 as a probe, were performed. STRN3 peptides containing amino acids 291-305 were able to interact with CCM3 on a membrane (Fig. 2D). Furthermore, a peptide encompassing amino acids 286-310 was sufficient for interaction in solution (as detected by fluorescence polarization) and had an apparent Kd of 132 ± 0.003 nM when modeled as one-site binding (Fig. 2E and supplemental Table 2). Taken together, our mapping studies identified striatin as a scaffolding molecule within the STRIPAK complex and revealed direct interactions with both PP2A A (amino acids 58-169 of STRN3) and CCM3 (amino acids 291-305 of STRN3; Fig. 2F).

[FIGURE 1 legend (fragment): The thickness of each line is proportional to the number of spectral counts (total number of peptides) recovered for each of the proteins in the analysis of the striatin mutants, relative to the spectral counts for the same protein in the AP-MS of full-length Strn. Note that each node (and its associated edges) represents paralogous families, as defined in A. The complete mass spectrometry data used to make this figure are presented in supplemental Table 1.]

[FIGURE 2 legend: Striatin binds directly to PP2A A and CCM3. A, mapping of the direct in vitro association between GST-STRN3 truncation mutants and PP2A A (top) or CCM3 (bottom). Bacterially expressed and purified GST-STRN3 deletion proteins were used for GST pulldown assays with soluble PP2A A or CCM3; GST alone was used as a negative control. Proteins were visualized by SDS-PAGE and Coomassie staining. The soluble proteins were added to even-numbered lanes only; their positions are indicated by arrows. B, amino acids 58-169 of GST-STRN3 are sufficient to mediate an interaction with PP2A A (left), and amino acids 220-338 are sufficient to mediate the interaction with CCM3 (right) in a GST pulldown assay. GST-STRN3(1-57) was used as a negative control. C, PP2A A, STRN3(1-338), and CCM3 form a complex that is stable throughout the course of gel filtration. Bacterially expressed recombinant proteins were purified and loaded onto a Superdex 200 gel filtration column. The proteins elute as one major peak, ~8-12 ml, as detected by A280 (top) and by SDS-PAGE followed by Coomassie staining (bottom). D, peptide array identifies the core STRN3 residues (amino acids 291-305) responsible for binding to GST-CCM3 in an overlay assay. 25-mer peptides derived from STRN3(220-338) were spotted on a membrane (see supplemental Fig. 3 for Coomassie staining of the membrane) and subjected to an overlay assay with GST-CCM3 (or GST alone), followed by detection with anti-GST and horseradish peroxidase-coupled secondary antibodies. The sequence of the common minimal element from the peptides that display association is highlighted. E, fluorescence polarization indicates that a fluorescent 25-mer STRN3 peptide (amino acids 286-310) interacts with wild-type CCM3. The minimal sequence of STRN3 determined in D, plus five flanking residues on either side, was synthesized as a fluorescent peptide and used in a fluorescence polarization assay. This peptide readily interacts with wild-type CCM3 (blue curve). However, substitution of Lys-132, Lys-139, Lys-172, and Lys-179 in CCM3 for alanines completely abrogated association with the STRN3 peptide (see Fig. 3 for a description of this mutant). F, summary of the binding surfaces mapped on striatin (results from Figs. 1 and 2).]

CCM3 Associates with Striatin via Its FAT Domain-Deletion mutants of CCM3 were next analyzed for their ability to bind to GST-STRN3(220-338) in vitro. Although deletion of CCM3 amino acids 82-212 precluded interaction with STRN3, a construct expressing only amino acids 92-212 was sufficient to bind to STRN3 (Fig. 3A). This region of CCM3 forms a globular domain consisting of four α-helices exhibiting structural resemblance to the FAT domain, which mediates the interaction between focal adhesion kinase and paxillin. On this basis, an interaction of CCM3 with paxillin was validated previously (30) and shown to require four lysine residues that establish interactions with paxillin (Fig. 3B); the same residues were also implicated in mediating the interaction between CCM3 and CCM2, as CCM2 shares a stretch of homology with paxillin (30). The striatin peptide responsible for association with CCM3 exhibits an amino acid composition similar to the paxillin- and CCM2-derived peptides (Fig. 3C), suggesting that the same mode of binding may be employed for striatin-CCM3 interactions. These four surface lysine residues at the interface of the paxillin-CCM3 model (Lys-132, Lys-139, Lys-172, Lys-179) were mutated to alanines (called CCM3 C-mut 4A). As measured by fluorescence polarization, these mutations abrogated the interaction with the STRN3 peptide (Fig. 2E). These mutations also abrogated the interaction with a STRN3(220-338) protein in a GST pulldown assay (Fig. 3D). (WT CCM3 is pulled down but not CCM3 C-mut 4A.) Lastly, we tested the recovery with endogenous striatin of transiently transfected GFP-tagged versions of CCM3 WT, C-mut 4A, N-mut (which should not abrogate the interaction), as well as N/C-mut (a combination of both the N and C mutations). Only those constructs carrying the C-mut 4A mutations lost the interaction (Fig. 3E). Taken together, these data indicate a similar mode of association for CCM3-paxillin, CCM3-CCM2, and CCM3-striatin, suggesting mutually exclusive interactions between these proteins. (The CCM3-striatin interaction is more readily detected by AP-MS in our cells.) Consistent with this, an interaction between CCM2 and striatin was never detected by AP-MS (data not shown).

[FIGURE 3 legend: CCM3 interacts with STRN3 via its FAT domain. A, STRN3 interacts with the C-terminal portion of CCM3. A GST pulldown assay was performed with GST-STRN3(220-338) and bacterially expressed and purified CCM3 deletions to map the region on CCM3 responsible for binding to GST-STRN3. Deletion of the first 92 amino acids of CCM3 did not affect the interaction, and a region encompassing amino acids 2-82 was unable to associate with GST-STRN3(220-338). B, structural modeling of the CCM3 FAT domain with a peptide derived from paxillin. The CCM3 crystal structure (30) revealed that the region that we have mapped as interacting with STRN3 also folds as a focal adhesion targeting domain (yellow) similar to focal adhesion kinase (FAK, cyan). Interactions with peptides derived from paxillin (shown in red) are mediated via four lysine residues (highlighted). C, alignment of the peptide derived from STRN3 (and corresponding peptides in the STRN and STRN4 paralogs) with the CCM3 binding regions of CCM2 and paxillin suggests a common mode of association. D, mutation to alanines of the four conserved lysines (Lys-132, Lys-139, Lys-172, and Lys-179) in CCM3 C-mut (4A) abrogates interaction with GST-STRN3(220-338) in vitro. GST pulldown assays were performed with GST-STRN3(220-338) to monitor binding of wild-type CCM3 or CCM3 C-mut (4A). Only wild-type CCM3 is pulled down by GST-STRN3. GST-STRN3(1-57) was used as a negative control. E, mutation to alanines of the four conserved lysines in CCM3 abrogates the interaction with full-length Strn in HEK293T cells. Co-transfection of FLAG-tagged full-length Strn with GFP-tagged CCM3 constructs WT, C-mut (4A), N-mut, and N/C-mut was performed. Immunoprecipitation of FLAG-Strn was followed by immunoblotting with anti-GFP to detect CCM3 association. CCM3 C-mut (4A) is unable to interact with FLAG-Strn, whereas CCM3 N-mut has no effect on the interaction. A combination of both mutations also prevented the interaction, as expected.]

CCM3 Bridges GCKIII Proteins to STRIPAK via Striatin-It was reported previously that CCM3 interacts with members of the GCKIII protein family (9,10), and direct associations between this kinase family and CCM3 were mapped to a CCM3 region (amino acids 2-82) different from that implicated in striatin binding (49). Motivated by the finding that CCM3 binds directly to striatins, we sought to determine whether CCM3 could bridge GCKIII proteins to striatin. A direct interaction between the GCKIII protein Mst4 and CCM3 was first recapitulated in vitro (Fig. 4A). To test whether CCM3 acts as a bridge between the kinases and striatins, pulldown assays using GST-Mst4 and untagged STRN3(1-338) were performed in the presence or absence of untagged CCM3. STRN3 alone did not associate with GST-Mst4 (Fig. 4B, lane 3). However, in the presence of CCM3, GST-Mst4 efficiently pulled down STRN3 (in addition to CCM3; lane 4).
CCM3 is therefore able to bridge interactions between the GCKIII protein Mst4 and STRN3. To determine whether CCM3 and the striatin proteins were responsible for bridging MST4 to PP2A in vivo, endogenous MST4 or STRN3 were immunoprecipitated from cells in which CCM3, MST4, or the three striatin family members were depleted by esiRNA (Fig. 4C). Recovery of PP2A was monitored using antibodies directed against either PP2A cat or PP2A A. Depletion of CCM3 or striatins largely abrogated the interaction between MST4 and both phosphatase subunits (Fig. 4C, lanes 7-9). Note, however, that depletion of CCM3 or MST4 does not affect the interaction between STRN3 and PP2A (lanes 12 and 13). On the basis of the data presented above, we propose the following architectural model for the STRIPAK complex (Fig. 4D). Striatin functions as a core scaffold within STRIPAK, mediating homo- and hetero-oligomerization, as well as (minimally) direct interactions with PP2A A and CCM3, via two separate regions. Through direct interactions, CCM3 docks onto striatin and recruits the GCKIII proteins to the phosphatase component of STRIPAK.

[FIGURE 4 legend (fragment): ... in the binding assay. C, depletion of CCM3 or striatins by esiRNA prevents association of the PP2A phosphatase and MST4 kinase. HeLa cells were transfected with the indicated esiRNAs. (Note that the esiRNA mixture for striatins targets all three paralogs.) Luc indicates that a non-targeting esiRNA directed against luciferase was employed; none indicates a mock transfection. After cell lysis (total cell lysate input shown on the left), immunoprecipitation of MST4 (center) or STRN3 (right) was performed using antibodies against the endogenous proteins. SDS-PAGE was followed by immunoblotting using antibodies against the indicated endogenous proteins. Arrows indicate the position of each protein; stars indicate the decrease in intensities of the MST4 and CCM3 bands in the immunoprecipitates, as these proteins migrate close to cross-reacting species in the immunoprecipitates. Depletion of striatins, CCM3, or MST4 all prevent recruitment of PP2A cat and PP2A A to MST4, indicating that CCM3 and striatins are responsible for the interaction between the kinase and the phosphatase components of STRIPAK. STRN association with MST4 is prevented by the depletion of MST4 and CCM3, consistent with the bridge model described in B. Depletion of striatins has no effect on the recruitment of CCM3 to MST4. Although depletion of the striatins alters the recovery of PP2A cat and PP2A A to STRN3, as expected, depletion of CCM3 or MST4 has no effect on these interactions. D, model for the structural organization of core STRIPAK.]

Localization of MST4 to Golgi and Interaction with GM130 Is Regulated by CCM3 and Striatin in Opposite Manner-The kinase MST4 had been shown previously to localize to the Golgi and had been implicated in Golgi positioning and integrity (15,16). To begin to understand the functional consequences of the interaction between the kinase and phosphatase components of STRIPAK, the localization of MST4 to the Golgi was monitored in HeLa cells. Consistent with previous studies, we observed strong co-localization between MST4 and the Golgi protein giantin, although localization of MST4 in punctate structures in the cytoplasm (that are not stained with giantin) was also readily apparent (Fig. 5A; supplemental Fig. 4). MST4 was reported to be targeted to the Golgi at least partially due to its interaction with the protein GM130 (15).
In agreement with these data, when we conducted an AP-MS analysis on FLAG-Mst4, GM130 was also recovered (data not shown), and the reciprocal AP-MS analysis of FLAG-GM130 recovered MST4 (but not the additional components of STRIPAK) as a major interactor (Fig. 5B; supplemental Tables 3 and 4). These results suggested that MST4 can be found in at least two separate complexes, one with STRIPAK and one with GM130.

[FIGURE 5 legend: CCM3 and striatins exert opposite effects on MST4 localization. A, co-localization of endogenous MST4 (green) with the Golgi protein giantin (red). In the overlay (right), co-localization to the Golgi is shown in yellow; note that a fraction of MST4 does not localize to the Golgi but is instead detected as green punctae in the cytosol. Scale bar, 7.5 µm. B, GM130 interactors identified by mass spectrometry. AP-MS was performed as described under "Experimental Procedures." Statistical analysis of the interactions using SAINT was performed; see supplemental Tables 3 and 4 for complete mass spectrometric data. The thickness of the edges is proportional to spectral counts (total number of peptides) for the prey, whereas the color indicates MST4 (blue), known Golgi proteins (green), tubulins (pink), or proteins other than MST4, Golgi proteins, or tubulins (gray). Note that MST4 is a major interaction partner for GM130. C, depletion of CCM3 decreases association of MST4 with STRIPAK but not the interaction with GM130. Stable HEK293 cells expressing FLAG-Mst4 were transfected with the indicated esiRNAs (see Fig. 4C for details). FLAG-Mst4 was immunoprecipitated using anti-FLAG antibodies, and the sample was processed for quantitative mass spectrometry. Relative quantification by mass spectrometry was performed using a TripleTOF 5600 with cells depleted of STRN proteins or CCM3; normalization to Mst4 (bait) levels and to the expression levels in the luciferase samples is shown. See supplemental Table 5 for mass spectrometric results. As shown in Fig. 4C, depletion of CCM3 affects recovery of all STRIPAK components with Mst4; depletion of the striatins affects recovery of all STRIPAK components with the exception of CCM3. By contrast, depletion of CCM3 appeared to increase the GM130 interaction with FLAG-Mst4, indicating that this interaction is not mediated via STRIPAK. D, GM130 interaction with endogenous MST4 is reduced by depletion of striatins in HEK293 cells stably expressing FLAG-GM130. Transfection of esiRNAs was followed by immunoprecipitation of endogenous MST4 and immunoblotting for FLAG-GM130 and STRIPAK proteins. To control for the amount of FLAG-GM130 non-specifically binding to the beads, we performed immunoprecipitation in parallel with an isotype-matched antibody (anti-HA). (There is no HA protein transfected in these cells.) E, esiRNA-mediated depletion of CCM3 in HeLa cells induces near complete localization of MST4 to the Golgi, whereas depletion of striatins prevents Golgi localization. Transfection of indicated esiRNAs was followed by immunofluorescence staining of MST4 and DAPI. Scale bar, 10 µm.]

To test the effect of CCM3 depletion on the interactions established by MST4, AP-MS was performed in HEK293 cells expressing FLAG-Mst4 after CCM3 knockdown, using a quantitative mass spectrometric approach. CCM3 knockdown resulted in decreased interactions between FLAG-Mst4 and the remaining STRIPAK components (green bars), but not between FLAG-Mst4 and GM130. (In fact, the interaction with GM130 appeared to increase in some experiments (Fig. 5C and supplemental Table 5).)
Similarly, depletion of CCM3 in FLAG-GM130-expressing HEK293 cells did not disrupt the interaction between immunoprecipitated endogenous MST4 and FLAG-GM130, when evaluated by immunoblot. This confirms that MST4 is not recruited to GM130 via CCM3. We next evaluated the effect of CCM3 knockdown on the MST4 localization pattern in HeLa cells. Interestingly, in cells in which CCM3 was silenced, the localization of MST4 shifted almost completely to the Golgi region (Fig. 5E), and very little of the protein remained localized to the cytosol. Similar results were obtained using independent silencing reagents (supplemental Fig. 7), demonstrating that the observed effects are caused by the depletion of CCM3. These data suggest that CCM3 may favor the cytosolic (and perhaps punctate) localization of MST4 over Golgi localization. Because CCM3 and striatin bridge MST4 to other STRIPAK components, we expected that silencing striatins would have the same effect on MST4 Golgi localization as silencing CCM3. However, when striatins were depleted, MST4 Golgi localization was strikingly perturbed, leading to a more prominent cytosolic localization (Fig. 5E). This was accompanied by a reduction in the amount of FLAG-GM130 precipitated with endogenous MST4 (Fig. 5D). (Note that this reduction was not significantly detected in the quantitative mass spectrometry experiment with FLAG-Mst4 cells shown in Fig. 5C.) Taken together, these results suggest that CCM3 and striatin exhibit opposing roles in the localization of the kinase MST4 to the Golgi, with striatins favoring a Golgi localization and CCM3 promoting a cytosolic location.

CCM3 and Striatin Oppose Each Other in Golgi Positioning-Depletion of MST4, and more recently of CCM3, was shown to affect the positioning of the Golgi toward the leading edge of a wound (15,16). Prompted by the surprising results that CCM3 and striatin knockdowns have opposing effects on MST4 localization (and in some cases, on the interaction with GM130), the effects of depletion of each of these proteins on Golgi orientation were assessed. Golgi orientation was determined by a well established criterion: the Golgi of cells on the wound edge were considered to be oriented toward the leading edge when the majority of the stained Golgi was located within the quadrant facing the wound (26). In cells transfected with control esiRNA, ~40% (±2%, n = 4) of the cells displayed Golgi positioned toward the leading edge 1.5 h after wounding (Fig. 6, A and B). As reported previously, depletion of CCM3, using either esiRNA or siRNA, reduced the percentage of properly positioned Golgi to roughly 25% (the percentage of cells expected to randomly orient their Golgi toward the wound). By contrast, but consistent with the data presented above, depletion of striatin using esiRNA markedly enhanced Golgi orientation toward the wound, from ~40% in controls to >54%. Similar results were observed following striatin knockdown with siRNA (n = 2). These data indicate that depletion of CCM3 and striatin not only have opposite effects on MST4 localization but also have opposing effects on Golgi repositioning during wound healing.

DISCUSSION

We have described the molecular organization of the STRIPAK complex and assigned a role to the disease-related CCM3 protein as an adaptor that links the kinase and phosphatase subunits of STRIPAK.
We have also described a means by which the functions of striatin and CCM3 oppose each other, through the regulation of MST4 interactions and localization, as well as their effect on Golgi positioning after stimulation by wounding the cell monolayer. These data suggest that the interaction between CCM3 and STRIPAK, via direct association with striatin, may serve as a regulatory mechanism to control the function of the MST4 kinase. Importantly, these results also suggest that Golgi localization of MST4 may be detrimental to polarization. The Golgi apparatus has emerged as a critical hub for intracellular signaling (34), and signaling is essential for Golgi polarization. For example, phosphorylation of the Golgi protein GORASP1 (also known as GRASP65, a GM130 interaction partner) by the kinase ERK is required for Golgi reorientation (35). Interestingly, ERK activity has been shown to be modulated by CCM3 and MST4 (9); whether or not GORASP1 phosphorylation is modulated in our system remains to be tested. A large body of evidence suggests important roles for polarized localization of the Golgi (36). Cell migration requires polarized secretion at the leading edge for the regulated transport of vesicles, the delivery of adhesion molecules and cytoskeletal components, as well as the addition of new membranes. The polarized localization of the Golgi has also been intimately linked to the small proteins of the Rho-GTPase family (26,37). In light of the defects in Rho signaling following modulation of CCM1, CCM2, or CCM3 expression (12-14), it is tempting to postulate that CCM3, MST4, and perhaps STRIPAK may regulate Golgi polarization via regulation of Rho-GTPases. Whether striatins, CCM3, and MST4 play a role in all aspects of Golgi polarization, including cell migration, remains to be answered. Given that STRIPAK contains both kinase and phosphatase activities, our results suggest the existence of a molecular switch defined by the balance of phosphorylation and dephosphorylation at the Golgi. At this point, the target(s) of the MST4 kinase (or the PP2A phosphatase) in the Golgi polarization process are still unknown. Additionally, whether and how Golgi polarization may contribute to the vascular defects observed in CCM patients remains to be investigated. New roles for STRIPAK complex components are beginning to emerge, in large part through analysis of STRIPAK paralogs across species. It is noteworthy that a portion of the STRIPAK complex (lacking CCM3 and the GCKIII protein component) has been conserved throughout eukaryotic evolution. Ancestral roles for STRIPAK point to cytoskeletal and membrane dynamics functions. In Saccharomyces cerevisiae, Far8 (striatin), Far11 (STRIP1/2), and Vps64/Far10 (orthologous to the alternate STRIPAK component SLMAP), along with Far3 and Far7 (for which no human orthologs are known), form a protein complex implicated in cell cycle arrest following pheromone treatment (38). Orthologs of these ancestral STRIPAK genes are required for proper vegetative membrane fusion in filamentous fungi (39,40). The function of STRIPAK in mediating membrane fusion appears to have been conserved in mammals, as deregulation of SLMAP prevents myoblast fusion to myotubes (41). More recently, deletion of the orthologs of striatin (FAR8), STRIP1/2 (FAR11), or one of the PP2A catalytic subunits (PPG1) was demonstrated to suppress the lethality and actin cytoskeleton disorganization caused by mutations of TORC2 (target of rapamycin complex 2) (42).
Interestingly, TORC2 controls actin cytoskeleton assembly across multiple species, in part via regulation of the Rho1 GTPases (43-45). CCM disease, CCM3, and MST4 are intimately linked to Rho signaling in human cells (12-14,46), suggesting that this function of STRIPAK has been evolutionarily conserved. In addition to these roles in cytoskeleton and membrane dynamics, a surprising recent report implicated the Drosophila STRIPAK complex (including CCM3) in Hippo signaling (47), indicating that STRIPAK may control multiple signaling pathways. The elucidation of the substrates of the kinase and phosphatase components of STRIPAK will be required for a full molecular understanding of STRIPAK function. Finally, although our data point to the STRIPAK complex as the major interactor for epitope-tagged or endogenous CCM3 protein in HEK293 cells (4), HeLa cells, C2C12 myoblasts and myotubes, and bovine aortic endothelial cells (data not shown), CCM3 is also capable of interacting with CCM2 (48) and paxillin (30). Because these interactions are apparently mediated via the same surface as the striatin binding site on CCM3, we propose here that they may be mutually exclusive. Further studies on CCM3 function in vascular disease and elsewhere will need to take these alternative protein assemblies into consideration.
Immunoprotective Effect of Liver Allograft on Patients with Combined Liver and Kidney Transplantation

Background: Simultaneous liver-kidney transplantation (SLKT) and kidney transplantation (KT) after liver transplantation (LT) provide potential treatment options for patients with end-stage liver and kidney disease. There is increasing attention being given to liver-kidney transplantation (LTKT), particularly regarding the immune-protective effects of the liver graft. This retrospective, single-center, observational study aimed to evaluate the clinical outcomes of KT in LTKT patients – either SLKT or KT after LT (KALT) – compared to KT alone (KTA).

Material/Methods: We included patients who underwent KT between January 2005 and December 2020, comprising a total of 4312 patients divided into KTA (n=4268) and LTKT (n=44) groups. The LTKT group included 11 SLKT and 33 KALT patients. To balance the difference in sample sizes between the 2 groups, we performed 3:1 propensity score matching (PSM).

Results: There was no significant difference in graft survival between the groups. However, the LTKT group exhibited significantly superior rejection-free survival compared to the KTA group (P<0.001). Although the difference in the rejection-free graft survival rate disappeared (P=0.081) following PSM, a significant difference was observed when the LTKT group was subdivided into SLKT and KALT groups (P=0.047). The SLKT group showed no rejection episodes during the follow-up period, while the KALT group did not exhibit a significant difference in rejection compared to the KTA group.

Conclusions: These findings suggest that SLKT may have a protective effect against rejection compared to KTA, highlighting a potential immunoprotective role of liver grafts on kidney grafts.

Background

Liver transplantation (LT) is a life-saving procedure for patients with end-stage liver disease, many of whom also suffer from renal dysfunction [1]. Approximately 20-30% of individuals with cirrhosis experience renal insufficiency, primarily due to glomerulonephritis associated with chronic viral or alcoholic hepatitis [2]. Moreover, the occurrence of acute renal failure after liver transplantation is reported to be as high as 70%, with 7% of instances necessitating dialysis, thereby exacerbating patient mortality rates and decreasing the survival rate of the transplanted liver graft [3]. Higher mortality rates for simultaneous liver-kidney transplantation (SLKT) were observed in the early 2000s [4]; however, it has emerged as a promising treatment option recently, with 10% to 20% of LT in the United States now being performed in conjunction with kidney transplantation (KT) [5]. Even if liver transplantation is successfully performed, long-term exposure to calcineurin inhibitors increases the risk of developing chronic renal failure, especially in patients with borderline renal function. Additionally, pre-existing renal diseases such as hypertension, diabetes, and glomerulonephritis further contribute to long-term chronic renal failure, leading to an increasing number of patients requiring kidney transplantation after liver transplantation (KALT) [6].
Since implementation of the Model for End-Stage Liver Disease (MELD) allocation policy in 2002, patients who undergo liver transplants have been experiencing a more significant burden of renal dysfunction, as serum creatinine is a significant factor in the MELD equation [7]. The consideration of serum creatinine means that a subset of these patients may need a KT as well as an LT, resulting in an increase in the number of candidates for SLKT. Nonetheless, the presence of recurrent early complications remains a substantial barrier to the implementation of SLKT. Despite ongoing efforts to reduce surgical complications in SLKT, patients undergoing SLKT still experience a higher mortality rate compared to patients receiving KTA and KALT [8,9]. This higher mortality rate is primarily attributable to surgical complications in the first year following transplantation. However, SLKT patients demonstrate superior rejection-free graft survival, indicating an immune-protective effect of the liver graft and its potential role in improving long-term outcomes for the kidney graft [8,9]. Clinical evidence has provided significant support for the immune-protective impacts of liver allografts on kidney allografts in recipients of SLKT compared to those who undergo kidney transplantation alone (KTA) [4,6,10-15]. The phenomenon known as the sponge effect of a liver graft has been postulated to play a crucial role in the sequestration of circulating anti-human leukocyte antigen (HLA) antibodies. However, there is currently a lack of large-scale single-center studies reporting the clinical outcomes of KALT [6,12,13,15-17]. This study aimed to evaluate and compare the outcomes of patients who underwent SLKT or KALT with those who underwent KTA by utilizing propensity score matching. In particular, by comparing the differences in rejection-free kidney graft survival between the groups, our goal was to understand and substantiate the potential immune-protective effect of liver grafts on kidney grafts.
Patients

This study was a retrospective, observational analysis conducted at a single center, encompassing a cohort of patients who underwent KT from January 2005 through December 2020. We excluded patients who also received a pancreas (n=216) or heart (n=12) transplant concurrently. Consequently, a total of 4312 patients were considered for our investigation. These patients were classified into 2 primary groups: the KTA (n=4268) and LTKT (n=44) groups. Within the LTKT group, 11 patients underwent SLKT, while 33 patients had a KALT. All individuals in the SLKT cohort received transplants from a single deceased donor for both the liver and kidney. However, in the KALT group, the liver and kidney grafts were obtained from different donors. Our study utilized data from the Asan Medical Center (AMC) registry in Seoul, Korea. The database undergoes a yearly update via a comprehensive evaluation of the medical records pertaining to patients who have undergone KT. The Institutional Review Board at the AMC (AMC IRB number 2022-0737) granted approval for the study's protocols. The Institutional Review Board (IRB) waived the requirement for informed consent because this was a retrospective study classified as Level 1, with minimal associated risk. We accessed the data for research purposes from June 1, 2022, to February 28, 2023. To maintain participant anonymity, the authors did not access any personally identifying information during the data collection procedure. For the objectives of this study, only the necessary de-identified medical information was retrieved. The present study adhered to the ethical principles outlined in the World Medical Association Declaration of Helsinki.

Immunosuppression

This study implemented immunosuppressive therapies, including those for SLKT, in accordance with the standard protocols for KT. According to immunological risk, either basiliximab, an anti-IL-2 receptor antibody, or anti-thymocyte globulin (ATG) was employed for induction therapy. We employed calcineurin inhibitors, corticosteroids, and mycophenolic acid for maintenance therapy. Target trough levels for tacrolimus and cyclosporine in the early postoperative phase were 7-10 ng/mL and 100-150 ng/mL, respectively. However, after the first postoperative year, these target concentrations for tacrolimus and cyclosporine were gradually decreased to 3-6 ng/mL and 50-100 µg/L, respectively. For ABO-incompatible (ABOi) and crossmatch (XM)-positive KT, a single dose of rituximab (200-500 mg) was given 1-2 weeks prior to plasmapheresis, with or without intravenous immunoglobulin.

Definitions

The time from transplantation to the recipient's death was defined as patient survival (PS), and the time from transplantation to the patient's return to dialysis, re-transplantation, or last follow-up with a functional graft was classified as graft survival (GS). We also examined death-censored GS (DCGS), in which data were censored if a patient died with a functional transplant. The interval between kidney transplantation and the incidence of acute rejection (AR), as determined by pathology, was called rejection-free GS. The diagnosis of AR was made based on pathological examination following the Banff criteria [18]. Additionally, C4d staining was conducted on all tissue samples. In this study, all references to AR pertain to kidney grafts only, not liver grafts. In our standard practice, we only conduct indication biopsies, and routine protocol biopsies were not performed.
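As a hedged sketch of how the endpoints defined above might be derived from per-patient records (the record layout, dates, and column names below are illustrative assumptions, not the registry's actual schema; overall GS here treats death with a functioning graft as an event, one common convention, while DCGS treats it as censoring):

```python
import pandas as pd

# Hypothetical per-patient records; all dates are invented.
df = pd.DataFrame({
    "tx_date": pd.to_datetime(["2010-01-01", "2012-05-01"]),
    "graft_loss": pd.to_datetime([None, "2015-03-01"]),  # dialysis / re-Tx
    "death": pd.to_datetime(["2018-06-01", None]),
    "last_followup": pd.to_datetime(["2018-06-01", "2020-12-31"]),
})

# Follow-up ends at the first of graft loss, death, or last follow-up.
end = df[["graft_loss", "death", "last_followup"]].min(axis=1)
df["gs_years"] = (end - df["tx_date"]).dt.days / 365.25

# Overall GS: graft loss or death counts as an event.
df["gs_event"] = df["graft_loss"].notna() | df["death"].notna()

# DCGS: death with a functioning graft is censoring, not an event
# (this simplified sketch reuses the same end date for both endpoints).
df["dcgs_event"] = df["graft_loss"].notna()
print(df[["gs_years", "gs_event", "dcgs_event"]])
```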
Statistical Analysis

The chi-squared test or Fisher's exact test was employed to compare categorical variables, while Student's t-test was utilized for continuous variables, as deemed appropriate. We employed Kaplan-Meier analysis to compute cumulative survival and conducted group comparisons using a log-rank test. Univariate and multivariate Cox proportional hazard regression analyses were used to identify risk factors contributing to rejection-free graft survival, with the results presented as hazard ratios (HRs). Variables with a P-value of less than 0.1 were incorporated into the multivariate analysis, which was conducted using the forward conditional method. The statistical analyses were performed using SPSS version 18.0 (SPSS, Inc., Chicago, IL, USA). To address the issue of differences in group sizes, we utilized propensity score matching (PSM) in a 3:1 ratio, employing the Statistical Analysis System (SAS®, SAS Institute, Inc., Cary, NC, USA) for this purpose. We included recipient variables in our PSM that have clinical relevance to post-transplant AR. These included age, sex, diabetes mellitus (DM), hypertension, hepatitis virus infection, body mass index (BMI), ABOi KT, FCXM KT, HLA-A, B, DR mismatches (MM), panel-reactive antibody (PRA) level, type of donor (deceased or living), and calcineurin inhibitors. The matching process was iterated until achieving the optimal matching of all covariates, ensuring that the standardized mean differences (SMD) were below 0.1. A significance level of 0.05 was chosen, and a P-value below this threshold was considered statistically significant.

Results

Patient Demographics

Table 1 presents the baseline and clinical characteristics of the 4312 study patients. Statistically significant differences between the groups were observed in the proportion of female patients (42.3% vs 15.9%, P<0.001), the prevalence of hepatitis B virus (HBV) (3.1% vs 27.3%, P<0.001), and the incidence of hypertension (84.5% vs 72.7%, P=0.033). Furthermore, a trend toward a higher rate of DM was noted in the LTKT group (23.9% vs 36.4%, P=0.053). There were no significant differences between the 2 groups for any other factors, such as age, body mass index, ABO incompatibility, positive FCXM, HLA-A, B, DR MM, PRA class I, PRA class II, type of donor, or type of calcineurin inhibitor and induction agent utilized.

Postoperative Clinical Outcomes

A summary of the postoperative clinical outcomes seen in the 11 individuals who had SLKT is shown in Table 2. The average length of stay for these patients in the intensive care unit (ICU) was 14.8 days, ranging from 0 to 40 days. Five of these patients had delayed graft function (DGF), which is defined by the need for hemodialysis. Graft function was not recovered in 1 patient, leading to graft failure. Three cases of postoperative hemorrhage required surgical intervention. The range of the patients' MELD scores was 21 to 33. The 1-year PS rate showed no significant difference between the KTA and LTKT groups (1.7%, n=72 vs 4.5%, n=2; P=0.15).
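The graft survival comparisons that follow use the Kaplan-Meier and log-rank procedures described under Statistical Analysis. The study itself used SPSS; the sketch below is an illustrative Python translation with the lifelines library, and the file name and column names are assumptions, not the study's actual dataset.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Assumed layout: one row per patient with group, follow-up years, event flag.
df = pd.read_csv("cohort.csv")  # hypothetical file
ltkt = df[df["group"] == "LTKT"]
kta = df[df["group"] == "KTA"]

# Kaplan-Meier curves for each group on the same axes.
kmf = KaplanMeierFitter()
kmf.fit(ltkt["gs_years"], event_observed=ltkt["gs_event"], label="LTKT")
ax = kmf.plot_survival_function()
kmf.fit(kta["gs_years"], event_observed=kta["gs_event"], label="KTA")
kmf.plot_survival_function(ax=ax)

# Log-rank test comparing the two survival distributions.
res = logrank_test(ltkt["gs_years"], kta["gs_years"],
                   event_observed_A=ltkt["gs_event"],
                   event_observed_B=kta["gs_event"])
print(f"log-rank P = {res.p_value:.3f}")
```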
Long-term kidney graft survival was analyzed using Kaplan-Meier analysis (Figure 1). This study did not find a statistically significant difference in the overall GS (P=0.068) or DCGS (P=0.151). However, for rejection-free GS, a significant difference was noted between the LTKT and KTA groups (P<0.001) (Figure 2). A statistically significant difference was also observed when the LTKT group was divided into the SLKT and KALT groups (P=0.012). Notably, no AR occurred in the SLKT group during the follow-up period, while the KALT group did not show a significant difference compared to the KTA group (P=0.10). Thus, the significantly better rejection-free GS in the LTKT group is attributed to the absence of rejection in the SLKT group. The LTKT group exhibited a significantly lower overall rejection rate of 6.8% compared to 22.8% in the KTA group (P=0.012). However, both groups showed similar 1-year rejection rates (P=0.43) and distribution of rejection types, with acute cellular rejection at 4.5% for LTKT and 8.9% for KTA, and acute antibody-mediated rejection at 6.8% for LTKT and 13.7% for KTA, without any significant differences (P=0.49) (Supplementary Table 1).

In Table 3, we identify risk factors associated with acute rejection post-transplantation. Several variables demonstrated statistical significance in the univariate analysis examining risk factors for AR following transplantation. These included age, BMI, tacrolimus vs cyclosporine, flow cytometry crossmatch (FCXM) positivity, HLA-A, B, DR MM, PRA class I and II, living vs deceased donor, and the LTKT vs KTA group. Only variables from the univariate analysis with a P-value of less than 0.1 were included in the subsequent multivariate analysis. In the multivariate analysis, age (HR=0.83, 95% CI 0.78-0.81, P<0.001), BMI (HR=1.01, 95% CI 1.00-1.01, P=0.004), tacrolimus use in comparison to cyclosporine (HR=0.56, 95% CI 0.46-0.64, P<0.001), and positive FCXM (HR=1.39, 95% CI 1.07-1.82, P=0.016) remained significant. After adjusting for possible confounders, the LTKT group showed a protective effect against AR, with a hazard ratio of 0.27 (95% CI: 0.09-0.83; P=0.023).

Propensity Score Matching Analysis

Table 4 presents a comparative analysis of the cohort of patients who underwent KTA, comprising 132 individuals, and the LTKT group, comprising 44 individuals. Both groups were matched using a propensity score to reduce bias from the nonrandomized study design. The SMD values, which are all less than 0.1, suggest that after propensity score matching, the 2 groups were reasonably balanced regarding their baseline and clinical characteristics.

(Table footnote: Continuous data are presented as means±standard deviations; categorical data are presented as number (%). KT, kidney transplantation; LT, liver transplantation; FCXM, flow cytometry crossmatch; DSA, donor-specific antibody; PRA, panel-reactive antibody; SLKT, simultaneous liver-kidney transplantation.)

Table 5 presents the impact of LTKT on the rejection-free GS of kidney grafts in both unadjusted and propensity score-matched conditions. The HR for rejection-free GS was 0.27 (95% CI: 0.09-0.83; P=0.023) and statistically significant prior to the PSM analysis. However, after PSM to account for possible confounding variables, the HR increased slightly to 0.36 (95% CI: 0.11-1.19) and was no longer statistically significant (P=0.095). This suggests that while there may be a protective effect of LTKT against graft rejection, the effect is not statistically significant when controlling for other factors.
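The 3:1 matching with an SMD balance check described above was performed in SAS; the sketch below is an illustrative Python translation under stated assumptions (abbreviated covariate list, nearest-neighbor matching with replacement, no caliper), not a reproduction of the authors' procedure.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

covars = ["age", "bmi", "dm", "abo_incompatible"]  # abbreviated, assumed
df = pd.read_csv("cohort.csv")  # hypothetical layout; 'ltkt' = 1 for LTKT

# Propensity score: probability of being in the LTKT group given covariates.
model = LogisticRegression(max_iter=1000).fit(df[covars], df["ltkt"])
df["pscore"] = model.predict_proba(df[covars])[:, 1]

treated = df[df["ltkt"] == 1]
control = df[df["ltkt"] == 0]

# 3:1 nearest-neighbor match on the propensity score.
# (This simple version matches with replacement and may reuse controls.)
nn = NearestNeighbors(n_neighbors=3).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]])

def smd(x_t, x_c):
    """Standardized mean difference; values below 0.1 taken as balanced."""
    pooled_sd = np.sqrt((x_t.var() + x_c.var()) / 2)
    return abs(x_t.mean() - x_c.mean()) / pooled_sd

for c in covars:
    print(c, round(smd(matched.loc[matched.ltkt == 1, c],
                       matched.loc[matched.ltkt == 0, c]), 3))
```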
Long-Term Clinical Outcomes After PSM

Following PSM, the Kaplan-Meier analysis revealed no statistically significant disparities in the overall GS (P=0.052) or DCGS (P=0.109) (Figure 3). Upon examination of the rejection-free GS, no significant distinction (P=0.081) was observed between the cohorts who underwent LTKT and those who received KTA. However, a meaningful difference was observed when the LTKT group was divided into the SLKT and KALT groups (P=0.047). The SLKT group had no rejection, while the KALT group showed no significant deviation compared to the KTA group (P=0.28) (Figure 4). These findings suggest that after adjusting for confounding factors through PSM, the SLKT group experienced markedly fewer rejection episodes than the KTA group.

Discussion

Our study revealed that the SLKT group exhibited no rejection during the observational period. Additionally, in the PSM analysis, the SLKT group showed a significant difference in long-term rejection-free GS compared to the KTA group. In contrast, there was no significant difference between the KALT and the KTA groups in terms of rejection-free GS. These findings imply that the immune-protective effect is more pronounced when the liver and kidney are transplanted at the same time, particularly when they are obtained from the same donor.

Our findings are consistent with previous research investigating kidney transplant outcomes in SLKT patients with similar immunological risk profiles [4,9,15,19]. These prior studies reported long-term rejection rates ranging from 4% to 10% in SLKT recipients. They also showed that the SLKT group had a considerably lower incidence of kidney graft rejection than the KTA group. In our study, the SLKT group comprised only 11 patients; 1 patient was eliminated from further follow-up due to early graft failure. Thus, 10 people served as the true sample size for studying rejection incidents in the SLKT group. A previous report showed that the combined liver-kidney transplantation group exhibits higher rejection-free renal GS rates at both 1 and 3 years (86% and 79%, respectively) compared to the KALT group (75% and 61%, respectively), highlighting the potential immune-protective benefits of combined liver-kidney transplantation [9]. According to our research, the immune-protective effect against rejection might not be as strong in the KALT group, which received the liver and kidney grafts from separate donors, as in the SLKT group. Additionally, in an intra-operative kinetics study of donor-specific antibody (DSA) in SLKT patients, it was observed that DSA levels significantly decreased after liver graft perfusion, while non-DSA levels remained unchanged [20]. These findings suggest that the immune-protective effect of the liver grafts in decreasing DSA is antigen-specific.
One plausible speculation to account for the observed immune-protective effects associated with the liver allograft in our study is that the liver allograft may have contributed to the clearance or neutralization of circulating HLA antibodies [12,20,21]. The reduction in DSA was observed to start immediately following liver graft reperfusion [20]. This decrease in DSA persists for 2-3 days after LT, resulting in comparable renal transplantation outcomes between XM-positive and XM-negative patients after LT [21]. Observational studies have indicated that SLKT leads to a decrease in DSA and subsequently results in a lower incidence of rejection episodes [4,12,15,21]. Numerous studies have consistently reported a significant decrease in DSA, particularly in HLA class I DSA, whereas HLA class II DSA displays a decreasing trend without reaching statistical significance [12,20,21]. A plausible hypothesis is that the expression of class I and class II HLA antigens in the liver parenchyma and vasculature differs from that in the renal allograft. Liver allografts primarily express MHC class I on hepatocytes, while the expression of class II molecules is less prominent [12,22]. Animal experiments also showed that non-parenchymal hepatic cells in liver grafts could effectively absorb lymphocytotoxic alloantibodies and complement, highlighting the immunological role of liver grafts in antibody removal [16].

Several hypotheses have been proposed to elucidate the mechanisms underlying the immune-protective effect of liver grafts. First, the liver's distinct, vast sinusoidal endothelial surface enables the absorption of circulating antibodies, leading to their clearance from the circulation [16,23]. In a study conducted in rat models, evidence was presented to support the rapid clearance of DSA from the circulation following LT, within approximately 30 min [16]. Furthermore, with the aid of its resident phagocytic Kupffer cells, the liver secretes soluble HLA molecules with the capacity to bind and neutralize corresponding antibodies. Notably, especially in the initial post-transplant period, it has been found that the majority of circulating soluble HLA antigen molecules have the donor's phenotype [24,25]. Additionally, the immune-protective effect of the liver can be attributed to its remarkable regenerative capacity, allowing it to recover and maintain its structure and function despite immune-mediated hepatocellular injury [25]. In the context of SLKT in comparison to KTA, it has been observed that SLKT patients demonstrate reduced occurrences of alloreactive CD4+, CD8+, effector memory, and interferon-γ-producing T cells. This implies that SLKT is linked to the upregulation of gene transcription related to anti-inflammatory processes and the inhibition of immune cell subsets that cause damage [26]. Given the immune-protective mechanisms exhibited by the liver, it is reasonable to suggest that this immunotolerance effect could be relevant to acute cellular rejection as well as acute and chronic antibody-mediated rejection.

In the present study, an assessment was conducted on each individual instance of rejection. Nonetheless, due to the limited number of participants in the LTKT group and the overall infrequency of rejection incidents, it was difficult to conduct a full comparison between each distinct form of rejection observed in the LTKT group and the KTA group.
The surgical logistics of combined liver-kidney transplantation require careful consideration of the timing of kidney transplantation, given the challenges presented by the simultaneous transplantation of both liver and kidney allografts. Maintaining a low central venous pressure and achieving a balanced fluid status to reduce graft congestion are necessary for maximizing liver allograft function. On the other hand, when low central venous and systolic pressures are present or vasopressors are required to keep blood pressure stable, the kidney allograft performs poorly [27]. Patients undergoing liver transplantation are typically in a compromised physiological state, with significant coagulopathy and hemodynamic instability, which can negatively impact the newly implanted kidney allograft. Moreover, patients with hyperbilirubinemia may experience bilirubin crystallization in the transplanted kidney's tubules, which increases the risk of acute kidney injury and renal dysfunction [28]. Although surgical complications in the LTKT group did not directly result in mortality in our study, they did contribute to longer ICU stays, requiring more transfusions and vasopressor support. This illustrates that the surgical burden may be greater in the LTKT group, highlighting an increased risk of bleeding and demonstrating the potential challenges and complexities associated with these procedures.

In fact, within our study population, 1 patient who underwent SLKT did not recover from DGF. To minimize surgical complications in SLKT, various approaches have been introduced, and the "Indiana approach" serves as a notable example [27]. This approach involves postponing the kidney graft implantation for 2-3 days after LT, during which time the kidney allograft is stored on a hypothermic pulsatile perfusion machine. Delaying KT provides several benefits, such as stabilizing the patient's hemodynamic condition and coagulopathy, reducing blood loss during the KT by decompressing varices, and minimizing the risk of pressor-related DGF by weaning the patient off vasopressors before KT. Furthermore, delaying the kidney implantation also allows for the clearance of post-liver-reperfusion debris and bilirubin from the circulation [29]. Machine perfusion for kidney transplantation has not been implemented in South Korea, primarily because the country's relatively small size and efficient transportation network generally allow cold ischemic times to be within 5-6 h, rarely exceeding 12 h. In our study cohort as well, the average cold ischemic time (CIT) was 305±123.9 (range: 30-995) min. Nonetheless, we anticipate that machine perfusion could become a valuable resource in multi-organ transplants such as SLKT, particularly in cases with a considerable risk of severe DGF, or in assessing the suitability of organs that might otherwise be discarded [30].

Our research has several limitations. First, it was a single-center, retrospective study, potentially constraining the applicability of the results. Moreover, notable disparities were observed in the sample sizes of the patient cohorts under investigation.
To overcome the size disparities between the LTKT and KTA groups, we conducted a PSM analysis. Second, our research did not include immunological markers, such as DSA trends or protocol biopsy outcomes. Therefore, immunological effects that have not yet manifested clinically may have received insufficient analysis. However, given that the mean duration of follow-up for the included patient cohort was 148±6.9 months, we believe it was sufficient for evaluating clinical outcomes. Third, our study enrolled patients over a long time span, January 2005 to December 2020. Consequently, patients in the earlier years of the study period may have received different perioperative management or immunosuppressive protocols than those in more recent years. Multivariate analysis was utilized to compensate for these differences. Lastly, there was the possibility that the clinician's choice of induction agent could influence the study outcomes. To address this concern, we used a comprehensive multivariate analysis to minimize any potential influence.

Conclusions

In conclusion, our study demonstrated that SLKT is associated with a reduced incidence of rejection and improved long-term graft survival without rejection compared to KTA. However, there was no significant difference between the KALT and KTA groups in terms of rejection-free GS. These results demonstrate the immune-protective effect of concomitant liver and kidney transplantation, particularly when the organs come from the same donor. The increased rate of rejection seen in the KALT group compared to the SLKT group further strengthens this conclusion. Further research is needed to elucidate the underlying mechanisms of the immune-protective effect of liver grafts and to assess long-term outcomes in larger cohorts of SLKT recipients. Additionally, tailored patient selection for SLKT and optimization of pre- and postoperative management strategies are necessary to further enhance graft survival in patients with end-stage liver disease and chronic kidney disease.

Figure 3. Kaplan-Meier curves of (A) overall graft survival and (B) death-censored graft survival for the kidney allograft in the propensity matching analysis. LTKT - liver-kidney transplantation; KT - kidney transplantation.

Figure 4. Kaplan-Meier curve of rejection-free graft survival for the kidney allograft in the propensity matching analysis. LTKT - liver-kidney transplantation; KT - kidney transplantation; SLKT - simultaneous liver-kidney transplantation; KALT - kidney transplantation after liver transplantation.

Table 1. Baseline and clinical characteristics of the study patients.

Table 2. Postoperative clinical courses of patients with SLKT.

Table 3. Risk factors associated with acute rejection after transplantation.

Table 4. Baseline and clinical characteristics of propensity score-matched groups.

Table 5. Effect of LTKT on rejection-free graft survival of kidney transplantation after propensity score matching.

Supplementary Table 1. Comparative analysis of rejection types and incidences.
2024-01-09T16:23:13.439Z
2023-12-27T00:00:00.000
{ "year": 2023, "sha1": "b04d4a9179230cb8883f9db482a9801a1e74c9fd", "oa_license": "CCBYNCND", "oa_url": "https://annalsoftransplantation.com/download/inPress/idArt/942763", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "2b5cfde43c694ba9cfe2bfbd7b8985ba8c301595", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
229930628
pes2o/s2orc
v3-fos-license
Difunctionalization of Alkenes and Alkynes via Intermolecular Radical and Nucleophilic Additions

Abundant and readily available alkenes and alkynes are good substrates for the preparation of functionalized molecules through radical and/or ionic addition reactions. Difunctionalization is a topic of current interest due to its high efficiency, substrate versatility, and operational simplicity. Presented in this article are difunctionalization reactions of alkenes or alkynes that proceed by radical addition followed by oxidation and nucleophilic addition. The difunctionalization can be accomplished through 1,2-addition (vicinal) or through 1,n-addition (distal or remote) when H-atom or group transfer is involved in the reaction process. A wide range of moieties, such as alkyl (R), perfluoroalkyl (Rf), aryl (Ar), hydroxy (OH), alkoxy (OR), acyloxy (O2CR), halogen (X), amino (NR2), azido (N3), cyano (CN), as well as sulfur- and phosphorus-containing groups, can be incorporated through these difunctionalization reactions. Radicals generated from peroxides or single electron transfer (SET) agents, or under photoredox or electrochemical conditions, are employed for the reactions.

Introduction

Radical-based reactions such as homolytic bond cleavage, single electron transfer, atom or group transfer, radical addition, and radical coupling processes are important organic transformations. In addition to the success of radical polymerization for making synthetic polymers, organic radical reactions have also become increasingly important tools for the preparation of small molecules through radical addition, cyclization, cascade bond formation, H-atom and group transfer, and radical-radical coupling reactions [1,2]. Active research on direct C-H bond functionalization [3], remote (distal) functionalization [4], Smiles-type ipso-group rearrangement [5], cascade reactions [6], photoredox catalysis [7], and electrochemical reactions [8] has significantly fueled the chemistry of synthetic radicals.

This paper covers difunctionalization reactions which are initiated with a radical addition and ended with a nucleophilic addition. It is organized based on the nature of the initial radicals, including C-, N-, O-, P-, S-centered and perfluoroalkyl (Rf) radicals. Reactions involving electrophilic addition (Scheme 1, II-A), radical-radical coupling such as atom transfer radical addition (ATRA) [19] or metal-catalyzed coupling (Scheme 1, II-B) [20], radical cyclization for ring formation (Scheme 1, II-C) [21], and cascade reactions [6] involving more than two steps of radical additions are not included in this paper. Those types of reactions might be the topics of our future review articles.

Carbon Radical-Initiated Reactions

Presented in this section are carbon radical-initiated reactions in which C-, O-, and N-containing moieties serve as the nucleophiles for the difunctionalizations. The initial carbon radicals can be generated by metal-containing SET agents, using peroxides, or under photoredox conditions. Most alkenes used for difunctionalization are styrene derivatives, because addition of initial radicals to styrenes to form stabilized benzylic radicals is regioselective, and the benzylic radicals can be readily oxidized to benzylic cations for nucleophilic additions.
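Although the individual systems surveyed below differ in radical source, oxidant, and nucleophile, they share the radical-polar crossover logic just described. As a compact summary (our shorthand, not a scheme from the original article; X· denotes the initiating radical, [O] a single-electron oxidant, and Nu− the nucleophile), the generic sequence for a styrene substrate can be written as:

\begin{align*}
\mathrm{X^{\bullet}} + \mathrm{CH_2{=}CHAr} &\longrightarrow \mathrm{X{-}CH_2{-}\dot{C}HAr} && \text{(regioselective radical addition)}\\
\mathrm{X{-}CH_2{-}\dot{C}HAr} &\xrightarrow{\;[\mathrm{O}],\ -e^-\;} \mathrm{X{-}CH_2{-}\overset{+}{C}HAr} && \text{(SET oxidation to a benzylic cation)}\\
\mathrm{X{-}CH_2{-}\overset{+}{C}HAr} + \mathrm{Nu^-} &\longrightarrow \mathrm{X{-}CH_2{-}CH(Nu)Ar} && \text{(nucleophilic addition)}
\end{align*}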
The Du and Wang labs developed a highly efficient Re-catalyzed oxyalkylation of alkenes using hypervalent iodine reagents (HIRs) as dual functionalization agents that provide both alkyl radicals and carboxylate anions for addition (Scheme 2) [22]. In the reaction process, cationic species 2, generated through heterolytic cleavage of an I-O bond of the HIR, are reduced by Re(I) to afford iodine radical intermediates 3. Homolytic breaking of the remaining I-O bond gives acyloxy radicals which undergo decarboxylation to generate alkyl radicals. Addition of the alkyl radicals to the alkene followed by oxidation with Re(I) gives cations 4. Finally, nucleophilic attack on 4 by carboxylate anions gives difunctionalized products 1.

Scheme 2. Re-catalyzed difunctionalization with HIRs.

The Li group reported Ag-mediated 1,2-alkylarylation of styrenes with α-carbonyl alkyl bromides and indoles (Scheme 3) [23]. The Fe salt is used as a Lewis acid to stabilize the radicals. The initial alkyl radicals 6, produced from α-carbonyl alkyl bromides in the presence of 2 equiv of an Ag(I) salt, add to styrenes to afford benzylic radicals which are oxidized by Ag(II) to cations and then undergo the Friedel-Crafts reaction with indoles to give products 5.

Scheme 3. Ag-catalyzed 1,2-alkylarylation of styrenes.
The Bao group reported Fe-catalyzed decarboxylative alkyl etherification of vinylarenes with aliphatic acids and alcohols (Scheme 4) [24]. Primary, secondary and tertiary aliphatic acids could be used as alkylating reagents, and primary and secondary alcohols as nucleophiles. Alkyl radicals generated from the reaction of the Fe(II)X2 complex with peresters add to styrenes to form benzylic radicals which are oxidized to carbocations by Fe(III) species and then undergo nucleophilic reaction to give products 8.

The Nishikata group reported a Cu-catalyzed reaction for difunctionalizing styrenes using α-bromocarbonyl compounds as a source of tertiary alkyl radicals and alcohols as nucleophiles (Scheme 5) [25]. Single electron transfer (SET) from Cu(I) induces the formation of radicals 10 from the α-bromocarbonyl compounds. Addition of the radicals to styrenes, followed by oxidation with Cu(II) to benzylic cations and trapping with alcohol nucleophiles, gives products 9. It was found that Lewis acids (LA), such as Zn(OTf)2, could accelerate the nucleophilic addition of alcohols.

The Yang group reported Fe-catalyzed decarbonylative alkylazidation of styrenes with aliphatic aldehydes and trimethylsilyl azide (Scheme 6) [26]. Using (t-BuO)2 (DTBP) as an oxidant and radical initiator, a series of aliphatic aldehydes were converted to primary, secondary and tertiary alkyl radicals to react with styrenes, followed by nucleophilic azidation with TMSN3. In the presence of the Fe catalyst, the t-butoxy radical formed through homolytic cleavage of DTBP abstracts the hydrogen from aldehydes; decarbonylation then gives alkyl radicals which react with styrenes to form benzylic radicals. Oxidation to the cations followed by reaction with TMSN3 gives products 11. The authors also proposed an alternative radical-coupling mechanism for the formation of the products.
The Zhu group reported a Cu-catalyzed methylalkoxy difunctionalization of alkenes using dicumyl peroxide (DCP) as a methyl source and alcohols as nucleophiles (Scheme 7) [27]. The t-alkoxy radical generated from the cleavage of DCP undergoes β-scission to release the methyl radical, which is captured by alkenes to produce benzyl radicals 13; these are oxidized to carbenium ions by Cu(II) and then react with alcohols to furnish the methylated ether products 12.

Scheme 7. Cu-catalyzed methylalkoxy difunctionalization of styrenes.

The Klussmann group reported a reaction for difunctionalization of styrenes with carbon radicals derived from thioxanthene/xanthene and thiophenols, followed by nucleophilic addition of acetonitrile or alcohols (Scheme 8) [28]. The combination of benzoyl peroxide (BPO) with HPF6 is a key factor in this reaction. The benzoyloxyl radical generated from BPO abstracts an H-atom from thioxanthene to give a new radical which then adds to styrenes; oxidation with BPO in the presence of HPF6 forms the benzylic cations 15. Acetonitrile or alcohols react with the carbocations in the fashion of a Ritter reaction to generate products 14a or 14b.
The Greaney group reported photoredox-catalytic arylalkoxy or arylamino difunctionalization of styrenes using Zn(OAc)2 to enhance the reaction process (Scheme 9) [29]. Aryl radicals generated from the reduction of Ar2IBF4 under Ir(III) catalysis add to styrenes to form benzylic radicals which are oxidized by Ir(IV) to cations. Addition of nucleophiles (alcohols, H2O or nitriles) affords difunctionalized products 16.

Scheme 9. Aryl/alkyloxy or alkylamino difunctionalization of styrenes.

The Glorius lab developed a method for alkyloxylation of styrenes through a photo-electron-transfer-initiated reaction of N-(acyloxy)phthalimides (Scheme 10) [30]. Water and alcohols act as nucleophiles and hydrogen-bond donors to afford alcohol and ether products 17. The reactions involve SET from light-emitting diode (LED)-excited [Ir(ppy)2(dtbbpy)]+ to the hydrogen-bond complexes 18 to give radical anions 19, which then undergo N-O bond cleavage followed by decarboxylation to afford alkyl radicals. Radical addition to the olefin followed by oxidation with [Ir(ppy)2(dtbbpy)]2+ affords cations which are trapped by nucleophiles to give the alkyloxylated products 17.
The Li lab reported an alkylarylation reaction of styrenes with α-carbonyl alkyl bromides and N,N-disubstituted anilines under photoredox catalysis with Ru(bpy)3Cl2 in the presence of a Cu salt (Scheme 11) [31]. The light-excited catalyst Ru(bpy)3 2+ transfers a single electron to ethyl 2-bromo-2-methylpropanoates to form alkyl radicals 21, which add to styrenes; oxidation by Ru(bpy)3 3+ gives cationic intermediates 22, and a subsequent Friedel-Crafts reaction gives products 20.
The Klussmann group reported an Ir-promoted photoredox reaction of styrenes with aldehydes and N-alkylindoles (Scheme 12) [32]. Both aryl and aliphatic aldehydes could be used for the generation of acyl radicals. The t-butoxyl radical, generated through SET between photo-excited Ir(III) and tBuOOBz (TBPB), reacts with aldehydes to form acyl radicals which add to styrenes to form radicals 24; these are then oxidized by Ir(IV) to carbocations. The Friedel-Crafts reaction of the carbocations with N-alkylindoles gives the final products 23.

Nitrogen Radical-Initiated Reactions

Presented in this section are azido radical (N3·)-initiated difunctionalization reactions. The azido radical can be generated from the Zhdankin reagent or TMSN3 [33]. The Zhu group developed a transition-metal-free reaction for azidocyanation of alkenes involving 1,4- or 1,5-cyano migration (Scheme 13) [34]. An azido radical generated from the reaction of TMSN3 with PhI(OAc)2 adds to cyanohydrin alkenes. The resulting radicals undergo 1,4-cyano group migration (n = 1) through a cyclization and ring-opening process to give the more stable hydroxyalkyl radicals 25. Oxidation of the radicals with PhI(OAc)2 (PIDA) followed by deprotonation affords the desired product. The Zhu group applied this strategy to the reaction of γ-hydroxyalkenes in the synthesis of 1,2-difunctionalized compounds 26 involving 1,4-heteroaryl group migration (Scheme 14) [35].
Zhu and coworkers developed a photoredox reaction for 1,2-difunctionalization of 1,3-dienes with N-aminopyridinium salts and TMSNCS (Scheme 16) [37]. Under the photocatalysis of fac-Ir(ppy)3, amino radicals derived from the N-aminopyridinium salts add to 1,3-dienes to form allylic radicals 30 which are oxidized to cations, followed by a reaction with TMSNCS to give difunctionalized products 29.

Oxygen Radical-Initiated Reactions

Difunctionalization reactions of unsaturated carbon-carbon double bonds initiated by the addition of oxygen radicals are limited in the literature. The oxygen radicals described in this section are alkoxy, acyl and phthalimide N-oxyl (PINO) radicals, which are generated from the homolytic cleavage of N-O or O-H bonds. The Zhu group developed a t-butyl hydroperoxide (TBHP)-induced synthesis of α-hydroxy esters 31a or α-hydroxy amines 31b by the reaction of alkenes with carboxylic acids or amines in the presence of H2O, using TBHP (70% in water) as an oxidant (Scheme 17) [38]. t-Butoxyl and t-butylperoxy radicals generated from TBHP induce the formation of carboxyl radicals which add to styrenes; oxidation to benzyl cations and trapping with H2O then give products 31a.
The Xia group reported a method for oxyazidation of alkenes with TMSN3 and N-hydroxyphthalimide (NHPI) for making 2-azido-2-(phenylethoxy)isoindolinone compounds 32 (Scheme 18) [39]. The oxygen-centered PINO radical generated from NHPI reacts with styrenes; oxidation with PhI(OAc)2 forms the carbocations 33, which react with N3− to give the oxyazidation products 32.

The Dagousset group reported an alkoxy radical-initiated difunctionalization reaction of alkenes for the synthesis of functionalized diethers 34a or amino ethers 34b under batch or flow reaction conditions (Scheme 19) [40]. A radical/cationic addition mechanism was confirmed by electron paramagnetic resonance (EPR) studies. Alkoxy radicals generated from a 4-cyano-substituted N-alkoxypyridinium salt under fac-Ir(ppy)3 catalysis add to alkenes, followed by SET with Ir(ppy)3+ to form the corresponding carbocations. Nucleophilic addition of alcohols or acetonitrile gives difunctionalized products 34.

Scheme 19. Dialkoxylation and aminoalkoxylation of alkenes. * excited catalyst.

The Zou and Zhang groups reported Cu-catalyzed difunctionalization of alkenes with diphenylphosphine oxide (HPOPh2) and trimethylsilyl cyanide (TMSCN) (Scheme 20) [42].
The phosphinoyl radicals generated from the oxidation of HPOPh2 with Mn(OAc)3 react with alkenes; the resulting radicals are oxidized with Cu(II) and then cyanylated with CN− to give products 35.

Wang and coworkers reported Ce-catalyzed phosphinoylation and nitration reactions of alkenes (Scheme 21) [43]. CAN, (NH4)2Ce(NO3)6, was used as the P-radical initiator and also as the nitrate donor to afford β-nitrooxyphosphonates 36. The phosphinoyl radicals generated from phosphine oxides or phosphonates through SET with Ce(IV) add to alkenes to form benzylic cations after a second SET with CAN. Reaction of the benzylic cations with nitrate gives products 36.

Scheme 21. P-radical initiated difunctionalizations of styrenes.

Fang and coworkers reported TBHP- and FeCl3-promoted difunctionalization of alkenes with phosphine oxides or phosphonates and anilines for the synthesis of products 37 (Scheme 22) [44]. In the reaction with diphenylphosphine oxide, phosphinoyl radicals add to alkenes to form carbocations after oxidation with FeCl3. Trapping of the carbocations by amines affords the corresponding α,β-phosphinoamination products 37.

Using a strategy similar to the heteroaryl group migration described in Scheme 14, the Zhu group accomplished phosphinoyl or phosphonyl radical-initiated difunctionalization of γ-hydroxyalkenes (Scheme 23) [45]. In the presence of TBHP, the P-radicals generated from phosphine oxides or phosphonates add to alkenes; 1,4-heteroaryl migration of 39 forms radicals 40, and oxidation of the radicals to cations gives products 38 after proton transfer. In addition to heteroaryl, other groups such as cyano and imino could also be used as R2 for the migration.
The Wu group reported a method for the synthesis of β-halo vinylsulfones through a Cu-catalyzed reaction of alkynes with aryldiazonium tetrafluoroborates and DABCO·(SO2)2 (Scheme 24A) [46]. Arylsulfonyl radicals 42, generated from the reaction of aryldiazonium cations with DABCO·(SO2)2, attack alkynes to form stable alkenyl radicals which are oxidized to alkenyl cations with CuCl, followed by nucleophilic addition of halides to give the β-halo vinylsulfones 41. The research group also extended the scope to aryl alkenes under photoredox conditions, using alkyl nitriles as nucleophiles to afford aminosulfonylation products 43 (Scheme 24B) [47].

Majee and coworkers reported a method for dithiocyanation of alkynes and alkenes with KSCN and Na2S2O8 to give products 44 and 45, respectively (Scheme 25) [48]. In the reaction of alkynes, the thiocyanate radical generated from the oxidation of KSCN adds to alkynes; oxidation and reaction with KSCN then give the dithiocyanated alkenes 44. Other than nucleophilic addition of the thiocyanate anion, coupling with the thiocyanate radical is also a possible pathway.

The Han group developed an electrochemical oxysulfuration reaction of styrenes by addition of sulfenyl radicals and alkoxy nucleophiles (Scheme 26) [49]. The arylsulfenyl radicals generated from thiophenols through SET oxidation at the anode add to alkenes, and the resulting radicals are then oxidized to carbocations at the anode. Trapping of the carbocations by alcohols affords the oxysulfuration products 46.
Guo and coworkers reported an electrochemical sulfonylheteroarylation reaction of alkenes involving heteroaryl group migration (Scheme 27) [50]. The arylsulfonyl radicals generated from sulfinic acids at the anode add to the alkene, and the resulting radicals attack the benzothiazole ring for heteroaryl migration; single-electron oxidation at the anode then affords cations and, after deprotonation, products 47. This ipso-aryl group migration process is similar to those presented in Schemes 14 and 23.

An electrochemical oxidative alkoxysulfonylation of alkenes with sulfonyl hydrazines and alcohols has been reported by the Lei group (Scheme 28) [51]. The reactions were carried out at room temperature, and only molecular nitrogen and hydrogen are released as byproducts. Sulfonyl radicals generated from sulfonyl hydrazines through oxidation at the anode react with alkenes and then undergo a second oxidation at the anode to form benzyl cation intermediates. Nucleophilic attack on the cations by alkoxide anions gives the alkoxysulfonylation products 48. The alkoxide anions are generated from alcohols at the cathode.

The Lei group also reported sulfenyl radical-initiated oxysulfenylation and aminosulfenylation of alkenes using thiophenols/thiols as thiolating agents and alcohols/amines as nucleophiles (Scheme 29) [52].
Under electrochemical reaction conditions, thiophenols/thiols are converted to sulfenyl radicals for addition to alkenes; anodic oxidation of the resulting radicals to cations and nucleophilic addition of alcohols or amines give the oxysulfenylation products 49 and aminosulfenylation products 50, respectively.

Scheme 29. Electrochemical oxysulfenylation and aminosulfenylation of alkenes.

Han and coworkers reported a photoredox-catalyzed chlorosulfonylation reaction of alkynes using sulfonyl chlorides as both the sulfonyl radical and chlorine sources (Scheme 30) [53]. A variety of (E)-selective β-chlorovinyl sulfones 51 were prepared from terminal and internal alkynes.

The Nie and Niu group reported the preparation of β-ketosulfones 52 via graphitic carbon nitride (p-g-C3N4)-photocatalyzed hydrosulfonylation of alkynes in an aerobic aqueous medium (Scheme 31) [54]. The heterogeneous semiconductor is recyclable at least 6 times without significantly reduced activity. Arylsulfonyl radicals generated from the complex of DABCO·(SO2)2 and Ar1-N2BF4 add to arylalkynes to form arylalkenyl radicals and then arylalkenyl cations after SET oxidation. Hydrolysis of the cations gives the β-ketosulfones 52.
Scheme 31. Light-promoted hydrosulfonylation of arylalkynes.

The Zhu group reported photoredox heteroarylsulfonylation and oximinosulfonylation of unactivated alkenes via sulfonyl radical addition followed by heteroaryl or oximino group migration and oxidation of the hydroxyl to a ketone to afford products 53 (Scheme 32) [55]. The reaction process is similar to that presented in Scheme 14.

Fluorocarbon Radical-Initiated Reactions

Fluorine atoms and fluorinated alkyl groups can be used to improve molecules' stability, bioavailability, and other physical, chemical and biological properties [56]. The development of new synthetic methods for introducing fluorine-containing groups, such as CF2R, CF3 and multi-carbon perfluorinated alkyls (Rf), is a topic of current interest in both organic and medicinal chemistry [57]. Radical-initiated difunctionalization is a good approach for the synthesis of fluorinated compounds [58]. Most fluorocarbon radicals presented in this section are generated by metal-catalyzed, photoredox or electrochemical reactions from substrates including Umemoto's [59], Togni's [60], and Langlois' (CF3SO2Na) reagents. Perfluoroalkanesulfonyl chlorides (e.g., CF3SO2Cl), α-bromo-2,2-difluoroacetates, bromodifluoromethylphosphonates, and perfluoroalkyl-containing hypervalent iodines are also good sources of fluorocarbon radicals.
The Koike and Akita group developed a photoredox-catalyzed oxytrifluoromethylation reaction of alkenes using Umemoto's reagent as the CF3 source (Scheme 33) [61]. In this catalytic system, sunlight is an efficient light source. The reaction process includes generation of a CF3 radical from Umemoto's reagent by photocatalytic SET, addition of the CF3 radical to alkenes to form intermediate carbon radicals, oxidation to carbocation intermediates by catalytic SET, and nucleophilic addition of an alcohol to give products 54.

Scheme 33. Photoredox catalyzed oxytrifluoromethylation of alkenes. * excited catalyst.

The Koike and Akita group also reported a photoredox reaction of alkenes using Umemoto's reagent as the CF3 source and Ru(bpy)3(PF6)2 as a catalyst for the synthesis of aminotrifluoromethylation products 55 (Scheme 34A) [62]. The Magnier and Masson group used similar reaction conditions in the synthesis of β-trifluoromethylated azides 56a and amines 56b (Scheme 34B) [63].
The She and Li group reported acyloxytrifluoromethylation of alkenes with the Umemoto II reagent using Ru(bpy)3(PF6)2 as a photoredox catalyst (Scheme 35) [64]. N,N-Dimethylformamide (DMF) was employed as a solvent and also as an acylation reagent. The CF3 radical generated from the Umemoto reagent through catalytic SET adds to arylalkenes to give stable benzyl radicals, which are oxidized to benzyl carbocations, followed by nucleophilic addition of DMF to give imine intermediates that are then hydrolyzed to the desired products 57.

Scheme 35. Acyloxytrifluoromethylation of styrenes. * excited catalyst.

The Koike and Akita group reported a trifluoromethylation-initiated reaction for the synthesis of α-trifluoromethylated ketones 58 (Scheme 36) [65]. The CF3 radical generated from Togni's reagent II through photocatalytic reductive SET of fac-[IrIII(ppy)3] adds to alkenes, and subsequent oxidative SET gives β-CF3-substituted carbocationic intermediates. Nucleophilic attack of DMSO on the carbocations affords alkoxysulfonium intermediates, which then react with o-iodobenzoate in a Kornblum oxidation to give the final products 58.

Scheme 36. Photocatalytic keto-trifluoromethylation of styrenes. * excited catalyst.
The Magnier and Masson group reported a photoredox-catalyzed carbotrifluoromethylation of enecarbamates using Togni's reagent II as the CF3 source and various O-, N-, and C-containing compounds as nucleophiles (Scheme 37) [66]. The CF3 radical generated from Togni's reagent by reductive photoredox SET adds to enecarbamates to form α-amido radicals, which are quickly oxidized to acyliminium cations by a SET process. Nucleophilic trapping by an alcohol, NaN3, or KCN affords the corresponding trifluoromethylated adducts 59a-c.

Scheme 37. Difunctionalization of enecarbamates. * excited catalyst.

The Lei group reported an electrochemical reaction for oxytrifluoromethylation and aminotrifluoromethylation of alkenes using sodium trifluoromethanesulfinate (CF3SO2Na) as the source of the CF3 radical (Scheme 38) [67]. CF3SO2Na is oxidized at the anode via SET to afford the CF3 radical, which adds to styrenes to generate benzylic radicals, followed by oxidation to carbocations. Subsequent nucleophilic attack on the carbocations produces the desired products 60.
Han and coworkers reported a photoredox catalytic reaction for 1,2-chlorotrifluoromethylation of alkenes using CF3SO2Cl as the source of both the CF3 radical and the chloride ion (Scheme 39A) [68]. The CF3 radical generated from CF3SO2Cl through reductive SET of the Ru catalyst adds to alkenes, followed by oxidative SET to carbocations and nucleophilic addition of chloride to give chlorotrifluoromethylated products 61. The Han group extended the scope of the reaction to alkynes to make vicinal chlorotrifluoromethylated alkenes 62 using an Ir-based photoredox catalyst (Scheme 39B) [69]. The resultant trifluoromethyl-substituted vinyl chlorides can be used in Suzuki coupling to make 1,1-bis-arylalkenes.
The Zhu group applied a photoredox catalysis reaction for difluoroalkylarylation of styrenes using α-carbonyl difluoroalkyl bromides as the source of CF2R radicals and indoles as nucleophiles in the synthesis of difluorinated indole derivatives 63 (Scheme 40) [70]. The CF2R radicals generated from α-carbonyl difluoroalkyl bromides through reductive SET with an Ir-photocatalyst add to alkenes, followed by oxidative SET to cations, and then react with indoles to give difluoroalkylated products 63 after base-mediated deprotonation.

Yang and coworkers reported a photocatalyzed reaction for aminodifluoromethylphosphonation of alkenes using bromodifluoromethylphosphonates as the source of the CF2 radical (Scheme 41) [71]. NaI was used as an additive to stabilize the carbocation intermediates and increase the product yields. The difluorocarbon radicals produced through the SET process of bromodifluoromethylphosphonates add to alkenes, followed by SET oxidation to cations for nucleophilic addition with amines to give the products 64 after deprotonation.
The Koike and Akita group reported a photocatalytic oxydifluoromethylation of alkenes using shelf-stable and easy-to-handle N-tosyl-S-difluoromethyl-S-phenylsulfoximine as a CF2H source for the synthesis of β-CF2H-substituted products 65, including alcohols, ethers, and an ester (Scheme 42) [72]. The CF2H radical generated from N-tosyl-S-difluoromethyl-S-phenylsulfoximine by a SET process reacts with alkenes to afford carbocationic intermediates after oxidative SET with Ir(IV), and then solvolysis with ROH produces the oxydifluoromethylated products 65.

The Qing group developed a photoredox-catalyzed bromodifluoromethylation reaction of alkenes using difluoromethyltriphenylphosphonium bromide for generating the CF2H radical and also the bromide anion as a nucleophile (Scheme 43A) [73]. The brominated products 66 can be converted to CF2H-containing alkenes via an elimination process. The same research group also reported an oxydifluoromethylation reaction of styrenes using difluoromethyltriphenylphosphonium bromide as the radical source and alcohols/water as nucleophiles for the synthesis of CF2H-containing alcohols and ethers 67 (Scheme 43B) [74].
The Magnier and Masson group employed S-perfluoroalkyl sulfilimino iminium ions 69 as a source of Rf radicals for oxyperfluoroalkylation of alkenes under photoredox catalysis conditions (Scheme 44) [75]. These stable perfluoroalkyl reagents, containing CF3, C4F9, CF2Br, or CFCl2 groups, can be readily prepared on gram scale from the corresponding sulfoxides. Spin-trapping/electron paramagnetic resonance experiments confirmed that key radical intermediates are involved in the radical/cationic process. Rf radicals generated from 69 by reductive SET add to alkenes to form stabilized benzylic radicals and then the corresponding carbocations after oxidative SET. Final trapping with methanol provides the corresponding products 68.

The Zhu group developed a visible light-promoted difluoroalkylarylation reaction of allylic alcohols for the synthesis of difluoro 1,5-dicarbonyl compounds 70 through a radical addition and 1,2-aryl migration process (Scheme 45) [76]. The initial fluorocarbon radicals generated from ethyl 2-bromo-2,2-difluoroacetate or CF3I under SET add to allylic alcohols, followed by ipso 1,2-aryl migration via spiro[2.5]octadienyl radicals 71 to give hydroxycarbon radicals. Oxidative SET of the radicals to the corresponding carbocations followed by deprotonation gives products 70. The Zhu group extended the scope of the reaction to 1,4-aryl and other group migrations using β-hydroxy alkenes as starting materials for the synthesis of 72a and 72b [77].
The Ma and Li group reported a photoredox-catalyzed 1,6-oxyfluoroalkylation of alkenes using DMSO as an oxidant and also as a solvent (Scheme 46) [78]. The perfluoroalkyl (Rf) radicals are produced from RfBr through photoredox of the Ir-catalyst with the assistance of an Ag salt. The benzylic radicals generated from the addition of Rf radicals to alkenes undergo 1,5-hydrogen atom transfer (1,5-HAT) and then the Kornblum reaction with DMSO to afford 1,6-oxyfluoroalkylated ketones 73.

The Zhu group developed a method to generate the CF3 radical from CF3SO2Na and PhI(O2CCF3)2 for difunctionalization of unactivated alkenes involving distal heteroaryl group migration to form products 74 (Scheme 47A) [79]. Umemoto's reagent is employed as the CF3 radical source for trifluoromethylative alkynylation of unactivated alkenes to form products 75 through a photocatalyzed reaction involving an alkynyl migration (Scheme 47B) [80]. The Wang group reported an electrochemical reaction for fluoroalkylarylation of unactivated alkenes to form products 76 using RfSO2Na as the fluorocarbon radical source (Scheme 47C) [81]. All three reactions involve distal group migration, and their mechanisms are similar to that presented in Scheme 14.
The Studer group reported a method for perfluoroalkyltriflation of alkynes using phenyl(perfluoroalkyl)iodonium triflates 78 in the presence of CuCl to afford selectively E-difunctionalized alkene products 77 (Scheme 48) [82]. Rf radicals generated from 78 through reductive SET with CuCl add to alkynes to form vinyl radicals and then vinylic cations after oxidation. Trapping of the cations with the triflate anion affords the products 77.

Han and coworkers reported a photoredox catalytic ketofluoromethylation of alkynes (Scheme 49) [83]. The CF3 radical generated from Umemoto's reagent through reductive SET adds to alkynes to produce vinyl radicals, which are oxidized to vinyl cations by SET and then trapped by H2O to give products 79 after deprotonation with BF4− and enol-keto tautomerization.

The Zhu and Li group reported photoredox-catalyzed remote ketofluoroalkylation and hydroxytrifluoromethylation of alkynes using RfX or Umemoto's reagent as the fluoroalkyl source and DMSO or H2O as the oxygen source for the synthesis of ε-oxygenated fluoroalkylated (Z)-alkenes 80 and 81 (Scheme 50) [84].
In the reaction with Umemoto's reagent, the CF3 radical generated through SET adds to alkynes to form vinyl radicals, which then undergo 1,5-HAT followed by oxidative SET to form cationic intermediates 82 with a (Z)-alkenyl group. Nucleophilic attack of 82 by DMSO followed by Kornblum oxidation produces trifluoromethyl (Z)-alkenyl ketones 80. Alternatively, using H2O instead of DMSO as the nucleophile, trifluoromethylated (Z)-alkenols 81 are produced from the reaction.

Other Radical-Initiated Reactions

Other than the above-mentioned difunctionalization reactions initiated with C-, N-, O-, P-, and fluorocarbon radicals, we found a single example of an iodo radical-initiated difunctionalization, reported by the Zhu group (Scheme 51) [85]. The reaction of TBHP with I2 generates t-BuOI and HOI. Radical iodination of styrenes followed by oxidation to cations and nucleophilic attack by TBHP gives iodoalkylperoxylated products 83.

Scheme 51. Iodoalkylperoxylation of alkenes.

Conclusions

This article summarizes the radical and nucleophilic difunctionalization reactions of alkenes and alkynes involving a cascade process of radical generation, radical addition, oxidation to cation, and trapping by a nucleophile. This one-pot reaction process demonstrates step and pot economies as well as operational simplicity. A wide range of C-, O-, N-, S-, and P-centered and fluorocarbon radicals are used in combination with C-, N-, and O-centered and halogen nucleophiles for difunctionalizations, demonstrating substrate versatility. Currently active research techniques such as photoredox catalysis and electrochemical reactions have been applied to the difunctionalization reactions.
It is worth noting that, in addition to the radical and nucleophilic difunctionalization reactions presented in this paper, there are other radical-initiated difunctionalization reactions, such as radical chain reactions and radical-radical coupling with or without metal catalysis. In some reaction processes, the radical difunctionalizations could proceed through more than one pathway. These complex reaction processes have attracted the attention of synthetic chemists to conduct mechanistic studies, which could lead to the discovery of new reactions. We have no doubt that radical addition-initiated difunctionalization reactions for making molecules with potential biological, medical, and functional material utilities will become important tools in organic synthesis.
2020-12-31T09:12:26.711Z
2020-12-28T00:00:00.000
{ "year": 2020, "sha1": "0921ae146e9425ddad6c46213fe8263b32bedc82", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/26/1/105/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "328fa1ddb0b669bdc65c543eaa86611f97318001", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
256707405
pes2o/s2orc
v3-fos-license
Connecting Fano interference and the Jaynes-Cummings model in cavity magnonics

We show that Fano interference can be realized in a macroscopic microwave cavity coupled to a spin ensemble at room temperature. Via a formalism developed from the linearized Jaynes-Cummings model of cavity electromagnonics, we show that generalized Fano interference emerges from the photon–magnon interaction at low cooperativity. In this regime, the reflectivity approximates the scattering cross-section derived from the Fano-Anderson model. Although asymmetric lineshapes in this system are often associated with the Fano formalism, we show that whilst Fano interference is actually present, an exact Fano form cannot be achieved from the linear Jaynes-Cummings model. In the Fano model an additional contribution arises, which is attributed to decoherence in other systems, and in this case is due to the resonant nature of the photonic mode. The formalism is experimentally verified and accounts for the asymmetric lineshapes arising from the interaction between magnon and photon channels. As the magnon–photon coupling strength is increased, these channels merge into hybridized magnon–photon modes and the generalized Fano interference picture breaks down. Our results are universally applicable to systems underlying the linearized Jaynes-Cummings Hamiltonian at low cooperativity and connect the microscopic parameters of the quantum optical model to generalized Fano lineshapes.

INTRODUCTION

Resonances in atoms, molecules, or matter reflect many of their properties. Coupling of resonances to the environment or to other resonances leads to emerging phenomena in a large variety of systems. Spurred by the possibility of coherent photon-magnon interaction and the associated potential for quantum information processing, the field of cavity electromagnonics has experienced enormous growth in the past decade 1. A single photonic mode can be strongly coupled to a single magnon mode 2,3. Classical coherence effects reproduce the behavior predicted by quantum optical models both at cryogenic and at room temperatures 3,4. Strong magnon-photon coupling via the so-called cavity-magnon polariton has been vital to reaching photon detection via electrical spin pumping probes 5, dark mode memories 6, and magnon coupling to a qubit 7. Invoking the description of the polariton as a classical interaction between an electromagnetic wave and a matter polarization 8, strong magnon-photon coupling may be understood as the hybridization of the magnonic and photonic modes 9.

For classically coupled harmonic oscillators, interesting behavior of the coupled resonances also emerges when the damping dominates over the coupling strength. For such classical systems, it has been shown that Fano interference occurs 10,11 at and around the exceptional point 12. Based on the quantum picture of magnon-photon interaction, magnetically induced transparency (MIT) and the Purcell effect have been demonstrated 3. These effects may be described as interference phenomena associated with the Fano effect 13. Experiments on magnon-photon interactions show the existence of asymmetric lineshapes, which are associated with Fano resonances 9,13-15.

Our study connects the quantum model of cavity electromagnonics based on the linearized Jaynes-Cummings (JC) Hamiltonian to the Fano interference picture. The model links the cavity electromagnonics and the Fano models in a physically meaningful way.
It also shows that a standard Fano model cannot account for either the cavity resonance or the magnon-photon coupling. We introduce a generalized Fano model that adequately describes the asymmetric lineshapes and reveals the physics that connects the linearized JC model to Fano interference at low coupling strength.

Fano resonances emerge when a discrete state falls within the band of continuum states. The fingerprint of the Fano effect is a characteristic asymmetric lineshape. Coupling between the resonant and continuum states is described by the configuration interaction, which mixes the discrete and continuum states of the system 16. The Fano effect involves interference between these mixed states and a discrete resonant state. Well-defined phases of a background channel and a resonant channel are required to unambiguously identify the Fano effect. The Fano-Anderson Hamiltonian 17 is used to describe the interaction of a discrete magnon state with a photonic continuum 15,

H = E₀b†b + Σ_k E_k c_k†c_k + Σ_k A_k (b†c_k + c_k†b). (1)

Here, b and c are the bosonic operators of the discrete and continuum states, respectively. The mode coupling strength is given by A_k. The scattering cross-section of the Fano resonance is 16

σ(ε) = σ₀ (q + ε)² / (1 + ε²), (2)

where ε = (E − E₀)/γ is the dimensionless resonant channel detuning with resonance energy E₀, q is the asymmetry parameter, γ = Γ/2 is the linewidth of the Fano resonance with damping parameter Γ, and σ₀ is the resonant cross-section. In the standard Fano form q is real. In the case of a complex q̃, as required in the present case, the imaginary part signifies a contribution to the scattering cross-section typically associated with dephasing or dissipation 18,19, such that

σ(ε) = σ₀ [(Re(q̃) + ε)² + Im(q̃)²] / (1 + ε²). (3)

In contrast to the magnon resonance coupled to a photonic continuum in the Fano-Anderson model stands the strong coupling of a two-level system and a harmonic resonance as described by the JC model 3. With appropriate modifications, this JC model is employed to describe magnon-photon coupling in cavity electromagnonics. Specifically, at high polarization, as is present in a homogeneously magnetized ferromagnetic element, a collection of two-level systems and a collection of harmonic oscillators behave identically. This is described by the linearized Holstein-Primakoff transformation 20, which treats magnonic excitations as non-interacting bosonic quasiparticles. The so-called linearized JC Hamiltonian underlying the linearized Holstein-Primakoff transformation is written as

H/ħ = ω_c a†a + ω_m b†b + g(a†b + ab†), (4)

which is a common starting point in the field of cavity electromagnonics and also applies to the system studied in this work. Here, g is the coupling strength in terms of frequency. The two resonances can reversibly interchange energy in the case of strong coupling, which leads to an avoided crossing of the modes and Rabi oscillations. However, when damping dominates, the magnon-photon interaction becomes irreversible. A striking difference between the linearized JC model and the Fano-Anderson model is that in the latter the resonance lies within a continuum of states and energy exchange is intrinsically irreversible 21. Similarity of the two models should therefore be expected when the coupling strength g of the linearized JC model is low and the magnon-photon system does not periodically exchange energy. In this work, we show that Fano interference emerges in the lineshapes of the reflectivity of a coupled magnon-photon system described by the linearized JC Hamiltonian at low coupling strength.
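A minimal single-port input-output sketch of such a reflectivity, assuming critical coupling, half-width conventions for the cavity and magnon damping rates κ and γ, detunings Δ_c = ω_c − ω and Δ_m = ω_m − ω, and an overall sign convention that may differ from the exact expression referred to as Eq. (5) below, reads:

\[
r(\Delta_m,\Delta_c) \;=\; 1 \;-\; \frac{\kappa}{\,i\Delta_c + \kappa + \dfrac{g^2}{\,i\Delta_m + \gamma\,}\,}.
\]

For g → 0 and Δ_c = 0 this gives r = 0, i.e., full absorption of the critically coupled empty cavity; the magnon contributes the additional term g²/(iΔ_m + γ), which generates the Purcell and MIT features analyzed in the following.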
We measure the evolution of Fano interference in a coupled magnon-photon system usually used to investigate the cavity-magnon polariton. In addition, we describe the transition of the system from the Fano description of interference between channels to hybridization of modes as the cooperativity is increased. The resonant nature of the cavity generates an imaginary part of the asymmetry parameter q̃. The Fano description of interfering channels only becomes visible within the generalized Fano model developed here. Although the model highly resembles the typical Fano scattering formalism, we show that only the generalized Fano model accounts for the magnon-photon interactions in the JC model at low cooperativity.

The reflectivity from a microwave cavity in which the interaction between a photon and a magnon is described by the linearized JC Hamiltonian can be expressed in terms of the cavity detuning Δ_c = ω_c − ω, the magnon detuning Δ_m = ω_m − ω, the magnon damping rate γ, the cavity damping rate κ, and the coupling strength g. The energy-dependent reflectivity at critical coupling according to the linearized JC Hamiltonian, Eq. (5) 3, has been obtained via the Heisenberg-Langevin approach using the input-output formalism. The latter describes the interaction between a cavity system and its environment by allowing the system to couple to a continuum of environmental modes. The reflectivity in Eq. (5) arises in all cavity-based systems underlying the linearized JC model and hence occurs in a multitude of systems. For instance, this behavior is found in quantum dots coupled to a cavity 22, a condensate of 87Rb atoms coupled to an optical cavity 23, an ensemble of nitrogen vacancy centers in diamond interacting with a superconducting resonator 24, as well as Mössbauer nuclei embedded in a cavity for X-rays 25,26. In this work, an explicit connection between Eq. (5) and a generalized form of Fano interference will be presented.

Generalized Fano form

The absolute value of the reflectivity |r(Δ_m, Δ_c)| of a t = 10 nm thick permalloy (Ni80Fe20) film inserted into the cavity is shown in Fig. 1a. The linewidth of magnetic damping of the permalloy used in this study is γ ≈ 200 MHz and thus much larger than the linewidth of the cavity, κ ≈ 1 MHz. The coupling strength g is determined by the amount of magnetic material and the spin density 3 of permalloy and is of the order of several MHz, so that the experiment is performed in the Purcell regime where γ > g > κ. Thus, the cooperativity C = g²/(κγ) is always below one. Fig. 1b shows the fit of the reflectivity via Eq. (5). The fit parameters include γ, κ, and g and are listed in the Supplementary Table 1. The best-fit values confirm that cavity photon-magnon interactions occur in the Purcell regime and that the coupling strength g is proportional to the square root of the permalloy thin-film thickness, as expected.

Field-swept lineshapes for fixed cavity detuning are shown for a few cavity detunings in Fig. 1c, e-h. On cavity resonance (Δ_c = 0, Fig. 1g), the lineshape is a symmetric peak. Measurements are conducted for a magnonic detuning range Δ_m determined by an external magnetic field, which sets the ferromagnetic resonance frequency ω_m governed by the Kittel formula for an extended soft ferromagnetic thin film. When the cavity is detuned, the lineshape acquires an asymmetric shape that resembles Fano interference.
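Plugging in representative numbers makes the Purcell-regime ordering concrete. Taking g = 5 MHz as an assumed illustrative value within the quoted "several MHz", together with κ ≈ 1 MHz and γ ≈ 200 MHz:

\[
C \;=\; \frac{g^2}{\kappa\gamma} \;\approx\; \frac{(5\ \mathrm{MHz})^2}{(1\ \mathrm{MHz})\,(200\ \mathrm{MHz})} \;\approx\; 0.13 \;<\; 1,
\]

consistent with γ > g > κ and with a cooperativity below one.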
Although Eq. (5) describes the reflectivity well, it gives neither insight into whether the fixed frequency lineshapes are due to Fano interference nor into whether a connection between the linearized JC model and the Fano model exists. To obtain this insight, the squared amplitude of the reflectivity of Eq. (5) is transformed to a generalized Fano form, Eq. (6) (see Supplementary Methods), which describes the fixed frequency lineshapes as a function of the dimensionless magnon detuning ε = −Δ_m/γ in all coupling regimes. The resonant scattering cross-section σ₀ = (1 + κ²/Δ_c²)⁻¹ gives the inverse Lorentzian resonant reflection of the cavity, on which the spectral signature of the magnon-photon interaction is imprinted. Here, the complex asymmetry parameter q̃ = Cκ/Δ_c + i determines the asymmetry of the fixed frequency lineshapes via Re(q̃), which depends inversely on cavity detuning. The imaginary part of the asymmetry parameter is Im(q̃) = 1 and hence does not exhibit a functional dependence on external parameters. The parameter η = √(1 + f(ε, Δ_c, C)) depends on the cavity and magnon detuning as well as on the cooperativity. For details of the calculation explicitly connecting the reflectivity to the generalized Fano form see the Supplementary Methods.

The additional term η approaches η² ≈ 1 for decreasing coupling strength but generally introduces a difference to the standard Fano model. It describes the coupling of magnon and photon modes. Because Eq. (6) is a reformulation of the reflectivity given in Eq. (5), this formula is valid in all coupling regimes. The experimentally observed fixed frequency lineshapes presented in Fig. 1 are fit using Eq. (6), showing excellent agreement. The inverse dependence of Re(q̃) on cavity detuning Δ_c resulting from the fits of Re(q̃) to the lineshapes is shown in Fig. 2 (blue circles). The exact same dependence is observed with the cooperativity and damping obtained from fits of the reflectivity via Eq. (5) (red line, see Supplementary Methods). We note that such an inverse dependence of the asymmetry parameter on the cavity detuning in the cavity reflectivity is also evident in Fano lineshapes between hard X-ray cavities and the Mössbauer resonance of 57Fe 26. The linear dependence of Re(q̃) on the cooperativity C is visible in Fig. 2d.

Equation (6) exhibits a strong similarity to the description of Fano interference in the presence of decoherence, namely Eq. (3) with a complex q̃. For increased cooperativity or magnon detuning, the deviation from Fano lineshapes becomes more pronounced. At low cooperativity, however, η² ≈ 1 and the standard complex Fano relation (3) is recovered. Then, the first term in the parentheses represents the coherent part of the standard Fano form. The second term, connected to the imaginary part of q̃, is a Lorentzian that peaks on magnon resonance. As Im(q̃) is a constant, the relative coherent contribution to the fixed frequency lineshapes is set by Re(q̃). The ratio of both contributions to the generalized Fano lineshape is given by Re(q̃)/Im(q̃) = g²/(γΔ_c), which denotes that as the cavity detuning Δ_c increases, the relative contribution of Im(q̃) to the spectral signature increases. The constant form of Im(q̃) stands in contrast to other studies of generalized Fano interference in which phase shifts, dephasing, or dissipation are introduced that change both the real and imaginary parts of q̃ 18,27,28.
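One algebraic form consistent with the limits quoted above (it reduces to the complex-q̃ relation (3) for η → 1 and to |r(ε)|² ≈ (ε² + 1)/(ε² + η²) at large cavity detuning) is the following sketch; this is an inference from the stated properties, not the exact Eq. (6), whose full form including f(ε, Δ_c, C) is given in the Supplementary Methods:

\[
|r(\varepsilon)|^2 \;=\; \sigma_0\,\frac{\left(\varepsilon + \mathrm{Re}\,\tilde q\right)^2 + \left(\mathrm{Im}\,\tilde q\right)^2}{\varepsilon^2 + \eta^2}, \qquad \tilde q = \frac{C\kappa}{\Delta_c} + i.
\]

The quoted ratio of the coherent and Lorentzian contributions then follows directly from the definition of the cooperativity:

\[
\frac{\mathrm{Re}\,\tilde q}{\mathrm{Im}\,\tilde q} \;=\; \frac{C\kappa}{\Delta_c} \;=\; \frac{g^2}{\kappa\gamma}\cdot\frac{\kappa}{\Delta_c} \;=\; \frac{g^2}{\gamma\,\Delta_c}.
\]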
The coupling between a single magnon and a single photon mode makes the magnon-photon system fundamentally different from the Fano interference encountered when a matter resonance lies within a continuum of states. The JC model is coherent, with dissipation handled via the input-output formalism and the intrinsic damping constants of the modes. Thus, we do not expect additional decoherence, explaining the constant form of Im(q̃). The contribution of Im(q̃) to the generalized Fano lineshapes is attributed to the resonant nature of the photonic mode. The resonance of the cavity signifies that photons are increasingly reflected by the cavity as it is detuned. This is described via the scattering cross-section σ₀, which represents the spectral response of the empty microwave cavity. Hence, σ₀ describes the probability of a photon entering the cavity as a function of cavity detuning Δ_c, with fewer and fewer photons able to enter the cavity as Δ_c increases. However, the behavior of σ₀ does not explain the detuning-dependent size of the spectral feature in the fixed-frequency lineshapes. For large detuning of the cavity, all incoming photons are reflected and a flat line without any spectral feature evolves. In the limit of large cavity detuning, the reflectivity is |r(ε)|² ≈ (ε² + 1)/(ε² + η²), which approaches unity for η² ≈ 1, and the spectral features vanish. A functional dependence of the real and imaginary parts of q̃ on each other, as in cases of incoherent processes 27, is not present here. In the present case, the imaginary part is essential to obtain the correct reflectivity of the cavity and the spectral features but is not related to decoherence. It rather is the addition of the resonant photonic mode behavior to the Fano model that normally describes a photonic continuum. The factor η emerges from magnon-photon coupling, which also stands in contrast to the Fano picture of interfering and non-coupled channels. The mode coupling is actually responsible for the dip-like spectral feature at large cavity detuning, which cannot be explained by the vanishing real part if the imaginary part is unity 18.

Fano interference

Central to the Fano lineshape is its interpretation as an interference phenomenon between two scattering channels with well-defined phases. The magnon-photon coupling regime for which the lineshape may be interpreted as Fano interference is determined by the cooperativity and the detunings via η(ε, Δ_c, C). Under the assumption that η is real, the fixed frequency lineshapes can be written in an interference picture by reformulating Eq. (6) into Eq. (7), where φ_c = −arg(Re(q̃) − iη) and φ_m = arg(ε − iη) are the cavity and magnon channel phases, respectively. The asymmetry parameter is thus mapped to a universal phase factor 29. This equation is valid in the Purcell and MIT regimes. The first term in the parentheses describes Fano interference, whereas the last term adds the resonance of the cavity to the Fano model. In the spirit of previous works on Fano resonance 26, we interpret the two terms in absolute value bars as interference between a photonic background channel set by the cavity and the discrete magnon resonance coupled to the cavity. This separation is possible because at low coupling strength the modes can be treated as almost independent. Radiation is input into the cavity at a fixed cavity detuning value Δ_c while the magnon channel phase φ_m progresses from −π to 0 as the magnonic detuning is swept through the magnon resonance.
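The limiting behaviors of the two channel phases follow directly from these definitions; a short check, assuming η ≈ 1 and Δ_c > 0 (for Δ_c < 0 the sign of Re(q̃) flips):

\[
\varphi_m = \arg(\varepsilon - i\eta): \quad \varepsilon \to -\infty \Rightarrow \varphi_m \to -\pi, \qquad \varepsilon = 0 \Rightarrow \varphi_m = -\tfrac{\pi}{2}, \qquad \varepsilon \to +\infty \Rightarrow \varphi_m \to 0;
\]
\[
\varphi_c = -\arg(\mathrm{Re}\,\tilde q - i\eta): \quad \Delta_c \to 0 \Rightarrow \mathrm{Re}\,\tilde q \to \infty \Rightarrow \varphi_c \to 0, \qquad \Delta_c \to \infty \Rightarrow \mathrm{Re}\,\tilde q \to 0 \Rightarrow \varphi_c \to \tfrac{\pi}{2}.
\]

The π/2 limit matches the transmission case discussed below, for which Re(q̃) = 0 at all detunings.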
The interference between these channels results in characteristic Fano resonances with the asymmetry defined by the background phase φ_c, which is set by the cavity detuning. The phase of the cavity background channel is set by its detuning, and thus the asymmetry can be controlled. The situation differs when the transmission spectrum is calculated. In this case, one obtains a spectrum similar to Eq. (6) but with the scattering cross-section for transmission σ₀^t = (1 + Δ_c²/κ²)⁻¹ and with a constant asymmetry parameter Re(q̃) = 0. For transmission, the cavity phase is always π/2 regardless of the cavity detuning. The difference between the transmission and reflection channels can be understood in terms of their interference with the magnon resonance in the cavity (see Supplementary Methods).

For the 10 nm film, the cavity phase φ_c determined from the fits shown in Fig. 1 is plotted in Fig. 3d, e as a function of Re(q̃) and Δ_c. Of the measured samples, only for the 10 nm film is the cooperativity small enough that no variation in φ_c occurs in the investigated detuning range. In the Fano model, the background phase sets the form of the spectrum and yields asymmetric fixed frequency lineshapes when φ_c is not a multiple of π/2. In the model, φ_c is independent of the resonant channel detuning. Despite deviations from the standard Fano formalism with a purely real q that are caused by the imaginary part due to the cavity, Fano interference between the magnon and photon channels emerges at low cooperativity.

At zero cavity detuning, photon input into the cavity is maximal and the standard Fano term alone determines the fixed frequency lineshapes. In this limit, the photonic mode mimics a broad continuum of modes without any influence of the imaginary part as Re(q̃) → ∞, because no photons are reflected and all incoming photons probe the magnon-photon interaction. The lineshape reduces to a Lorentzian as predicted by Eq. (2), which is indicative of the Purcell broadening of the cavity resonance. Rapid changes in the cavity phase in the region |Δ_c| ≲ κC yield a rapid transformation from a Lorentzian to an asymmetric lineshape. These lineshapes exhibit a large contribution from Re(q̃), with values decreasing to unity. The increased reflection of photons from the cavity modifies the spectral features due to the cavity resonance via Im(q̃). The cavity phase is fixed for each lineshape. In the limit of very large cavity detuning, |Δ_c| → ∞, all photons are reflected and no spectral signatures arise (not shown). At intermediate cavity detunings |Δ_c| ≳ κC (shown at Δ_c = ±1.2 MHz in Fig. 1), almost symmetric dips occur, with their amplitude depending on the mode coupling term η. These dips are understood in terms of the Purcell effect, where the photonic dissipation is altered while the magnon is swept over its resonance.

When the cooperativity is increased, as for the 40 and 200 nm films, η becomes purely imaginary for certain magnon and cavity detunings. In Fig. 3a-c, the cavity phase φ_c calculated from the parameters obtained from the fit plotted in Fig. 1 is shown for permalloy films with thicknesses of 10, 40, and 200 nm. For the 10 nm thick film, φ_c is effectively constant over the investigated magnon detuning range, so that the Fano interpretation of Eq. (7) is valid. In contrast, for the 200 nm thick film φ_c is no longer independent of magnon detuning. Rather, the cavity phase depends on magnon detuning via the mode coupling term η.
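A quick consistency check on the two empty-cavity cross-sections quoted above, substituting x = κ²/Δ_c² for compactness:

\[
\sigma_0 + \sigma_0^{t} \;=\; \frac{1}{1 + \kappa^2/\Delta_c^2} + \frac{1}{1 + \Delta_c^2/\kappa^2} \;=\; \frac{1}{1+x} + \frac{x}{1+x} \;=\; 1,
\]

so the inverse-Lorentzian reflection dip and the Lorentzian transmission peak of the empty cavity are exactly complementary.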
Here, photon-magnon hybridization starts to manifest, so that the physical interpretation of Fano interference of two weakly coupled and interfering channels is no longer accurate. Fano interference can be recovered at low cooperativity only.

DISCUSSION

The results presented here show that at low cooperativity, the linearized Jaynes-Cummings (JC) model leads to lineshapes that strongly resemble those characteristic of Fano interference. The similarity of the Fano scattering cross-section as derived from the Fano-Anderson Hamiltonian and the linearized JC Hamiltonian at low coupling strength is evident. A common thread between these models is the scattering of the incoming photon via a mixed state containing both discrete-state and continuum-state contributions. However, while the linearized JC model describes scattering into a single photonic mode, the Fano-Anderson model describes scattering into a continuum of photonic modes. At low cooperativity, the magnon-photon coupling from the linearized JC Hamiltonian is barely reversible and thus almost coincides with the irreversible scattering into a continuum in the Fano-Anderson model. The main difference is the contribution that arises from the resonant nature of the photonic cavity mode, which stands in contrast to the dispersionless continuum. At higher cooperativities the magnon-photon modes hybridize, eventually merging into the strong coupling regime, and the Fano interpretation breaks down.

Asymmetric lineshapes are not necessarily connected to Fano interference, even though the equations resemble the Fano form (see Eq. (5)). Only the interference picture of Eq. (7) gives insight into whether the underlying physics is based on Fano interference. Especially in the context of two coupled resonances, the resonant photonic mode of the cavity and the mode coupling have to be accounted for properly. Control of the Fano parameters is given by the cavity's detuning dependence of the scattering cross-section σ₀ and the background channel phase φ_c.

In conclusion, we show that a generalized Fano form emerges from the linearized JC model and verify this experimentally in a microwave cavity coupled to the Kittel mode of a permalloy film in the Purcell regime. Our model connects the microscopic parameters of the linearized JC model to the phenomenological parameters of the generalized Fano form and uncovers magnon-photon coupling at low cooperativity as interference between scattering of a background cavity channel and the magnon channel. These two channels have well-defined phases in accordance with the Fano interference picture. This is remarkable, as it shows that the physics of Fano interference surfaces even though the system consists of two coupled modes with finite linewidths. We thus open a new perspective on the connection between the Fano-Anderson and the linearized JC models at low cooperativities. Depending on the coupling strength, the magnon-photon coupling can be understood in terms of either Fano interference or mode hybridization. Finally, the linear JC model describes many types of systems in which a cavity mode interacts weakly with a matter-based harmonic resonance. The wide applicability of the linear JC model hence makes the results presented here important to many areas of physics, including hard X-ray quantum optics, atomic interactions with optical cavities, and cavity electromagnonics.
Experimental setup To experimentally obtain Fano profiles, we perform reflectivity measurements of a microwave cavity in which microwave photons are coupled to magnons in a metallic, ferromagnetic thin film. We measure the complex reflectivity of the microwave cavity using a vector network analyzer (S11 parameter), with a permalloy thin film placed in a rectangular microwave cavity operated in the TE101 mode at ω_res/2π ≈ 3090 MHz. A static magnetic field H_ext biases the permalloy thin film to set the magnonic resonance frequency in accordance with the Kittel relation for a magnetic thin film 30. Measurements are performed for a cavity detuning of Δ_c = ±10 MHz around the cavity resonance frequency of 3090 MHz and for a magnonic detuning range Δ_m determined by the external magnetic field, which sets the ferromagnetic resonance frequency. The microwave cavity is machined out of oxygen-free copper and has a linewidth κ/2π (HWHM) of 1.13 MHz, so that the cavity has a Q-factor of ≈2700 at room temperature. Power is input into the cavity via a terminal stub, the length of which has been adjusted to ensure critical coupling. The complex reflectivity of the cavity is obtained for permalloy films of 10, 40, and 200 nm thickness placed in the cavity. The absolute value of the reflectivity on the detuning grid is shown in Supplementary Fig. 2, where a fit performed using Eq. (5), together with the fit parameters and further details of the experiment, is shown.
Fig. 3 The Fano interference picture. The cavity background channel phase φ_c as calculated from Eq. (7) with the experimentally determined parameters is shown as a function of the cavity and magnon detunings for (a) a 10 nm, (b) a 40 nm, and (c) a 200 nm thick permalloy film inserted into the cavity. The white regions in (b) and (c) correspond to regions in which η becomes imaginary. d, e The cavity phase for the 10 nm film at zero dimensionless magnon detuning as a function of cavity detuning and of Re(q), respectively. The blue open dots represent the cavity phase calculated on the basis of fits of the fixed-frequency lineshapes, whereas the red lines represent the cavity phase of the theoretical Fano interference picture with η = 1. f The cavity phase at cavity detuning Δ_c = −0.2 MHz for the three thicknesses investigated. Note that only for t = 10 nm is the cavity phase essentially constant as a function of magnon detuning.
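As a rough companion to the setup description, the sketch below evaluates a cavity reflection coefficient in the standard input-output form for a single cavity mode coupled to one magnon mode; the functional form and the symbols κ_ext, κ, γ_m, and g are common conventions assumed here rather than the paper's exact Eq. (5), and the numerical values for γ_m and g are merely illustrative:

```python
import numpy as np

def s11(omega, omega_c, omega_m, kappa_ext, kappa, gamma_m, g):
    """Cavity reflection in standard input-output form (an assumed model):
    S11 = 1 - kappa_ext / (i(omega-omega_c) + kappa + g^2/(i(omega-omega_m) + gamma_m)).
    kappa and gamma_m are HWHM linewidths; critical coupling means kappa_ext = kappa."""
    magnon_term = g ** 2 / (1j * (omega - omega_m) + gamma_m)
    return 1.0 - kappa_ext / (1j * (omega - omega_c) + kappa + magnon_term)

two_pi = 2.0 * np.pi
omega = two_pi * np.linspace(3080e6, 3100e6, 4001)      # rad/s frequency sweep
refl = s11(omega, two_pi * 3090e6, two_pi * 3091e6,
           two_pi * 1.13e6, two_pi * 1.13e6,            # critical coupling
           two_pi * 5e6, two_pi * 0.5e6)                # assumed gamma_m and g
print(f"minimum |S11| over the sweep: {np.abs(refl).min():.3f}")
```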
2023-02-10T14:06:12.667Z
2021-07-19T00:00:00.000
{ "year": 2021, "sha1": "7a939229bff200c14bd3c880c114e4ad8fb2771e", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41534-021-00445-8.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "7a939229bff200c14bd3c880c114e4ad8fb2771e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
20854226
pes2o/s2orc
v3-fos-license
Genistein Stimulates Hematopoiesis and Increases Survival in Irradiated Mice Radiation protection from death and stimulation of hematopoietic recovery by oral administration of genistein, 160 mg/kg b.w., once daily for seven consecutive days before whole-body γ-ray irradiation, were confirmed in tests with adult male BALB/c mice. Moreover, the protective action of genistein was compared to that of diethylstilbestrol (DES). Based on studies of survival, hemogram behavior, endogenous hematopoietic spleen colony formation (endoCFUs), and the numbers of nucleated cells and granulocyte-macrophage colony-forming units (CFU-GM) in bone marrow following irradiation, it was demonstrated that genistein is an effective radioprotector. The survival of irradiated mice protected by genistein was significantly increased and statistically higher than that of mice pre-treated with DES. Stimulated recovery of leukocytes, erythrocytes, lymphocytes and thrombocytes was observed in mice pre-treated with genistein or DES; however, the effects of genistein on promoting the recovery of bone marrow nucleated cells, leukocytes and lymphocytes were significantly greater than those of DES. Enhanced endoCFUs and increased numbers of bone marrow nucleated cells and CFU-GM were also found in mice pre-treated with genistein as well as with DES. Meanwhile, the endoCFU number in mice pre-treated with genistein was 3.47-fold higher than that in the irradiated control group, although no significant difference was found between genistein and DES administration. It can be deduced that the radioprotective action against death arises from enhanced regeneration of the hematopoietic stem cells, due not only to strengthened radioresistance and increased numbers of remaining hematopoietic cells, but also to enhanced post-irradiation repair and promoted proliferation of the hematopoietic stem cells. These effects of genistein may have therapeutic implications for radiation-induced injuries. INTRODUCTION The hematopoietic system, as well as the hematocytes, is known to be sensitive to radiation, and even low doses of radiation can induce damage. Radioprotective agents are those that are administered before exposure to ionizing radiation to reduce the damaging effects, including radiation-induced lethality. 1) Many synthetic and natural agents have been investigated in recent years for their efficacy as protectors against radiation injuries. 2) Among the radioprotective compounds, estrogens have been extensively studied. Both natural estrogens like estradiol and synthetic estrogens like diethylstilbestrol (DES) exerted radioprotective actions against radiation sickness in experimental animals, including improving survival and accelerating the recovery of hematopoiesis. [3][4][5] Moreover, estrogens also ameliorated the hematopoietic suppression induced by cancer radiotherapy or chemotherapy in the clinic. 6,7) However, the inherent toxicities of these agents at radioprotective concentrations warranted a further search for safer and more effective radioprotectors. 8,9) Genistein (4′,5,7-trihydroxyisoflavone), a naturally occurring isoflavone found in soybeans, has structural similarity to 17β-estradiol but rather weaker estrogenic activity (10⁻²- to 10⁻³-fold).
10) Many studies have demonstrated that genistein, as one of the most important phytoestrogens, has no toxicity to human health at pharmacological concentrations and possesses the potential to act as both an estrogen and an anti-estrogen, to inhibit the activities of tyrosine kinase and DNA topoisomerase II, and to improve the immune system. [10][11][12] Consequently, it has gained increasing attention because of its association with beneficial effects on persons with breast cancer, prostate cancer, cardiovascular disease, high cholesterol levels and osteoporosis. [13][14][15] Moreover, the isoflavone is an effective antioxidant that can eliminate free radicals and boost antioxidant enzyme activities, and it provided protection against ultraviolet-B radiation when applied to the skin of hairless mice 1 h before exposure. 16) Genistein also reduced the frequency of micronucleated reticulocytes 17) and increased the survival of sublethally irradiated mice without exhibiting estrogenic actions on the reproductive system. 18) The purpose of this paper was to study the in vivo radioprotection by genistein of hematopoietic recovery, contributing to improved survival of sublethally irradiated mice. Materials Male BALB/c mice (10-12 weeks old, weighing 25 ± 2 g) were purchased from the Center of Laboratory Animals of the Third Military Medical University. All materials were purchased from Sigma-Aldrich (St. Louis, Missouri, USA), except for the materials stated below. Genistein was purchased from Baoshai Biotechnology Co. of Xi'an Jiaotong University (Shanxi, China). Iscove's Modified Dulbecco's Medium (IMDM) and fetal bovine serum were purchased from Hyclone (Logan, UT, USA). Plastic culture flasks and dishes were purchased from Corning Incorporated Life Sciences (Acton, MA, USA). Radiation and Administration According to our preliminary studies, genistein was dissolved in sesame oil and administered orally at 160 mg/kg b.w., once daily for seven consecutive days before irradiation. DES was injected subcutaneously 24 h before irradiation as a single dose of 5 mg/kg b.w. 4) Control animals received sesame oil orally or saline by injection, respectively, in the same volume and at the same time as the treated groups. Therefore, the mice used for this study were divided into five groups: normal non-irradiation control (Group N), sesame oil plus irradiation control (Group O), genistein plus irradiation group (Group G), saline plus irradiation control (Group S), and DES plus irradiation group (Group D). Mice were quarantined for a period of 2 weeks and were housed in rodent cages with five to seven animals per cage at about 23 °C with a relative humidity of 50%; they were maintained under controlled conditions with standard mouse food and water ad libitum. After treatment, mice were placed in Plexiglas containers and exposed whole-body to 6.0 Gy of gamma rays (98.01-98.68 cGy/min) from a ⁶⁰Co source. Hematologic examinations Whole blood was collected from the tail ends of mice on different days following irradiation, and the fluctuations of the hemograms, including leukocytes, erythrocytes, lymphocytes and thrombocytes, were counted automatically with a hematocyte counter. The blood count response was expressed as a percentage of the normal count determined 1 day before irradiation. Average values for each group were obtained from eight mice per group, and the same eight mice were not sampled again until 10 days later to avoid the influence of infection.
When hemogram changes could not be counted for mice that died, the treatment was repeated to increase the number of mice available for statistical analysis. Each treatment group consisted of 250-300 mice, and surviving mice were euthanized by cervical dislocation on day 31. Haemopoietic stem cell assays Endogenous hematopoietic spleen colony formation (endoCFUs) was assessed according to the method of Till and McCulloch. 19) Briefly, endoCFUs were determined on day 10 after irradiation. Mice were sacrificed by cervical dislocation; their spleens were removed and fixed in Bouin's fixative for 24 h. The number of macroscopic spleen colonies was then scored. Bone marrow cells were obtained from anesthetized mice by aseptic isolation of the femurs followed by flushing of the marrow with IMDM medium, using a 25-gauge needle. The cells were suspended in the medium, and single-cell suspensions were made. Survival assays Survival was monitored daily and reported as the percentage of animals surviving 30 days after irradiation. Each treatment group consisted of 30 mice. The dying animals in this experiment were killed when moribund. On day 31, surviving mice were euthanized by cervical dislocation. Data were expressed as % survival. Statistical analysis All experimental data were expressed as mean ± standard deviation and statistically analyzed with an ANOVA test followed by the Newman-Keuls test. The chi-square test was employed to assess the statistical significance of the thirty-day survival rate of irradiated mice. Statistical significance was assumed at p values less than 0.05.
Fig. 2. Leukocyte counts on different days after irradiation in various groups (%). Leukocyte percentages were calculated from the pre-irradiation values taken as 100%. The bars represent standard deviation. a p < 0.05, compared with group N; b p < 0.05, compared with group O; c p < 0.05, compared with group S; * p < 0.05, compared with group D.
Fig. 3. Erythrocyte counts on different days after irradiation in various groups (%). Erythrocyte percentages were calculated from the pre-irradiation values taken as 100%. The bars represent standard deviation. a p < 0.05, compared with group N; b p < 0.05, compared with group O; c p < 0.05, compared with group S.
Survival rate of mice after irradiation It followed from the results that mortality increased markedly in all irradiated groups, and most mice died within 7-14 days following irradiation. As illustrated in Fig. 1A, the percentages of mice surviving after 30 days, by group, were group S, 15.56%; group O, 16.67%; group G, 53.33%; group D, 45.56%. This shows that the 30-day survival of group G was significantly higher than that of the other groups. The survival curves illustrate that, compared with the control group data, the time to death was significantly shifted to the right for mice pre-treated with genistein and DES, respectively (Fig. 1B). These results demonstrate that genistein possessed high radioprotective efficacy in preventing mortality in sublethally irradiated mice, and that its protective action was superior to that of DES. Hematologic Examinations Leukocyte, erythrocyte, thrombocyte and lymphocyte counts are shown in Figs. 2-5, respectively. As indicated by the percentages of the normal count (determined 1 day before treatment), the hemograms of peripheral blood changed markedly in all irradiated groups because of the hematopoietic damage caused by irradiation. In Fig. 2, leukocyte counts declined rapidly and then recovered gradually from day 9 following irradiation.
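A minimal sketch of the statistical pipeline just described, assuming hypothetical per-mouse counts (the arrays below are illustrative, not measured data) and n = 90 animals per group, which is consistent with the reported survival percentages; SciPy has no built-in Newman-Keuls post hoc test, so only the ANOVA and chi-square steps are shown:

```python
import numpy as np
from scipy import stats

# Hypothetical leukocyte counts (% of the pre-irradiation value) on one
# sampling day for three irradiated groups (eight mice per group).
group_o = np.array([74.1, 79.3, 76.5, 80.2, 77.8, 75.9, 81.0, 78.2])
group_g = np.array([96.5, 101.2, 99.8, 103.4, 98.1, 100.6, 97.3, 102.0])
group_d = np.array([90.1, 94.5, 92.3, 95.8, 91.7, 93.2, 89.8, 96.0])

# One-way ANOVA across the groups (the Newman-Keuls post hoc step is omitted).
f_stat, p_anova = stats.f_oneway(group_o, group_g, group_d)

# Chi-square test on 30-day survival: 16.67% of 90 = 15 survivors in group O
# versus 53.33% of 90 = 48 survivors in group G.
table = np.array([[15, 75],    # group O: survived, died
                  [48, 42]])   # group G: survived, died
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
print(f"ANOVA p = {p_anova:.4g}, chi-square p = {p_chi2:.4g}")
```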
Throughout the post-irradiation period, the recovery of leukocytes in mice of group G was significantly faster than that of groups O and S. On day 30 after irradiation, the leukocyte counts of group G returned to the normal level, compared with group O values of 78.5% and group S values of 75.4%. Stimulated leukocyte recovery was also found in mice protected by DES, but its stimulating action appeared weaker than that of genistein at days 21 and 30 following irradiation, namely 90.5% and 102.8% for group G compared with group D values of 77.2% and 93.8%. Figure 3 shows that the sublethal dose of acute irradiation caused erythrocyte numbers to decrease slowly, reaching a minimum at day 14 after irradiation. The administration of genistein as well as of DES before irradiation exerted active effects on the recovery from radiation damage by increasing erythrocyte numbers. Significant differences from the irradiated controls were seen at days 14 and 21, namely 56.9% and 85.7% for group G compared with group O values of 40.1% and 66.3%. When compared to DES, this protective effect of genistein was not statistically significant. Figure 4 illustrates that thrombocyte counts decreased in a time-dependent manner and reached a minimum at day 14 after irradiation. Attenuation of the decrease and stimulated recovery of thrombocytes were found in mice pre-treated with genistein and DES, respectively. From day 6 after irradiation, the number of thrombocytes in group G was statistically higher than that in group O and reached approximately 96.24% of the normal range, compared with 80.45% in group O, on day 30 after irradiation. Enhanced thrombocyte numbers were also found in mice of group D. In fact, there were significant differences between groups G and D at days 14 and 21 after irradiation, showing that the protective effect of genistein on thrombocytes was weaker than that of DES. Figure 5 shows that lymphocyte numbers decreased rapidly and reached a minimum at day 6 after irradiation. Attenuation of the decrease and stimulated recovery of lymphocytes were found in mice pre-treated with genistein and DES, respectively. Interestingly, the protective effect of genistein on stimulating the recovery of lymphocytes was stronger than that of DES. At days 21 and 30 following irradiation, the number of lymphocytes in group G was much higher than that in group D, namely 75.4% and 94.5% for group G compared with group D values of 65.5% and 83.7%. Numbers of bone marrow nucleated cells, endoCFUs and CFU-GM Radiation decreased the numbers of bone marrow nucleated cells and markedly suppressed hematopoiesis in all irradiated control groups; recovery of nucleated cells started at day 6 after irradiation. Compared to the irradiated controls, the consecutive administration of genistein clearly accelerated hematopoietic recovery, as shown by the increase in bone marrow nucleated cell numbers. At day 21 after irradiation, the number of nucleated cells in group G reached 89.96% of the normal range, in comparison with 56.02% in group S and 56.7% in group O. Furthermore, the numbers of bone marrow nucleated cells in group G were significantly higher than those in group D at days 9, 14 and 21 (Fig. 6). The numbers of endoCFUs, by group, were group S, 4.21; group O, 3.78; group G, 16.91; group D, 13.56. The number of endoCFUs in group G was approximately 3.47-fold higher than that in group O, and no significant difference between groups G and D was found (Fig. 7).
In addition, enhanced numbers of CFU-GM were also observed in mice pre-treated with genistein as well as with DES, and there seemed to be no significant difference between groups G and D (Fig. 8). DISCUSSION One of the major syndromes of hematopoietic system damage after high-dose total-body exposure to ionizing radiation is bone marrow aplasia. It is generally agreed that radiation death in the sublethal dose range is due to impairment of bone marrow hematopoietic function, and that the leucopenia, erythropenia and thrombocytopenia which ultimately develop predispose to infection, hemorrhage and death. Survival brought about by radioprotectors after potentially lethal irradiation is thought to be due primarily to their effect on hematopoietic cells. 20) Accordingly, we used peripheral blood cell counts as indicators of bone marrow function in order to assess the effect of radioprotection on normal tissue critical for survival. The present study revealed that pre-irradiation administration of genistein or DES could increase survival and stimulate the recovery of peripheral hematocytes and of the bone marrow nucleated cells depleted by radiation. Interestingly, the efficacy of genistein in enhancing survival and promoting the recovery of leukocytes, lymphocytes and bone marrow nucleated cells was stronger than that of DES in our experiments, although its protection against the decrease in thrombocyte counts was weaker than that of DES. According to the conclusions of Floersheim et al., 20) it would appear that genistein affords hematological protection both by preventing the destruction of blood cells and by enhancing hematopoietic recovery. That means that not only the circulating blood cells but also the progenitor cells may be protected under irradiation by prior genistein administration. Furthermore, the data from our experiments showed that mice protected with genistein demonstrated much more powerful recovery of endoCFU and CFU-GM numbers after irradiation, though with no significant differences compared to DES pre-treated mice. The enhancement of endoCFU counts in genistein pre-treated irradiated mice in comparison to the irradiated controls indicates the role of genistein in protecting the stem cells and/or stimulating the proliferation of surviving cells. Measures of CFU-GM are good indicators of myeloid haemopoietic activity in animals recovering from exposure to radiation. We accordingly infer that the mechanism by which genistein stimulates hematopoietic recovery may involve enhancement of bone marrow stem cell radiotolerance, which inhibits the decrease in bone marrow stem cell numbers, and promotion of the proliferation of surviving cells. Various mechanisms, such as prevention of damage through inhibition of free radical generation or intensified free radical scavenging, enhancement of DNA and membrane repair, replenishment of dead hematopoietic and other cells, and stimulation of immune cell activity, are considered important for radioprotection. Genistein has several of the above-mentioned properties under different experimental conditions, which might account for its stronger radioprotective efficacy compared with DES. Most radiation damage arises from the interaction of radiation-induced free radicals with biomolecules. Free radical interactions may cause damage to DNA and other cellular macromolecules. Molecules with the ability to scavenge free radicals can therefore prevent radiation damage.
Evidence demonstrates that genistein has strong antioxidant actions, combined with a capacity to activate antioxidant systems, which results in a reduction of the level of free-radical lipid peroxidation products and stabilization of the cellular membrane structure. 27,28) Wei et al. reported that genistein provided protection against non-ionizing ultraviolet-B radiation, through either direct quenching of reactive oxygen species or indirect anti-inflammatory effects, when it was applied to the skin of hairless mice 1 h before exposure. 16) Arora et al. found that soy isoflavonoids could hinder the diffusion of free radicals and thereby decrease the kinetics of free radical reactions, which might help stabilize the cellular membrane structure. 29) Genistein also reduced the frequency of micronucleated reticulocytes in the peripheral blood of mice receiving a sublethal dose of ionizing radiation. 17) Thus, the antioxidant activity of genistein and its ability to protect against radiation-induced cytogenetic damage could contribute to its radioprotective action. Pre-irradiation administration of genistein produced a significant increase in the numbers of leukocytes and lymphocytes, which had been reduced by irradiation. The increase in CFU counts in the spleen, associated with the increase in leukocyte and lymphocyte counts in genistein pre-treated mice in comparison to untreated control mice, indicates an immunostimulatory role of genistein. Previous reports have also demonstrated that genistein has immunomodulatory activity in different experiments. 30,31) Since immunosuppression following radiation exposure and subsequent opportunistic infections are major drawbacks of radiation damage, the immunomodulatory role of genistein may be another important mechanism of its radioprotective efficacy. In addition, genistein possesses some other biological properties that may relate to its radioprotective efficacy. These include its estrogenic activity and its role in signal transduction pathways, where it is an inhibitor of topoisomerase, protein kinases, and caspases involved in apoptotic pathways. 21,22) These properties have previously been associated with radioprotection. [23][24][25][26] On the other hand, genistein has gained increasing attention because of its association with beneficial effects on persons with cancer, cardiovascular disease, high cholesterol levels and osteoporosis, especially its benefits in tumor prevention and therapy. [13][14][15] Some reports showed that genistein used alone, in vivo as well as in vitro, could delay the growth of tumors and induce apoptosis of cancer cells. 32,33) Currently, radiotherapy and chemotherapy are two important methods of cancer therapy. Recently, many studies have demonstrated that genistein shows additive benefits in tumor radiotherapy or chemotherapy, resulting in greater therapeutic efficacy. Some authors indicated that genistein in combination with other agents could delay tumor growth through its antiangiogenic activity. 34,35) Yan et al. reported that genistein could enhance the radiosensitivity of DU145 prostate cancer cells. 36) Hillman et al. also showed that genistein combined with prostate tumor irradiation led to greater control of the growth of the primary tumor and of metastasis to lymph nodes than genistein or radiation alone. 37) Therefore, the use of genistein in radiotherapeutic or chemotherapeutic applications can also be exploited.
In summary, the results of the present study demonstrate that genistein administration before irradiation increases survival and intensifies post-irradiation hematopoietic recovery in irradiated mice. Although our investigations may provide a basis of information for the possible use of genistein as a radioprotector of the hematopoietic system, further studies are necessary to determine the mechanism of its radioprotective action.
2018-04-03T05:55:14.946Z
2005-12-01T00:00:00.000
{ "year": 2005, "sha1": "fc2b56fcb91a2b3ff2c51e9147b240a007486c60", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/jrr/article-pdf/46/4/425/2745961/jrr-46-425.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "2898ff4dd8f2c0609d5bc47dc21c21bac6600859", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
46953595
pes2o/s2orc
v3-fos-license
Multicriteria decision making based on independent component analysis: A preliminary investigation considering the TOPSIS approach This work proposes the application of independent component analysis to the problem of ranking different alternatives by considering criteria that are not necessarily statistically independent. In this case, the observed data (the criteria values for all alternatives) can be modeled as mixtures of latent variables. Therefore, in the proposed approach, we perform ranking by means of the TOPSIS approach based on the independent components extracted from the collected decision data. Numerical experiments attest to the usefulness of the proposed approach, as they show that working with latent variables leads to better results compared to already existing methods. Introduction Many practical situations in multicriteria decision making (MCDM) consist in obtaining a ranking of a set of alternatives based on their evaluation according to a set of criteria [1,2]. The main difference between the existing methods that perform ranking in MCDM is related to the criteria aggregation procedure. For instance, a natural way to perform aggregation is to consider a simple weighted sum [2] over all criteria for a given alternative. Another strategy can be found in the TOPSIS method (TOPSIS stands for Technique for Order Preferences by Similarity to an Ideal Solution) [3]. In this method, one first defines a positive and a negative ideal alternative. Then, aggregation for a given alternative is done by calculating the Euclidean distances between the alternative under evaluation and the (positive and negative) ideal alternatives. The original versions of the aforementioned approaches do not take into account any relation among criteria, which may lead to biased results in the aggregation step. Indeed, if, for instance, two criteria are strongly correlated because they are governed by a common latent factor, then such a latent factor will have a strong influence on the aggregation step. In view of this inconvenience, there are some methods that try to deal with possible relations among the observed criteria [4,5,6,7,8]. Among them, an interesting approach is an extended version of TOPSIS [5,7,8]. In this version, instead of considering the Euclidean distance in the aggregation step, one applies the Mahalanobis distance. Therefore, the calculation of the distance measure takes into account the covariance matrix of the criteria. However, a question that arises is whether the information about the covariance among criteria is sufficient to mitigate the biasing effect of dependent criteria. Motivated by this question, this paper proposes a novel three-step procedure to deal with correlated criteria in decision making problems. In the first step of our proposal, we formulate the problem as a Blind Source Separation (BSS) [9] problem and apply an Independent Component Analysis (ICA) method to estimate the latent variables. The second step comprises the elimination of the permutation and/or scale ambiguities introduced by ICA. In the third step, we apply the TOPSIS approach based on the Euclidean distance to the estimated latent variables in order to obtain a global evaluation of the alternatives, thus allowing a final ranking. Aiming at verifying the proposed ICA-TOPSIS approach, we performed numerical experiments on synthetic data and compared the results obtained by our approach and by the TOPSIS based on the Mahalanobis distance. The rest of this paper is organized as follows.
Section 2 discusses the main theoretical aspects of multicriteria decision making and blind source separation problems. Then, in Section 3, we present the proposed ICA-TOPSIS approach. The numerical experiments are described in Section 4. Finally, in Section 5, we present our conclusions and future perspectives. Theoretical background This section presents the theoretical aspects involved in multicriteria decision making and blind source separation problems. Multicriteria decision making problems and TOPSIS method The most relevant problems in MCDM consist in ranking a set of K alternatives ($A = [A_1, A_2, \ldots, A_K]$) based on a set of M criteria ($C = [C_1, C_2, \ldots, C_M]$). For each alternative $A_i$, $v_{i,j}$ represents its evaluation with respect to the criterion $C_j$. Therefore, in a MCDM problem, we often face the decision matrix (or decision data) $V = (v_{i,j})_{K \times M}$. Based on the decision matrix V and the set of weights $w = [w_1, w_2, \ldots, w_M]$, which represent the "importance" of each criterion $C_j$ in the decision problem, the goal is to aggregate $v_{i,j}$, $j = 1, \ldots, M$, in order to obtain a global evaluation for each alternative $A_i$ and, then, to establish a ranking. Several methods have been developed to deal with MCDM problems. Among them, a widely used one is TOPSIS, developed by Hwang and Yoon [3]. The main idea of this method is to determine the ranking based on the distances between each alternative and the (positive and negative) ideal solutions, as described in the sequel. The following steps describe the algorithm: 1. The first step comprises the normalization of each evaluation $v_{i,j}$, given by $r_{i,j} = v_{i,j}\big/\sqrt{\sum_{l=1}^{K} v_{l,j}^2}$. 2. Based on $r_{i,j}$, we calculate the weighted normalized evaluation, given by $p_{i,j} = w_j\, r_{i,j}$. 3. In this step, we determine the positive ideal solution (PIS) and the negative ideal solution (NIS), given by $PIS = [p_1^+, p_2^+, \ldots, p_M^+]$, where $p_j^+ = \max\{p_{i,j} \,|\, 1 \le i \le K\}$, $j = 1, \ldots, M$, and $NIS = [p_1^-, p_2^-, \ldots, p_M^-]$, where $p_j^- = \min\{p_{i,j} \,|\, 1 \le i \le K\}$, $j = 1, \ldots, M$. 4. Given the PIS and NIS derived in the last step, we calculate the (Euclidean) distances between the evaluation vector representing alternative $A_i$ and both ideal solutions, as follows: $d_i^+ = \sqrt{\sum_{j=1}^{M} (p_{i,j} - p_j^+)^2}$ and $d_i^- = \sqrt{\sum_{j=1}^{M} (p_{i,j} - p_j^-)^2}$. 5. In the last step, we determine the similarity measure of each alternative $A_i$ to the ideal solutions, given by $u_i = d_i^- / (d_i^+ + d_i^-)$, and derive the ranking by sorting $u_i$ in descending order. In this approach, one may note that the criteria are aggregated without taking into account any interaction between them. For example, in scenarios in which the criteria are correlated, i.e., they are composed of a combination of latent variables, disregarding this interaction may lead to biased results. In this context, an extended version of TOPSIS was proposed [7,8], which adopts the Mahalanobis distance [10] (instead of the Euclidean distance) and, therefore, exploits the covariance among criteria. In this version, the distances calculated in step 4 are given by $d_i^+ = \sqrt{(r_i - r^+)\, \Delta\, \Sigma^{-1}\, \Delta\, (r_i - r^+)^T}$ and $d_i^- = \sqrt{(r_i - r^-)\, \Delta\, \Sigma^{-1}\, \Delta\, (r_i - r^-)^T}$, where $r_i = [r_{i,1}, r_{i,2}, \ldots, r_{i,M}]$, $r^+$ and $r^-$ are, respectively, the positive and the negative ideal solutions derived from the normalized data $R = (r_{i,j})_{K \times M}$, $\Delta = \mathrm{diag}(w_1, w_2, \ldots, w_M)$ is the diagonal matrix whose elements are the weights w, and $\Sigma \in \mathbb{R}^{M \times M}$ is the covariance matrix of R. The similarity measure is calculated as described in step 5.
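The five steps above, together with the Mahalanobis variant, can be condensed into a short sketch; this is a minimal illustration assuming all criteria are benefit criteria (larger is better) and a nonsingular covariance matrix, not reference code from the paper:

```python
import numpy as np

def topsis(V, w, mahalanobis=False):
    """Rank K alternatives described by the K x M decision matrix V with
    weights w. With mahalanobis=True, distances to the ideal solutions use
    the covariance of the normalized data R, as in the extended TOPSIS."""
    R = V / np.sqrt((V ** 2).sum(axis=0))          # step 1: normalization
    P = R * w                                      # step 2: weighting
    pis, nis = P.max(axis=0), P.min(axis=0)        # step 3: ideal solutions
    if mahalanobis:
        D = np.diag(w)
        Minv = D @ np.linalg.inv(np.cov(R, rowvar=False)) @ D
        r_pos, r_neg = R.max(axis=0), R.min(axis=0)
        d_pos = np.sqrt(np.einsum('ij,jk,ik->i', R - r_pos, Minv, R - r_pos))
        d_neg = np.sqrt(np.einsum('ij,jk,ik->i', R - r_neg, Minv, R - r_neg))
    else:                                          # step 4: distances
        d_pos = np.linalg.norm(P - pis, axis=1)
        d_neg = np.linalg.norm(P - nis, axis=1)
    u = d_neg / (d_pos + d_neg)                    # step 5: similarity
    return np.argsort(-u), u                       # best alternative first

ranking, scores = topsis(np.random.default_rng(0).random((10, 2)),
                         np.array([0.5, 0.5]))
print(ranking)
```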
Blind source separation problems and independent component analysis Let us suppose a set of signal sources $s(k) = [s_1(k), s_2(k), \ldots, s_N(k)]$ that were linearly mixed according to $x(k) = A\, s(k) + g(k)$ (11), where $x(k) = [x_1(k), x_2(k), \ldots, x_M(k)]$ is the set of mixed signals, $A$ is the mixing matrix, and $g(k) = [g_1(k), g_2(k), \ldots, g_M(k)]$ is an additive white Gaussian noise (AWGN). In this linear case, BSS problems consist in retrieving the signal sources s(k) based only on the observed mixed data x(k), i.e., without knowledge of either s(k) or the mixing matrix A [9]. This can be achieved by adjusting a separating matrix $B \in \mathbb{R}^{N \times M}$ that provides a set of estimates $y(k) = [y_1(k), y_2(k), \ldots, y_N(k)]$, given by $y(k) = B\, x(k)$ (12), which should be as close as possible to s(k). In this scenario, the separating matrix B should converge to the inverse of the unknown mixing matrix A. However, given the permutation and scaling ambiguities inherent in BSS methods [9], B may not be exactly the inverse of A. As discussed later in this paper, we made some assumptions on the problem in order to avoid these inconveniences. There are several approaches to deal with BSS problems. A common one, called ICA, is based on the assumption that the sources are i.i.d. (independent and identically distributed) and non-Gaussian. Given the mixing process expressed in (11), the observed signals are no longer independent and become closer to Gaussian. Therefore, a simple strategy to recover signal sources that are statistically independent is to formulate an optimization problem whose cost function minimizes the Gaussianity (as measured, e.g., by kurtosis or negentropy) of the retrieved signals. An algorithm based on these assumptions is known as FastICA [11]. Another method used in BSS problems is Infomax, proposed by Bell and Sejnowski [12]. This method, as demonstrated by Cardoso [13], is closely related to the maximum likelihood approach, which estimates the separating matrix B from the distribution of x(k). Both strategies will be used in our experiments. The proposed ICA-TOPSIS approach In several MCDM problems the criteria are dependent. For example, consider the case of determining a ranking of K students evaluated according to their grades in sociology, mathematics and physics. It is possible that the grades in mathematics and physics are correlated criteria, since they usually measure similar competences. Therefore, aggregation based on the collected data may lead to biased results. In this case, one may think that a proper analysis should be made on the latent variables $l(k) = [l_1(k), l_2(k), \ldots, l_N(k)]^T$ associated with the collected data V through the mixing process $V^T = A\, l(k) + g(k)$ (13), where $A \in \mathbb{R}^{M \times N}$ represents the mixing process acting on the latent variables l(k) and $g(k) = [g_1(k), g_2(k), \ldots, g_M(k)]$ is an additive white Gaussian noise (AWGN). One may note that equation (13) is similar to (11), with l(k) and $V^T$ representing, respectively, the set of signal sources and the mixed signals. Therefore, aiming at performing the MCDM analysis on the latent variables, as mentioned in Section 1, the application of the Mahalanobis distance in the TOPSIS approach may not be sufficient to deal with dependent criteria, since only the covariance information among criteria is taken into account. In this context, this paper proposes to deal with the problem of dependent criteria in MCDM by applying an ICA-TOPSIS approach, which comprises three steps. In the first one, we formulate a BSS problem whose aim is to recover the latent variables based on the mixed decision data V.
In this formulation, we consider that the number of criteria is equal to the number of latent variables, which leads to the determined case M = N in BSS. Therefore, after estimating the separating matrix B, we obtain the estimated latent variables $\hat{l}(k) = [\hat{l}_1(k), \hat{l}_2(k), \ldots, \hat{l}_N(k)]^T$, given by $\hat{l}(k) = B\, V^T$ (14), similarly as described in (12). The second step comprises the adjustment of the estimated latent variables in order to avoid permutation and/or scale ambiguities. In this procedure, we make the assumption that each diagonal element of the mixing matrix A is positive and greater, in absolute value, than all the off-diagonal elements in the same row, i.e., each latent variable has a positive majority influence on one mixed criterion. Therefore, based on the separating matrix B and, consequently, on the estimated mixing matrix $\hat{A} = B^{-1}$, we perform the following adjustment: - For the first row of $\hat{A}$, we find the column q in which the greatest absolute value is located and permute the first and the q-th columns of $\hat{A}$. In order to correctly reset the estimated latent variables, we also permute the first and the q-th estimates. After repeating this procedure for all rows of $\hat{A}$, we obtain the partially adjusted estimated mixing matrix $\hat{A}_{Adjp}$ and avoid the permutation ambiguity introduced by the BSS method. - Based on the assumption that the diagonal elements of the mixing matrix A are positive, if a diagonal element q of $\hat{A}_{Adjp}$ is negative, we multiply all the elements in the column of q by −1. This leads to the sign inversion of the estimated latent variable $\hat{l}_q$, since equation (13) must remain valid. After verifying all the diagonal elements of $\hat{A}_{Adjp}$ and performing the sign changes, we obtain the final adjusted estimated mixing matrix $\hat{A}_{Adjf}$ and avoid the scale ambiguity given by the −1 factor. To illustrate these adjustments, suppose that the estimated mixing matrix $\hat{A}$ has its largest first-row element (in absolute value) in the second column and that, after the column permutation, the first diagonal element is negative; the adjusted estimates then become $\hat{l}_{Adjf}(k) = [-\hat{l}_2(k), \hat{l}_1(k)]$, which corrects the order and sign of the retrieved sources. After performing the ICA and eliminating the ambiguities, the third step of the proposed approach comprises the application of TOPSIS based on the Euclidean distance to $\hat{l}_{Adjf}(k)$ and the determination of the ranking. Numerical experiments Aiming at verifying the application of the proposed ICA-TOPSIS approach to deal with dependent criteria in MCDM problems, we performed numerical experiments based on synthetic data and compared the results with the ones provided by existing methods. The next section describes the considered data and the obtained results. Data generation In this paper, we performed the experiments based on decision data comprising 100 alternatives and 2 criteria, both with the same importance ($w_1 = w_2 = 0.5$). The latent variables were randomly generated according to a uniform distribution in the range [0, 1]. In order to derive the "collected" observed data V, we considered the mixing matrix $A = \begin{bmatrix} 1.00 & -0.15 \\ 0.30 & 1.00 \end{bmatrix}$ and the mixing process described in (11), in which s(k) and x(k) represent the latent variables and the observed data V, respectively. Moreover, additive noise was applied considering a Signal-to-Noise Ratio (SNR), given by $\mathrm{SNR} = 10 \log_{10}(\sigma^2_{\mathrm{signal}} / \sigma^2_{\mathrm{noise}})$, where $\sigma^2_{\mathrm{signal}}$ and $\sigma^2_{\mathrm{noise}}$ are, respectively, the signal power and the noise power, with SNR values in the range (0, 50] dB.
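The first two steps of the procedure (latent variable estimation plus the permutation and sign adjustment) might look as follows; this sketch uses scikit-learn's FastICA and assumes, as in the text, that the row-wise maxima of the estimated mixing matrix fall in distinct columns:

```python
import numpy as np
from sklearn.decomposition import FastICA

def estimate_latent(V):
    """Estimate latent variables from the K x M decision data V and fix the
    permutation/sign ambiguities under the diagonal-dominance assumption."""
    ica = FastICA(n_components=V.shape[1], whiten='unit-variance',
                  random_state=0)
    L_hat = ica.fit_transform(V)        # K x N estimated latent variables
    A_hat = ica.mixing_.copy()          # estimated mixing matrix (M x N)
    # Permutation fix: move each row's largest-magnitude entry onto the
    # diagonal (assumes the argmax columns are distinct, mirroring the text).
    order = [int(np.argmax(np.abs(row))) for row in A_hat]
    A_hat, L_hat = A_hat[:, order], L_hat[:, order]
    # Sign fix: make every diagonal element of the mixing matrix positive;
    # flipping a column of A_hat and of L_hat together leaves V unchanged.
    signs = np.sign(np.diag(A_hat))
    return L_hat * signs, A_hat * signs
```

The adjusted estimates can then be fed directly to the Euclidean topsis() sketch above to complete the third step.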
Comparison between the considered approaches In order to verify the proposal, we first generate the latent variables and derive the ranking according to the original TOPSIS method (based on the Euclidean distance). This ranking is considered the correct one, since it is obtained directly from the (unknown) latent variables. Then, we perform the mixing process and, given the mixed observed data, we apply the proposed ICA-TOPSIS approach (based on the FastICA and Infomax algorithms), the original TOPSIS, and the TOPSIS based on the Mahalanobis distance. The obtained results are compared according to a performance index called the normalized Kendall tau distance [15], which calculates the percentage of pairwise disagreements between two rankings. This measure is defined by $\tau = N_D \big/ \frac{K(K-1)}{2}$, where $N_D$ is the number of pairwise disagreements between the rankings and K is the number of alternatives. Therefore, τ close to zero indicates that there is no disagreement between the two rankings, i.e., the obtained ranking is the same as the correct one provided by the original TOPSIS method applied to the latent variables. Figure 1 presents the Kendall tau distance for each considered method and SNR value (averaged over 1000 realizations). One may note that the TOPSIS based on the Mahalanobis distance improves on the original version of this method, leading to lower values of τ. However, the best results were obtained by applying ICA-TOPSIS, especially for SNR values greater than 25 dB. Between the FastICA and Infomax algorithms, the former achieved the better performance. Conclusions and perspectives Dependent criteria are an important issue in multicriteria decision making. In order to deal with this problem, several methods have been developed, such as the TOPSIS based on the Mahalanobis distance. In this work, we presented a preliminary discussion of a novel approach used to mitigate the biased results caused by dependent criteria. This approach, called ICA-TOPSIS, comprises the application of independent component analysis to extract the latent variables from the observed decision data and, then, the use of the original TOPSIS to derive the ranking based on the retrieved independent data. Based on the MCDM scenario considered in this work and the obtained results, one may remark that the proposed ICA-TOPSIS approach leads to better results compared to the methods found in the literature. For instance, our proposal achieved lower Kendall tau values compared to the TOPSIS based on the Mahalanobis distance, which is used in several works in the literature. A possible explanation for this result is that the ICA methods exploit the independence among criteria, which is a stronger property than the covariance information used in the TOPSIS based on the Mahalanobis distance. Since we consider an MCDM problem comprising a mixture of latent variables, our proposal can better mitigate the biasing effect of criteria dependence. It is worth mentioning that this work presented initial results on the application of the ICA-TOPSIS approach to MCDM problems. Future works comprise a further investigation of this proposal, especially of the latent variable estimation step. Different numbers of criteria and alternatives will also be considered in new experiments. Moreover, we aim at verifying the performance of the proposed approach on decision problems based on real data.
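The normalized Kendall tau distance used as the performance index can be computed directly from its definition; a small sketch:

```python
from itertools import combinations

def kendall_tau_distance(rank_a, rank_b):
    """Fraction of alternative pairs on which two rankings disagree;
    0 means the rankings are identical. rank_a[i] and rank_b[i] are the
    rank positions of alternative i in the two rankings."""
    K = len(rank_a)
    n_d = sum(1 for i, j in combinations(range(K), 2)
              if (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j]) < 0)
    return n_d / (K * (K - 1) / 2)

print(kendall_tau_distance([1, 2, 3, 4], [1, 3, 2, 4]))  # 1/6, one swapped pair
```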
2018-06-08T13:05:22.900Z
2018-07-02T00:00:00.000
{ "year": 2020, "sha1": "bb1500291c08ec0d322182c37524688c323917ad", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2012.04085", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "a22f5cad2b36dfeabe18d02b1a96fed0676cc45a", "s2fieldsofstudy": [ "Business", "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Engineering" ] }
248018057
pes2o/s2orc
v3-fos-license
Optimal sensor placement and model updating applied to the operational modal analysis of a nonuniform wind turbine tower Test planning is a crucial step in the operational modal analysis (OMA) of wind turbines (WT), and it is an essential part of choosing the best positions for installing sensors on the structure. On the other hand, updating the finite element model (FEM) with the OMA results allows a better prediction of the real structure's dynamic and vibrational behavior. This paper aims to show how the OMA of a nonuniform, two-section wind turbine tower can be performed more effectively, using the required test planning and optimal sensor placement. Then, accordingly, the OMA is used in operating and parked conditions to find the objective bending mode characteristics. Moreover, the updating of the applicable FEM of the multi-sectional wind turbine tower is described. A tailor-made genetic algorithm (GA) is used to find the optimal positions of the MEMS (micro-electro-mechanical system) sensors on the WT under study. The OMA was performed and the acquired data analyzed using the stochastic subspace identification (SSI) method. Based on the OMA results, the FEM is updated by applying the sensitivity method. The results show that a tailor-made GA is a practical and quick approach to finding the optimal positions of the sensors to obtain the best results for the objective modes of the WT. The OMA results, under operating and parked conditions, confirm some modal characteristics of WTs. Based on the sensitivity analysis and engineering judgment, the modulus of elasticity was selected as the parameter for updating. Finally, we found that the updated FEM had less than 1 % error compared to the frequencies obtained from the test. Introduction Providing economic and reliable wind turbines is the most significant challenge for wind turbine designers. Understanding wind turbine dynamics is essential for meeting this requirement (Tittus and Diaz, 2020). The dynamic behavior can be determined or predicted by creating a finite element model and by modal testing. In this respect, using a modal test as a tool to understand wind turbine (WT) dynamics based on measured data is very helpful. Generally, there are two methods for the modal testing of massive structures like wind turbines, namely experimental modal analysis (EMA) and operational modal analysis (OMA). The engineering field that studies the modal properties of systems under ambient vibrations or normal operating conditions is called operational modal analysis (OMA), and it provides useful methods for the modal analysis of many structural engineering areas (Brincker and Ventura, 2015). Since testing large structures with the traditional method, which requires artificial excitation of the structure, is difficult, time-consuming, and costly, OMA has been proposed as a practical and optimal solution for testing wind turbines (Carne et al., 1988). James et al. (1992, 1996) did complementary research on modal testing using natural excitation. Then, Carne and James (2010) published a review of this research and compared OMA versus EMA of wind turbines, revealing the advantage of OMA for wind turbine modal tests. Also, Osgood et al. (2010) performed a modal field test on an onshore wind turbine and compared the results. Ozbek et al. (2013), and other studies by Lorenzo et al. (2015), Allen et al. (2011) and Tcherniak et al.
(2011), discussed the most crucial challenges in the operational modal analysis of wind turbines. To address these challenges, careful test planning for the OMA is essential. Several matters, such as the test objectives, sensor placement, equipment, measurement duration, and FE analysis, should be determined. Brincker and Ventura (2015) reviewed effective test planning for a successful OMA of wind turbines. Zierath et al. (2018) present a contribution that summarizes comprehensive experimental modal analysis techniques applied to a 2 MW industrial wind turbine. In that study, CMA (classical modal analysis) and OMA techniques are applied to a rotor blade, while the dynamics of the entire wind turbine with a locked rotor are analyzed by means of operational modal analysis. For the OMA, different identification procedures are applied, and the resulting modal parameters are compared to each other. One of the most critical steps of the test planning procedure is the optimal placement of the available sensors on the wind turbine components to reach the test objectives. In this regard, many techniques have been proposed for the optimal sensor placement problem in the last 2 decades; Maul et al. (2007) reviewed the literature. In recent years, computational intelligence approaches have been applied effectively to optimal sensor placement (OSP). The genetic algorithm (GA), as a computational intelligence method, is based on the theory of natural evolution. Jung et al. (2015) investigated the optimal layout of a flexible two-dimensional rectangular plane using the genetic algorithm method and compared it with the optimal layouts proposed by general methods like EI, EI-DPR, EVP, and ADPR. The results show that the GA gives the best results compared to the other methods. Schulze et al. (2016) used a GA to find the optimal locations of 19 sensors for the OMA of wind turbine blades, leading to high-quality, optimized results. Downey et al. (2017) developed an optimal sensor placement within a hybrid dense sensor network to construct accurate strain maps for large-scale structural components; the objective function and genetic algorithm were experimentally validated for a cantilever plate under three loading cases. Soman and Malinowski (2019) present a novel implementation of the genetic algorithm (GA) to improve sensor network coverage for damage detection using guided-wave structural health monitoring. Also, the practical design of a wind turbine needs a finite element or numerical model. Model updating is essentially a process of adjusting specific parameters of the finite element model. The sensitivity method is probably the most successful of the many approaches to updating finite element models of engineering structures based on vibration test data. Camargo et al. (2019) investigated the dynamic behavior of a reinforced and post-tensioned concrete structure for applications in wind turbine towers by considering the modal parameters obtained from OMA results. Also, since the vibration behavior originates from the modes' inherent properties, forces exciting the system at resonant frequencies yield large vibration responses that lead to discomfort or even damage. First, this paper focuses on selecting the best locations of the available sensors on the nonuniform tower of the studied wind turbine, a configuration rarely addressed, using genetic algorithms to capture its bending modes. Then, the results of the wind turbine's OMA reveal its dynamic behavior.
Finally, a sensitivity analysis was performed using the finite element model (FEM) to understand which parameters have the most significant effect on the modal frequencies. Also, model adjustments were performed by updating the selected parameters to obtain the same values as in the experimental results. Optimal sensor placement (OSP) A successful wind turbine OMA depends closely on the quality of the data, which is obtained through careful test planning. So, the critical step in obtaining good-quality data is that the test has been planned carefully and executed accordingly. Due to the enormous size of wind turbines and the limited number of sensors, an essential issue in test planning is to find the best locations for the existing sensors in order to reach the test objectives. The structure has many nodes onto which a sensor may be mounted; however, because of the limited number of sensors, only some locations can be instrumented. Notwithstanding this, from a practical perspective, the OMA requires optimized sensor locations to obtain as much information on the structural system as possible. In this study, the operational modal analysis aims to obtain the modal parameters of the tower's first and second bending mode pairs (fore-and-aft, FA; side-to-side, SS) of a 100 kW wind turbine installed at a research site. The studied wind turbine has three blades, a magnetic generator, and a medium-speed gearbox, with a multi-sectional, nonuniform steel tower of about 40 m, as shown in Fig. 1. OSP methodology The aim of selecting the optimal sensor positions is to determine the best sensor locations for obtaining precise response data and capturing the structural dynamic behavior. So, it is essential to select the nodes on the large structure that extract the objective modes with the least mutual dependency. Therefore, the OSP procedure is an optimization problem with a suitable fitness function measuring the dependency of the objective mode shapes. Fitness function Considering the abovementioned concept, we should select a criterion that quantifies the relation between the modes. Allemang (2003) reviewed the development of the original modal assurance criterion (MAC) and revealed how a simple statistical concept became a very useful tool in experimental modal analysis and structural dynamics. Pastor et al. (2012) pointed out that the modal assurance criterion (MAC) is a suitable and most popular tool for evaluating this linear dependence. Bakhary et al. (2014) compared some of the most useful fitness functions (MAC; FIM, Fisher information matrix; MSE, mean square error) and concluded that the MAC function performs optimal sensor placement best. We determine the auto-MAC matrix built from the target mode shapes as $\mathrm{MAC}(\phi_i, \phi_j) = |\phi_i^T \phi_j|^2 \big/ \left[(\phi_i^T \phi_i)(\phi_j^T \phi_j)\right]$ (1), where $\phi_i$ and $\phi_j$ are the ith and jth mode shape vectors at the sensor position nodes, respectively. All diagonal elements of the auto-MAC matrix are equal to 1, since each mode shape is fully correlated with itself (the case i = j). In contrast, for i ≠ j, the off-diagonal elements take values between 0 and 1, depending on the linear dependency between the mode shape pair i and j. Thus, the off-diagonal terms of the auto-MAC matrix can be used to check the mode shapes' linear independence for optimal sensor placement. For this purpose, the sum of the off-diagonal terms should be as close to 0 as possible. So, the OSP optimization problem aims to find the sensor layout whose auto-MAC matrix has the smallest off-diagonal elements.
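As a small illustration of Eq. (1), the auto-MAC matrix and the off-diagonal sum used for sensor selection can be computed as follows; this is a sketch in which Phi is a hypothetical mode shape matrix restricted to candidate sensor DOFs:

```python
import numpy as np

def auto_mac(Phi):
    """Auto-MAC matrix of the mode shape matrix Phi, whose columns are the
    target mode shape vectors evaluated at the selected sensor DOFs."""
    G = Phi.T @ Phi                 # Gram matrix of the mode shape vectors
    return (G ** 2) / np.outer(np.diag(G), np.diag(G))

def off_diagonal_sum(Phi, sensor_dofs):
    """Sum of the off-diagonal auto-MAC terms for a candidate layout
    (to be minimized, per the discussion above)."""
    mac = auto_mac(Phi[sensor_dofs, :])
    return mac.sum() - np.trace(mac)
```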
The fitness function (F) can be defined as the sum of the off-diagonal elements of the auto-MAC matrix, $F = \sum_{i \ne j} \mathrm{autoMAC}_{i,j}$ (2), which is to be minimized. Genetic algorithm A genetic algorithm is an optimization algorithm that evolves analogously to the Darwinian principle of natural selection. To obtain the optimal solution of design problems, the GA is implemented to progress similarly to natural evolution: a combination of selection, mutation, crossover, and recombination evolves individuals from an initial population (Zhao et al., 2020). The optimization process through a genetic algorithm is carried out either randomly or by selecting candidate design variables to create the initial population. This initial population is then evolved through natural-selection-like operators so that newer, better generations approach the optimization goals. The quality and value of the produced generations are evaluated based on a fitness function; depending on the goal, this fitness function can be either maximized or minimized. To perform the genetic algorithm, it is necessary to define a coding system to express the optimization variables; the design variables are coded in binary form. In order to apply a genetic algorithm to the sensor placement optimization problem, we have the following steps. (1) Create an initial population randomly, and calculate the fitness values of the strings. (2) Select the fittest individuals according to the fitness values, and apply the crossover and mutation operations. (3) Calculate the fitness values of the new strings. This study used the genetic algorithm toolbox in MATLAB to select the sensors' optimal positions; it was linked to the FEM model in Ansys to evaluate the fitness function of each generated sensor placement. In optimizing the sensor positions by a genetic algorithm, a set of possible positions for the sensor arrangement is considered to be an individual. A simple way to encode an individual is to use a binary vector combining the possible positions for sensor installation, $s_p = [\delta_1, \delta_2, \ldots, \delta_n]$, $\delta_i \in \{0, 1\}$ (3), where the length of $s_p$ is equal to the number of available degrees of freedom (DOF) for installing the sensors (n). A value of 1 in the vector indicates that one sensor is located at the DOF of a node. Therefore, the sum of the elements of $S_{p0}$ (as an initial individual) equals the number of sensors available in the optimization. Thus, the studied population in this optimization includes a set of $s_p$ layouts (possible arrangements for installing sensors at the available locations and the related degrees of freedom of the system). After coding the individuals, it is necessary to determine the fitness function to achieve the optimization goals. The fitness function indicates an individual's ability and determines how it ascends to the next generation. The value index is defined as $\mathrm{fitness}(s_p) = \sum_{i \ne j} \mathrm{autoMAC}_{i,j}(s_p)$ (4); as mentioned above, the objective function for optimal sensor placement is defined to minimize the off-diagonal elements of the auto-MAC matrix evaluated at the selected DOFs. To start the optimization, it is necessary to generate an initial population $S_{p0}$ of random individuals and to evaluate their fitness. The selection criterion of the genetic algorithm then determines which individuals are passed on to the next generation. In the GA process, a population size of 50, a crossover rate of 90 %, and a mutation probability of 10 % are used.
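A minimal GA sketch tying together the encoding of Eq. (3), the fitness of Eq. (4), the 10 % elitism, and the termination rule of Eq. (5) described just below; it reuses off_diagonal_sum() from the sketch above, and details such as the uniform crossover and swap mutation are assumptions rather than the authors' exact implementation, which used the MATLAB GA toolbox:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_layout(n_dofs, n_sensors):
    """Binary individual s_p (Eq. (3)) with exactly n_sensors ones."""
    s = np.zeros(n_dofs, dtype=int)
    s[rng.choice(n_dofs, size=n_sensors, replace=False)] = 1
    return s

def repair(child, n_sensors):
    """Enforce exactly n_sensors ones after crossover/mutation."""
    ones, zeros = np.flatnonzero(child == 1), np.flatnonzero(child == 0)
    if len(ones) > n_sensors:
        child[rng.choice(ones, len(ones) - n_sensors, replace=False)] = 0
    elif len(ones) < n_sensors:
        child[rng.choice(zeros, n_sensors - len(ones), replace=False)] = 1
    return child

def ga_osp(Phi, n_sensors, pop_size=50, p_cross=0.9, p_mut=0.1, max_gen=200):
    """Minimize the off-diagonal auto-MAC sum over sensor layouts."""
    fit = lambda s: off_diagonal_sum(Phi, np.flatnonzero(s))
    pop = [random_layout(Phi.shape[0], n_sensors) for _ in range(pop_size)]
    for _ in range(max_gen):
        f = np.array([fit(s) for s in pop])
        if f.mean() - f.min() < 0.05:        # termination rule (Eq. (5))
            break
        order = np.argsort(f)
        elite = [pop[i].copy() for i in order[:pop_size // 10]]  # top 10 %
        children = []
        while len(children) < pop_size - len(elite):
            i, j = rng.choice(order[:pop_size // 2], size=2)
            a, b = pop[i], pop[j]
            if rng.random() < p_cross:       # uniform crossover (assumed)
                child = np.where(rng.random(len(a)) < 0.5, a, b)
            else:
                child = a.copy()
            if rng.random() < p_mut:         # bit-swap mutation keeps count
                child = child.copy()
                on, off = np.flatnonzero(child == 1), np.flatnonzero(child == 0)
                if len(on) and len(off):
                    child[rng.choice(on)], child[rng.choice(off)] = 0, 1
            children.append(repair(child.copy(), n_sensors))
        pop = elite + children
    return min(pop, key=fit)                 # best layout found
```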
To ensure that the best individuals are not eliminated by random selection, the top 10 % of each generation passes automatically to the next stage; in this way the best individuals are not lost, and the remaining 90 % of the population is generated using the genetic operators. The above process continues until the defined termination criterion is met, yielding the individual that gives the best sensor positions. The termination criterion is defined as $f_{ave} - f_{min} \le 0.05$ (5), i.e., the difference between the fitness function's best value $f_{min}$ and its average value $f_{ave}$; the termination value for stopping the optimization is assumed to be 0.05. Assumptions To perform an effective and straightforward optimization procedure, some assumptions and limitations must be considered. The finite element model of the wind turbine tower was built using beam elements in Ansys. Since the goal is to find the bending modes of the wind turbine tower in the fore-and-aft and side-to-side directions, the U_X and U_Y DOFs of the available nodes of the finite element model were considered as the individual's elements. Limitations and lack of access to some parts of the tower led us to remove the related nodes from the optimization process. Also, six sensors were available to be mounted on the tower to measure the objective bending modes. Optimization results The mode shapes considered for selecting the sensors' optimal arrangement were the first and second bending modes in the fore-and-aft (U_X) and side-to-side (U_Y) directions of the wind turbine tower structure under study. The tower mode shape vectors were extracted from Ansys and defined as optimization inputs in the MATLAB optimization toolbox. The fitness plot for reaching the best sensor configuration using the genetic algorithm is presented in Fig. 3: the vertical axis shows the fitness value obtained for each generation, and the horizontal axis the number of generations produced in the optimization problem until the best sensor configuration was obtained. This diagram compares the best value of the fitness function with the average fitness value obtained among each generation's population. It is observed that, with an increasing number of generations, the optimal and average values of the fitness function converge to lower values. Evidently, the minimum (best) fitness value quickly tends to a constant, and the average fitness value steadily approaches the best fitness value as the number of generations increases, which indicates good convergence. Based on the OSP results, considering the available nodes on the actual tower structure, the six sensors (nos. 1, 2, 5, 6, 7, and 8) were mounted at levels of 38.8 m (Sect. 3; U_X, U_Y), 26 m (Sect. 2; U_X, U_Y), and 15.33 m (Sect. 1; U_X, U_Y) to detect the first and second tower bending mode pairs (FA/SS) of the wind turbine. The final sensor arrangement is shown in Fig. 4. Finite element modeling (FEM) Finite element modeling helps to provide a preliminary understanding of the structural dynamics, natural frequencies, and mode shapes of the system's primary modes. Thus, finite element modeling should be carried out before the modal test, as part of test planning, and should specify requirements such as the measurement duration, sampling frequency, and sensor placement. In this research, the 3-D finite element model of the wind turbine tower was created with Ansys software.
The parametric FE model was created with changeable design parameters to facilitate the subsequent model updating. Various types of Ansys library finite elements were tested to achieve a better numerical representation of the wind turbine tower behavior, and finally, SHELL281 was selected. The Ansys CERIG command, which creates a massless web of rigid bars, was also used in the modeling. The extra masses of the nacelle, rotor hub, and blades are considered as point masses located at the tower top. To determine an adequate mesh grid size, many mesh sizes were examined in the FEM, and the comparison shows that the first frequency remains approximately fixed for element sizes smaller than 30 cm. Based on the modal analysis of the FEM model, the natural frequencies and related mode shapes of the wind turbine tower were obtained and are presented in Table 1 and Fig. 5.

Operational modal analysis (OMA)
The wind turbine tower's OMA was carried out using MEMS accelerometers, an eight-channel data logger, and its software, based on the test planning. Based on the modal frequencies of interest and their expected magnitudes, ADXL320 MEMS sensors were chosen, as they are sufficiently sensitive and have a suitable measurement and frequency range. The sensors were calibrated and equipped with amplifiers, considering the predicted cable lengths, to prevent signal noise as much as possible. A sampling frequency of 100 Hz was chosen; the data were then resampled to 24.8 Hz for this study. To achieve an acceptable OMA analysis, we selected a measurement time length sufficient to identify the system's lowest natural frequency. Brincker and Ventura (2015) proposed the total measurement time length (T_tot) by Eq. (6), as follows:

T_tot > 20/(2ζf_min) = 10/(ζf_min),

where f_min is the lowest natural frequency and ζ is the structural damping ratio. Since the first natural frequency of the wind turbine tower was estimated to be about 1.6 Hz, based on the results of the initial finite element model, and with a damping ratio of about 0.01, the minimum data collection time for the analysis is as follows:

T_tot = 10/(0.01 × 1.6) = 625 s ≈ 10 min.

The data were gathered in the two following cases during the test time: 1. The parked condition, in which the wind turbine is not operating, the rotor speed is 0, and the blade pitch angle is fixed at a large value (≈ 90°). 2. The operating condition, in which the rotor rotates at constant speed, the pitch angle is at its minimum, and the nacelle is aligned with the wind direction. From the recorded data, matched to the SCADA (supervisory control and data acquisition) data, 10 min datasets satisfying OMA's time-invariance assumptions were checked and screened. The few selected datasets were used in the operational modal testing software, Artemis, to extract the required modal parameters. Finally, an operational modal analysis of the datasets was done using stochastic subspace identification (SSI) methods in the software package. Stochastic subspace identification (SSI) refers to standard algorithms for extracting modal parameters in operational modal analysis. The SSI method operates in the time domain, estimating the assumed time-invariant matrices of a linear dynamic system (Overschee et al., 2012; Boonyapinyo and Janesupasaeree, 2010). The SSI method can identify the modal parameters under a random (unmeasured) input signal, as required in operational modal analysis.
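The covariance-driven variant of SSI can be summarized in a compact sketch. The following Python code is a generic textbook implementation, not the algorithm of the Artemis package: output correlations are stacked into a block Hankel matrix, its SVD yields the observability matrix, and the state matrix recovered from the shift structure gives the poles, i.e., the natural frequencies and damping ratios. The toy signal parameters (a 1.6 Hz mode, 24.8 Hz sampling, 625 s record) mirror the values used in the test planning.

```python
import numpy as np

def cov_ssi(y, fs, order, num_blocks):
    """Covariance-driven stochastic subspace identification (SSI-cov) sketch.

    y          : (n_ch, n_samples) measured accelerations
    fs         : sampling frequency [Hz]
    order      : model order n (~ 2 x number of modes of interest)
    num_blocks : number of block rows/columns p in the Hankel matrix
    """
    n_ch, n_s = y.shape
    p = num_blocks
    # Output correlation matrices R_i = E[y_{k+i} y_k^T], i = 1 .. 2p
    R = [y[:, i:] @ y[:, :n_s - i].T / (n_s - i) for i in range(1, 2 * p + 1)]
    # Block Hankel matrix of correlations
    H = np.block([[R[i + j] for j in range(p)] for i in range(p)])
    U, s, _ = np.linalg.svd(H)
    O = U[:, :order] * np.sqrt(s[:order])            # observability matrix
    A = np.linalg.pinv(O[:-n_ch, :]) @ O[n_ch:, :]   # state matrix (shift structure)
    mu = np.linalg.eigvals(A)                        # discrete-time poles
    lam = np.log(mu) * fs                            # continuous-time poles
    freqs = np.abs(lam) / (2 * np.pi)                # natural frequencies [Hz]
    zetas = -lam.real / np.abs(lam)                  # damping ratios
    keep = lam.imag > 0                              # one pole per conjugate pair
    return freqs[keep], zetas[keep]

# Toy check: a noisy 1.6 Hz oscillation, as for the first tower mode.
fs = 24.8
t = np.arange(0, 625, 1 / fs)
rng = np.random.default_rng(1)
y = np.vstack([np.cos(2 * np.pi * 1.6 * t + ph) for ph in (0.0, 0.7)])
y += 0.1 * rng.normal(size=y.shape)
f, z = cov_ssi(y, fs, order=2, num_blocks=20)
print(np.round(f, 2))   # should recover a frequency near 1.6 Hz
```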
As in the frequency domain decomposition (FDD) method, the inputs in the SSI method are unknown: the unmeasured excitation is modeled as white noise with a Gaussian distribution and zero mean. In the SSI method there is no need for the fast Fourier transform (FFT) to transform the signal from the time domain to the frequency domain, since the time-domain data are utilized directly. This property eliminates the leakage error in the data and the variation in the stiffness matrix due to the use of a windowing function (Mohammadi and Nasirshoaibi, 2017). Accordingly, the stabilization diagrams under parked and operating conditions are presented in Figs. 6 and 7, respectively. Other stable modes observed in the stabilization diagram may occur due to the frequencies of other elements connected to the tower, such as rotor blades, rotation harmonics, and excitation of generators and foundations. Since, in this study, the sensors were installed only on the wind turbine tower, it is not possible to accurately identify the origin of the other stable modes. According to the results (Table 2), the first and second tower bending pair frequencies, around 1.5 and 9 Hz in the parked state, remained approximately constant under operating conditions. Therefore, changing the rotor speed and pitch angle does not change these bending modes. In comparison, the changes in the damping ratios obtained in these two cases are relatively significant. Also, the damping ratios of the tower's SS modes in case 1 (PA, the parked condition) are higher than those of the FA modes; inversely, in case 2 (OP, the operating condition), the damping ratios are higher in the FA direction than in the SS direction. To explain this: since the blade angle is at a maximum in case 1 (PA), the blade surface is perpendicular to the SS direction; a high drag force then acts in the SS direction, so the SS damping ratio is higher than that in the FA direction. On the other hand, in the operating wind turbine (OP), the blade angle is close to 0, and more resistance occurs in the FA direction. Therefore, the damping coefficients in the wind turbine's operating mode are higher for the FA modes than for the SS modes. Generally, the tower's bending damping ratios in the operational case are higher than those in the parked case (especially for the first FA mode). This behavior is due to the aerodynamic damping present in the operating case. Aerodynamic damping has its origin in the wind load acting on the rotor or, more accurately, in the interaction between the wind flow and the motion of the structure. Kuhn (2001) described aerodynamic damping and its effect on wind turbine performance.

Model updating
The FE model can be updated so that it reproduces the results obtained from the wind turbine tower's operational modal analysis. Since the finite element modeling was carried out and analyzed for the parked condition, the modal frequencies obtained from the OMA test in this case are compared with the FEM results, as shown in Table 3. In the model updating process, every parameter considered in an FE model can be a candidate updating parameter. Some parameters that can be improved in the primary finite element model are Young's modulus, density, joint specifications, Poisson's ratio, thicknesses, and model dimensions. Many references have suggested methods for selecting the updating parameters. Most of the proposed methods are based on sensitivity analysis.
One of the simplest and most efficient proposed methods is to combine the sensitivity analysis with engineering judgment based on knowledge of the original model. As mentioned, the primary purpose of updating the finite element model is to minimize the difference between the model frequencies and the natural frequencies obtained from the test. This optimization problem is solved in the software. The method is a sensitivity-based method in which the physical parameters are changed and, as a result of this change, the mass and stiffness parameters of the structure are updated. This makes it possible to weigh the structure's physical parameters based on their effect on the structure's dynamic response, which is critical because the natural frequencies and mode shapes have different uncertainty levels. Assuming that the parameter exists in the finite element model (with an initial value and an amount of variation), the frequency sensitivity ∂ω_i/∂p to the parameter needs to be obtained. The corresponding frequencies are first obtained in the FE software for the initial value p_0, giving ω_i, i = 1, 2, ..., m. The parameter is then changed to p = p_0 + ∆p, the finite element model is solved again, and the frequencies of these conditions are calculated as ω'_i, i = 1, 2, ..., m. According to the definition, the natural frequency's sensitivity to changes in the parameter is as follows:

∂ω_i/∂p ≈ (ω'_i − ω_i)/∆p.

The finite element model has many parameters that can be changed, but only the parameters that affect the modes are used to calculate the sensitivity. Based on engineering judgment, in the finite element model of this study the parameters (and their initial values) that do not change the dimensional characteristics of the structure were selected for the model updating: E = 200 GPa, ν = 0.3, and ρ = 7850 kg m−3. According to the theory of sensitivity analysis, to find the parameters to which the first and second natural frequencies of the wind turbine tower structure (first FA, first SS, second FA, and second SS) are most sensitive, optimization tools in Ansys software were used. Based on the sensitivity analysis, the model sensitivity values to the desired parameters are shown in Table 4. The elastic modulus and density parameters significantly impact the natural frequencies of the tower bending modes. Since changing the tower density would change its weight and its actual characteristics, the model is updated by changing the structure's modulus of elasticity. As stated in the sensitivity analysis, the modulus of elasticity was selected to update the studied wind turbine's FE model based on the frequencies obtained from the OMA. In this procedure, the error function (ER) is the relative difference between the natural frequencies of the FEM (f_i) and the test results (f_i^e), and the objective function aims to minimize this error for the first tower bending modes (ER ≤ 1 %). As the tower of the studied wind turbine is divided into two main parts (for simplicity of manufacturing and erection), the FEM model is created in two pieces with separate property parameters. So, E1 and E2 were the parameters varied to update the model. The updating procedure continued until the error limit was reached, and the elastic moduli of piece 1 (E1) and piece 2 (E2) were obtained as 176 and 178 GPa, respectively. Finally, the natural frequencies of the updated FE model are compared to those from the operational modal test in the parked condition (Table 5).
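The finite-difference sensitivity and the updating loop can be outlined in a few lines of code. The sketch below is illustrative rather than the authors' Ansys procedure: `fe_frequencies` is a hypothetical closed-form stand-in for a full FE modal solve (frequency scaling with the square root of stiffness), the measured frequencies are assumed values, and the simple sensitivity-based correction rule is chosen for clarity.

```python
import numpy as np

def fe_frequencies(E1, E2):
    """Stand-in for an FE modal solve: returns tower frequencies [Hz].

    Hypothetical closed form (frequency ~ sqrt(stiffness)) used only to
    make the sketch runnable; a real workflow would call the FE solver.
    """
    E_ref = 200e9
    f_ref = np.array([1.62, 1.64])          # assumed FA/SS pair at E_ref
    return f_ref * np.sqrt(0.5 * (E1 + E2) / E_ref)

def sensitivity(solve, p0, dp, idx):
    """Finite-difference sensitivity d(omega)/dp for parameter index idx."""
    p = list(p0)
    f0 = solve(*p)
    p[idx] += dp
    f1 = solve(*p)
    return (f1 - f0) / dp

f_exp = np.array([1.55, 1.57])              # measured (OMA) frequencies, assumed
params = [200e9, 200e9]                     # initial E1, E2 [Pa]
for _ in range(50):
    f_num = fe_frequencies(*params)
    err = np.abs(f_num - f_exp) / f_exp     # relative error ER per mode
    if err.max() <= 0.01:                   # stop at ER <= 1 %
        break
    for i in range(2):
        S = sensitivity(fe_frequencies, params, 1e9, i)
        # Sensitivity-based correction, split between the two tower pieces
        params[i] -= np.mean((f_num - f_exp) / np.where(S == 0, np.inf, S)) / 2

print([round(p / 1e9, 1) for p in params], np.round(fe_frequencies(*params), 3))
```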
Conclusion
Wind turbines have complicated dynamic behaviour, so we need to extend our knowledge about their dynamics. Operational modal analysis is one of the best ways to obtain real information about wind turbine dynamics. Optimal placement of the available sensors (as a step of test planning) is required to obtain accurate and acceptable results from the OMA of large wind turbine structures. In this paper, a genetic algorithm was used to find the best sensor positions on the wind turbine tower. The fitness function is defined by the auto-MAC matrix, the most popular tool for evaluating the linear dependence between mode shapes. Some assumptions are then applied to limit the available nodes based on the real structure, which accelerates the optimization procedure. This optimization procedure puts the six sensors on three levels in two directions (U_X, U_Y) of the tower to obtain the first and second bending pair modes of the wind turbine. Also, to extend the study, two additional sensors were mounted on top of the tower. As a result, using a GA to find optimized sensor locations on a wind turbine is a practical and quick approach. The wind turbine's operational modal analysis was then done using the SSI method, and the tower bending modes were obtained for two conditions (parked/operating). The natural bending frequencies were approximately constant between the parked and operating wind turbine, but the wind turbine damping ratios varied significantly with rotor speed and pitch angle. Finally, parametric model updating was used to match the natural bending frequencies of the FEM to the test results. By sensitivity analysis and engineering judgment, the modulus of elasticity was selected to update the model. As the wind turbine tower contains two assembled parts, the moduli of elasticity of the two parts (E1 and E2) were updated. The correlation between the finite element model and the experimentally obtained modal frequencies improved significantly, with less than 1 % error. This study demonstrates an effective procedure and the application of optimization tools to provide an acceptable test plan and model updating for the wind turbine tower.

Data availability. All data used in this paper can be obtained on request from the corresponding author.

Author contributions. This research arises from a doctoral thesis under the supervision of MM and the advisement of SZ. MT, MM, and SZ designed the experiments, and MT carried them out. MT analyzed the test data, developed the model, and performed the optimizations. Finally, MM and SZ reviewed the results and approved them. MM prepared the paper with contributions from all co-authors.

Competing interests. The contact author has declared that neither they nor their co-authors have any competing interests.

Disclaimer. Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Review statement. This paper was edited by Dario Richiedei and reviewed by three anonymous referees.
Effect of Selective Encapsulation of Hydroxypropyl-β-cyclodextrin on Components and Antibacterial Properties of Star Anise Essential Oil

Star anise essential oil (SAEO) is a plant essential oil with good antibacterial activity, but its applications are limited due to its high volatility, strong smell, and unstable physical and chemical properties. The effect of selective encapsulation of SAEO by hydroxypropyl-β-cyclodextrin (HPCD) on its composition, volatile stability, and antibacterial activity was investigated. The GC-MS results indicated that the number of components decreased and the relative contents of the SAEO components changed after encapsulation. Most of the components in SAEO were successfully encapsulated by HPCD, as supported by FTIR and 1H NMR data. According to the molecular modeling results, the three guest molecules (trans-anethole, estragole, and trans-foeniculin) were all docked in the cavity of HPCD on the isoallyl (or allyl) side. The volatile stability of SAEO before and after encapsulation was evaluated by electronic nose, and the results confirmed that encapsulation significantly reduced the irritating smell of SAEO and gives the clathrate a sustained-release effect. Furthermore, in the antibacterial test, the selective encapsulation by HPCD improved the inhibition effect of SAEO on Rhizopus stolonoifer, Saccharomyces cerevisiae, and E. coli and its antibacterial stability over 24 h.

Introduction
Star anise (Illicium verum, Hooker f.) belongs to the Magnoliaceae or Magnolia family. Its fruit is one of the most important spices, and it is indigenous to Southeastern China. Moreover, it has many beneficial functions due to its antioxidant [1], antibacterial, liver cancer preventative [2], and insecticidal characteristics [3]. Star anise essential oil (SAEO), which is extracted from star anise fruits, accounts for 3-3.5 wt.% of fresh fruit and exceeds 8 wt.% of dried fruit by steam distillation [4]. SAEO is widely used in food and medicine because of its good biological activity [5]. It is extensively used in baked goods, confections, and alcoholic and soft drinks [6]. In addition, it can alleviate inflammatory responses [7] and is a common flavor in medicinal tea, cough mixtures, and pastilles. In recent years, the spoilage microorganisms in food have become resistant to synthetic antimicrobial agents, which has aroused researchers' interest in the study of natural antimicrobial agents. Natural ingredients in many plants can kill or inhibit the growth of harmful microorganisms in food. There is an increased interest in essential oils as alternative agents for the control of spoilage microorganisms in food.

Results and Discussion
It can be seen from Table 1 that the composition and relative contents of SAEO changed after encapsulation, which may affect its antibacterial properties. Thirteen main components of SAEO were identified by GC-MS, representing 97.09% ± 2.35% of all the components in SAEO (Table 1). The major components were identified as trans-anethole (91.38% ± 0.98%), estragole (2.55% ± 0.41%), and trans-foeniculin (2.15% ± 0.65%). This result is similar to that of Aly et al. [14], though the content of the compounds is slightly different.
During the extraction of the encapsulation complex, 12 components were identified, which constituted 99.00% ± 1.93% of the total components. Among these components, the relative contents were dominated by trans-anethole (95.36% ± 1.09%), estragole (2.23% ± 0.33%), and trans-foeniculin (0.91% ± 0.40%). Furthermore, the main components of the SAEO were the same before and after encapsulation. One of the components was not identified in the extract due to its low content. In addition, the relative contents of the other SAEO components all declined noticeably, except for trans-anethole and cis-anethole, which indicates that trans-anethole and cis-anethole are more readily encapsulated in HPCD. These changes may be due to the differences in structure and polarity of the different constituent molecules in the SAEO and the selective encapsulation by HPCD. The encapsulation effect of the HPCD on the guest molecules may be closely related to the structure of the guest molecules. According to Table 1, the structures of the three guest molecules with relatively high contents were similar, i.e., an isoallyl (or allyl) group is connected to the benzene ring, and an ether bond is connected to the benzene ring at the para-position. However, the group attached to this common structure in trans-foeniculin was much more complex than that in trans-anethole and estragole.
Consequently, it was more difficult for trans-foeniculin to enter the HPCD cavity and form a clathrate, which may have resulted in the poor encapsulation effect of HPCD on trans-foeniculin.

FT-IR Spectra Studies
The IR spectra can be used to identify and analyze the compounds through the vibrational and rotational transitions of the molecules. Changes in the IR absorption peaks of the host and guest molecules can provide important information regarding the formation of the encapsulation complexes. Figure 1 presents the infrared spectra of SAEO, the HPCD, and their encapsulation complex. According to the IR spectra of the SAEO, very strong absorption peaks were observed at 3020 cm−1 and 3008 cm−1, which represent the C-H stretching vibration peaks of the benzene ring and the C=C, respectively. The four sharp absorption peaks between 1609 cm−1 and 1447 cm−1 (1609, 1507, 1464, and 1447 cm−1) are assigned to the stretching vibrations of the benzene ring framework. The stretching vibration peaks of C-O-C are represented by the peaks at 1248, 1175, and 1038 cm−1. The C-H bending vibration peaks of the substituted aromatics are represented by the peaks at 842 cm−1 and 790 cm−1. The relative content of trans-anethole in the SAEO was calculated to be 91.38% ± 0.98%; therefore, the IR spectrum of the SAEO was very similar to the spectrum of trans-anethole [21].
The above information indicated that the molecules of the main constituents in the SAEO contain C=C bonds and aromatic ether bonds. This inference is consistent with the above-mentioned GC-MS analytical results. Figure 1 reveals that the IR spectrum of the complex is very similar to that of HPCD. In addition, some characteristic absorption peaks of SAEO (3020, 1609, and 790 cm−1) were not observed in the IR spectrum of the inclusion complex; these bands are masked due to overlapping with the more intense bands of HPCD. However, the absorption peaks at 1507 and 1248 cm−1 exhibit a decrease in intensity, whereas that at 842 cm−1 exhibits a slight shift, due to the stretching vibration of C-O-C and the bending vibration of C-H of the para-substituted benzene [29,30]. This can be explained by the stretching vibrations of the benzene framework and the aromatic ether bond in the SAEO molecules being restricted after formation of the inclusion complex, together with the low guest content in the inclusion complexes. The above information shows that the benzene ring and the aromatic ether bond of the SAEO molecules entered the cavity of the HPCD and indirectly validates that some SAEO components were successfully encapsulated by the HPCD.

1H NMR Spectra Analyses
1H NMR can provide valuable information about the spatial position of guest molecules in the cyclodextrin cavity, as well as the formation and dissociation of the inclusion complex in the solvent. H3 and H5 are atoms on the inner wall of the cyclodextrin cavity. When guest molecules enter the cavity, the chemical shifts of the H3 and H5 atoms change due to the interaction between the guest molecule and the hydrophobic cavity of the cyclodextrin [31]. Figure 2 presents the 1H NMR spectra of SAEO, the inclusion complex, and the HPCD. The 1H NMR spectrum of SAEO exhibited obvious proton peaks at 7.3 and 6.8 ppm, which indicates that the molecular structures of the components of SAEO contain a benzene ring. The proton peaks near 6.3 and 6.1 ppm indicate the presence of a double bond in the molecular structure of the chemical components of SAEO. The proton peaks near 3.3 and 3.7 ppm indicate that the SAEO components contain a C-O-C structure. The proton peak near 1.8 ppm is a characteristic peak of H on a methyl or methylene group linked to a double bond. These results are in good agreement with the GC-MS results. The 1H NMR spectrum of the encapsulation complex exhibited the proton peaks of both HPCD and SAEO. Figure 2B,C presents the chemical shift changes of the protons of the D-glucopyranose units in the HPCD molecules. No changes were observed for the H1 atoms in the mid-structure and the H6 atoms at the outermost side of the HPCD cavity, whereas the H2 and H4 atoms outside the cavity shifted slightly towards the low field. The H3 atoms near the wider edge of the cavity visibly shifted towards the high field, and the H5 atoms in the depth of the cavity also exhibited a high-field shift. Based on the 1H NMR analysis of the SAEO and the inclusion complex, the benzene ring structure and the double bond entered the cavity of HPCD. The stronger shielding effect of the benzene ring and the double bond resulted in an obvious increase in the electron cloud density around the H3 and H5 atoms and a decrease in their chemical shifts; as a result, the H3 and H5 atoms shifted towards the high field.
These changes indicate that the inclusion complex has been successfully formed.

Molecular Modeling Studies
At present, molecular modeling based on molecular mechanics has been widely applied to characterize the three-dimensional structure of inclusion complexes with cyclodextrins [32]. The above results indicate that the main components of SAEO in the encapsulation compound were trans-anethole, estragole, and trans-foeniculin, for which the possible molecular models and corresponding three-dimensional encapsulation structures were generated according to the PM3 method in Hyperchem 8.0. The models of the three types of inclusion complexes obtained by the methods in Section 3.6 are presented in Figure 3. The ∆E of model A (1.595 kcal/mol) of the trans-anethole molecule was positive and significantly higher than that of model B (−9.067 kcal/mol); given that the encapsulation process reduces the energy of the CDs, it is unreasonable to generate model A of the trans-anethole molecule. The ∆E of model B (−7.881 kcal/mol) was slightly lower than that of model A (−7.323 kcal/mol) for the estragole molecule. The ∆E of model B (−14.583 kcal/mol) was significantly lower than that of model A (−1.973 kcal/mol) for trans-foeniculin. Therefore, model B of the three guest molecules exhibited the most reasonable inclusion complex structure. Figure 3(B1-B3) indicates that all three guest molecules were inserted into the cavity of HPCD on the isoallyl (or allyl) side. Moreover, the isoallyl (or allyl) group was deeply inserted, and the benzene ring was observed near the wide edge of the HPCD, whereas the ether bonds were exposed outside the HPCD cavity. The formation of the inclusion complex was largely a result of the insertion of the benzene ring of the three molecules into the hydrophobic HPCD cavity, assisted by hydrogen bonding, which is consistent with the analysis results of FT-IR and 1H NMR.
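The model-selection criterion used above can be reproduced in a few lines. The Python snippet below is illustrative only; it applies the binding-energy definition ∆E = E_complex − (E_host + E_guest) from the Methods to the ∆E values quoted in the text and flags the lower-energy (more stable) docking mode for each guest.

```python
# Binding energies Delta_E [kcal/mol] quoted in the text for each guest
# molecule and docking mode (A: ether-linkage side first, B: isoallyl/
# allyl side first); more negative means a more stable inclusion complex.
delta_E = {
    "trans-anethole":   {"A": 1.595,  "B": -9.067},
    "estragole":        {"A": -7.323, "B": -7.881},
    "trans-foeniculin": {"A": -1.973, "B": -14.583},
}

for guest, models in delta_E.items():
    best = min(models, key=models.get)   # lowest Delta_E = preferred model
    print(f"{guest}: model {best} ({models[best]:+.3f} kcal/mol)")
# All three guests select model B, i.e., insertion on the isoallyl/allyl side.
```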
Volatile Stability
SAEO contains a large number of volatile components with a strong smell; these volatile losses may affect its antibacterial activity, and the pungent smell limits its application in food preservation. This volatile loss can be effectively reduced by encapsulating SAEO in HPCD, which masks the pungent smell without diminishing its biological activity [25]. Electronic nose technology is an artificial intelligence technology that simulates the human sense of smell and is widely used in the quality analysis and classification of essential oils [33]. The results of the comparison of the volatile stability of SAEO, the inclusion complex, and HPCD are shown in Figure 4. Figure 4A shows that the volatile components in the emulsions of SAEO and the inclusion complex were sensitive to sensors no. 2, no. 7, and no. 9, which indicate "broad range", "sulfur-organic", and "sulf-chlor" responses, respectively. The electronic nose describes the smell characteristics of SAEO as "sulfur-organic" and "sulf-chlor", which may be related to its strong volatile irritation. Figure 4A also shows a difference in the shape of the smell radar maps of the SAEO and the clathrate. This result corroborates the changes in the composition and content of SAEO after HPCD encapsulation found in the GC-MS analysis. The signal values detected by the no. 7 and no. 9 sensors in Figure 4B show that the "sulfur-organic" and "sulf-chlor" smell of the SAEO emulsion was significantly stronger than that of the encapsulation solution. This indicates that the embedding in HPCD masks part of the irritating smell of SAEO and gives the encapsulation compound a sustained-release effect.
This indicates that the embedding of HPCD masks the partial irritating smell of SAEO, which makes the encapsulation compound have a sustained release effect. In Vitro Antimicrobial Activity According to the previous literature, SAEO is beneficial in its ability to inhibit microorganism growth [34]. However, its instability and strong volatility challenges its widespread application in food preservation. The present study described the capsulation method of SAEO to the HPCD cavity, of which the restraining effects on Rhizopus stolonoifer, Saccharomyces cerevisiae, and E. coli before and after encapsulation were compared. According to Table 2 and Figure 5, SAEO and the inclusion complex had inhibitory effects on Rhizopus stolonoifer, Saccharomyces cerevisiae, and E. coli, the inhibition effect against Rhizopus stolonoifer was better than that of Saccharomyces cerevisiae and E. coli, which is consistent with the reported results in the literature [12]. In addition, under the same concentration, the antibacterial effect of the clathrate was obviously better than that of the free SAEO. It is possible that the relative content of trans-anethole, the main antibacterial component of encapsulated SAEO, increased (from 91.38% to 95.36%) due to the encapsulation selectivity of HPCD. Moreover, the encapsulation by HPCD improved the water solubility of SAEO so that the antibacterial components in SAEO can more easily penetrate the cell membrane of microorganism, thereby exerting the inhibitory effect. Figure 5 also shows that the antibacterial stability of the inclusion complex was better than that of the free SAEO in 24 h, especially at two concentrations of 5.400 × 10 −2 mmol/mL and 0.108 mmol/mL, which may be due to the encapsulation slowing the release of SAEO. In Vitro Antimicrobial Activity According to the previous literature, SAEO is beneficial in its ability to inhibit microorganism growth [34]. However, its instability and strong volatility challenges its widespread application in food preservation. The present study described the capsulation method of SAEO to the HPCD cavity, of which the restraining effects on Rhizopus stolonoifer, Saccharomyces cerevisiae, and E. coli before and after encapsulation were compared. According to Table 2 and Figure 5, SAEO and the inclusion complex had inhibitory effects on Rhizopus stolonoifer, Saccharomyces cerevisiae, and E. coli, the inhibition effect against Rhizopus stolonoifer was better than that of Saccharomyces cerevisiae and E. coli, which is consistent with the reported results in the literature [12]. In addition, under the same concentration, the antibacterial effect of the clathrate was obviously better than that of the free SAEO. It is possible that the relative content of trans-anethole, the main antibacterial component of encapsulated SAEO, increased (from 91.38% to 95.36%) due to the encapsulation selectivity of HPCD. Moreover, the encapsulation by HPCD improved the water solubility of SAEO so that the antibacterial components in SAEO can more easily penetrate the cell membrane of microorganism, thereby exerting the inhibitory effect. Figure 5 also shows that the antibacterial stability of the inclusion complex was better than that of the free SAEO in 24 h, especially at two concentrations of 5.400 × 10 −2 mmol/mL and 0.108 mmol/mL, which may be due to the encapsulation slowing the release of SAEO. Materials Star anise was purchased from Guangxi Rongxian Guoyao Agricultural Products Co., Ltd. 
(Yulin, China); SAEO was generated by hydrodistillation and dried with anhydrous sodium sulfate. HPCD (purity > 99%, average Mw = 1380) was purchased from Sigma-Aldrich Shanghai Trading Co. Ltd. (Shanghai, China). Rhizopus stolonoifer (separated from baked food), Saccharomyces cerevisiae, and E. coli were provided by the microbial laboratory at the Anyang Institute of Technology. Sabouraud dextrose broth (SDB) and tryptic soy broth (TSB) were purchased from Qingdao High Tech Industrial Park Hopebio Technology Co., Ltd. (Qingdao, China). Other reagents were of analytical grade. The water used was double-distilled and deionized.

Preparation of the Inclusion Complex of SAEO with HPCD
The SAEO inclusion complexes were prepared by the freeze-drying method according to the published procedure [35]. SAEO was added to 25 mL of HPCD aqueous solution at a 1:1 mole ratio. The mixture was ultrasonically treated and magnetically stirred for 96 h at 30 °C in the dark. The encapsulation solution was filtered through 0.45 µm filters to eliminate any undissolved compounds after the complexation reaction. The filtrates were lyophilized at −60 °C and 100 Pa in a Millrock Technology BT85 freeze dryer (Millrock Technology, Inc., Kingston, NY, USA).

GC-MS Analyses
SAEO was accurately weighed and dissolved in n-hexane to prepare sample solution A at a concentration of 0.003 mg/mL. Ten milligrams of the SAEO/HPCD encapsulation compound was accurately weighed and dissolved in 5 mL of deionized water, after which 10 mL of n-hexane was added. The sample underwent ultrasonic extraction for 10 min to prepare sample solution B. The constituents of SAEO before (sample solution A) [36] and after (sample solution B) encapsulation were analyzed by GC-MS (Agilent 7890A-5975C, Santa Clara, CA, USA).
The GC conditions were as follows: J and W 122-5532 quartz capillary column (30 m × 250 µm × 0.25 µm); an inlet temperature of 250 °C; an oven temperature maintained at 60 °C for 1 min, then raised by 2 °C/min to 130 °C and subsequently by 5 °C/min to 240 °C, where it was maintained for 1 min; a split ratio of 15:1; carrier gas (99.999% He); a flow rate of 0.95 mL/min; and an injection volume of 1 µL. The MS conditions were as follows: electron impact (EI) ion source; an ion source temperature of 230 °C; an MS quadrupole temperature of 150 °C; and a mass scan range of 30-500 amu.

FT-IR Spectra
Approximately 1 mg of HPCD and 1 mg of the inclusion complex were each placed in an agate mortar with about 100 mg of dry potassium bromide and ground to a fine powder. Each sample was then mixed well, loaded into a mold, pressed into a tablet, and tested in a Bruker Tensor II FT-IR spectrometer (Karlsruhe, Germany). The SAEO tablet sample was prepared by adding a drop of SAEO to a potassium bromide tablet.

1H NMR Spectra
The 1H NMR spectra were recorded at 25 °C with a Bruker AM-400 NMR spectrometer (Karlsruhe, Germany) at 500 MHz. SAEO, the encapsulation complex, and HPCD were dissolved in DMSO, placed in NMR tubes with 5 mm inner diameters, and tested separately.

Molecular Modeling
According to the above-mentioned test method in Section 3.3, the main constituents of SAEO before and after encapsulation were trans-anethole, estragole, and trans-foeniculin. Trans-anethole, estragole, trans-foeniculin, and HPCD were molecularly simulated using Hyperchem 8.0 issued by Hypercube, Inc. (Gainesville, FL, USA). The structures of trans-anethole, estragole, trans-foeniculin, and HPCD were first constructed in Hyperchem 8.0 and then optimized by the PM3 method. The ether-linkage side or the isoallyl (or allyl) side of the guest molecules was then inserted into the cavity of HPCD from the wide edge, giving structural model A or model B, respectively. Both models were minimized with the conjugate gradient optimizer until a root mean square (RMS) value of 0.01 kcal/(mol Å) was obtained [37]. The ∆E of the minimum-energy model was calculated using Equation (1):

∆E = E_complex − (E_host + E_guest), (1)

where E_host, E_guest, and E_complex (kcal/mol) represent the calculated energies of the HPCD, the main component molecules of the SAEO, and the encapsulation complex, respectively.

Volatile Stability
The ratio of the resistance (G/G0) of the volatile gas in the sample to that of the blank was obtained by the 10 gas sensors of the PEN3 electronic nose (Schwerin, Germany) to describe the volatile stability of the sample; the smaller the ratio, the better the volatile stability. Twelve milligrams of SAEO, or an amount of the encapsulation complex containing the same quantity of SAEO, was added to a sample bottle with 5 mL of deionized water. The volatile stability data were detected by the 10 gas sensors of the PEN3 electronic nose after the samples stood for 30 min at room temperature. The same method was used to detect the volatile stability of HPCD. All tests were performed in triplicate.

In Vitro Antimicrobial Activity
Antimicrobial activity and MIC analyses of free and encapsulated SAEO were performed against Rhizopus stolonoifer, Saccharomyces cerevisiae, and E.
coli (provided by the microbial laboratory at the Anyang Institute of Technology) in 96-well microtiter plates, following the methods of several researchers with minor modifications [25,38,39]. These strains represent typical spoilage organisms commonly found in food products. Rhizopus stolonoifer and Saccharomyces cerevisiae were cultured on SDB at 28 °C, and E. coli was cultured on TSB at 37 °C. Suspensions of the strains (Rhizopus stolonoifer, Saccharomyces cerevisiae, and E. coli) at a concentration of approximately 10^5 CFU/mL were prepared, and 1 mL of each suspension was added to 150 mL of liquid medium. Three hundred microliters of inoculated culture medium was added to each well of the microplate. Furthermore, 0.000, 6.750 × 10−4, 3.375 × 10−3, 6.750 × 10−3, 1.350 × 10−2, 2.700 × 10−2, 5.400 × 10−2, and 0.108 mmol/mL of SAEO, or the inclusion complex containing the same amounts of SAEO, were added to the inoculated culture medium, respectively. That is, the concentrations of SAEO in the inoculated liquid medium containing the inclusion complex were also 0.000, 6.750 × 10−4, 3.375 × 10−3, 6.750 × 10−3, 1.350 × 10−2, 2.700 × 10−2, 5.400 × 10−2, and 0.108 mmol/mL, respectively. The culture medium solution was mixed with a vortex oscillator to ensure good distribution. Incubations of Rhizopus stolonoifer and Saccharomyces cerevisiae were carried out in a dark room at 28 °C for 24 h, and the incubation of E. coli was performed at 37 °C for 24 h. The OD value of the liquid culture medium containing the inoculum was then measured at 600 nm once every hour with a Bioscreen C automatic analyzer for microbial growth curves (Turku, Finland). The inhibition rate was calculated using Equation (2):

Inhibition rate (%) = (∆OD_C − ∆OD_S)/∆OD_C × 100, (2)

where ∆OD_C and ∆OD_S are the changes in OD at 600 nm of the control sample and of the samples treated with SAEO or the SAEO/HPCD inclusion complex, respectively, after a given incubation time. Each test was performed in triplicate. The MICs of free and encapsulated SAEO were determined using a microdilution assay [25]. The antimicrobial inclusion complexes were added to the microtiter plates as aqueous suspensions, while the SAEO was added as aqueous microemulsions. The concentration of inclusion complexes added to the test wells ranged from 62.5 to 1000 mg/mL (equivalent to 1.25-20 mg/mL of SAEO concentration based on the entrapment efficiency), while the concentration of free SAEO ranged from 1.25 to 20 mg/mL. Negative control wells were prepared with sterile culture medium containing the tested samples (free and encapsulated SAEO). Positive control wells were prepared with microbial suspension inoculated in culture medium. The microplates were incubated at 37 °C or 28 °C for 24 h, and the turbidity was determined at 600 nm. The MICs of free and encapsulated SAEO were recorded as the lowest concentration at which no visible growth (≤0.05 change in OD600) was observed in the wells after 24 h of incubation.
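As a concrete illustration of Equation (2), the short function below computes the inhibition rate from control and treated OD600 readings. The formula follows the reconstruction of Eq. (2) given above, and the sample numbers are hypothetical, not measured values.

```python
def inhibition_rate(od_control, od_sample):
    """Inhibition rate (%) per the reconstructed Eq. (2).

    od_control, od_sample : (OD600 at t0, OD600 at t) for the untreated
    control and for the SAEO- or inclusion-complex-treated culture.
    """
    d_od_c = od_control[1] - od_control[0]   # Delta OD_C
    d_od_s = od_sample[1] - od_sample[0]     # Delta OD_S
    return (d_od_c - d_od_s) / d_od_c * 100.0

# Hypothetical readings after 24 h of incubation (not measured values):
print(f"{inhibition_rate((0.08, 0.95), (0.08, 0.30)):.1f} % inhibition")
# -> 74.7 %: growth in the treated well is strongly suppressed.
```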
Conclusions
The present study investigated the characteristics of SAEO after encapsulation in HPCD. The results of the FT-IR and 1H NMR spectra confirmed that SAEO was successfully encapsulated in the HPCD cavity; hydrophobic SAEO became a water-soluble encapsulation complex. However, the composition of SAEO and the relative contents differed from those before encapsulation: apart from cis-anethole and trans-anethole, the relative contents of most components decreased. The components that contained an ether bond were more easily encapsulated into the HPCD cavity compared to the other components, which may result in changes in the antimicrobial properties of SAEO. The results of the molecular modeling indicated that, in the embedded modes, the three main components were all inserted into the cavity of HPCD on the isoallyl (or allyl) side. In addition, the components with higher contents formed complexes with HPCD more easily. The volatile stability of SAEO after encapsulation was evaluated by electronic nose, and the data showed that the "sulfur-organic" and "sulf-chlor" smell of the encapsulation solution was significantly weaker than that of the SAEO-water mixture due to the embedding effect of the HPCD. This indicates that encapsulation significantly reduced the irritating smell of SAEO and gives the clathrate a sustained-release effect. The results of the antibacterial test indicated that the inhibitory effect of SAEO on Rhizopus stolonoifer, Saccharomyces cerevisiae, and E. coli markedly increased following the formation of the encapsulation complex, owing to the improved water solubility of SAEO. Furthermore, the antibacterial stability of the inclusion complex over 24 h was generally superior to that of free SAEO on account of the slow-release effect of encapsulation.

Author Contributions: C.Y. and Y.S. conceived and designed the experiments; G.Z. performed the experiments; G.Z. and C.Y. analyzed the data; and G.Z. and C.Y. wrote the paper. All authors read and approved the final manuscript.
Characterizing the energy gap and demonstrating an adiabatic quench in an interacting spin system

Spontaneous symmetry breaking occurs in a physical system whenever the ground state does not share the symmetry of the underlying theory, e.g., the Hamiltonian. It gives rise to massless Nambu-Goldstone modes and massive Anderson-Higgs modes. These modes provide a fundamental understanding of matter in the Universe and appear as collective phase/amplitude excitations of an order parameter in a many-body system. The amplitude excitation plays a crucial role in determining the critical exponents governing universal non-equilibrium dynamics in the Kibble-Zurek mechanism (KZM). Here, we characterize the amplitude excitations in a spin-1 condensate and measure their energy gap for different phases of the quantum phase transition. At the quantum critical point of the transition, finite size effects lead to a non-zero gap. Our measurements are consistent with this prediction, and furthermore, we demonstrate an adiabatic quench through the phase transition, which is forbidden at the mean field level. This work paves the way toward generating entanglement through an adiabatic phase transition.

The amplitude mode and phase mode describe two distinct excitation degrees of freedom of a complex scalar field ψ = Ae^{iφ} appearing in many quantum systems, such as the order parameter of the Ginzburg-Landau superconducting phase transition [9] and the two-component quantum field of the Nambu-Goldstone-Anderson-Higgs matter field model [1,3-5]. In a spin-1 condensate, the transverse spin component plays the role of an order parameter in the quantum phase transition (QPT), with S⊥ being zero in the polar (P) phase and nonzero in the broken-axisymmetry (BA) phase (Fig. 1a). Representing the transverse spin vector as a complex number, S⊥ = Sx + iSy, with the real and imaginary parts being expectation values of spin-1 operators, the amplitude mode corresponds to an amplitude oscillation of S⊥. The amplitude mode can be studied in different spinor phases by tuning the relative strength of the quadratic Zeeman energy per particle q ∝ B² and the spin interaction energy c of the condensate [10], by varying the magnetic field strength B (Fig. 1). In the polar phase, both the spinor energy H and the ground state (GS) spin vector have SO(2) rotational symmetry about the vertical axis (Fig. 1a), and there are two degenerate collective amplitude modes along the radial directions about the GS located at the bottom of the parabolic bowl. These amplitude excitations are gapped modes, which vary both the amplitude of S⊥ and the energy H. In a second-quantized picture, they correspond to pairwise excitations from the |mF = 0⟩ to the |mF = ±1⟩ Zeeman spin states. In the BA phase, the spinor energy H acquires a Mexican-hat shape, with the GS occupying the minimal-energy ring of radius √(4c² − q²)/(2|c|). The GS spin vector (orange arrow in Fig.
1) spontaneously breaks the SO(2) symmetry and acquires a definite direction [11]. This broken symmetry induces a massless Nambu-Goldstone (NG) mode in which it costs no energy for the spin vector to rotate about the vertical axis. Recently, the magnetic dipolar interaction was used to open a gap in the NG mode by breaking the rotational symmetry of the spin interaction [12]. In our condensate, the NG phase mode remains gapless because the conserved and zero magnetization (⟨S_z⟩ = 0) suppresses the magnetic dipolar interaction. The other excitation, the amplitude mode, manifests itself as an amplitude oscillation of the transverse spin in the radial direction. This amplitude mode is similar to the massive mode in the Goldstone model [1]. In this work, we measure the amplitude modes in spin-1 Bose-Einstein condensates (BEC) through measurements of very low amplitude excitations from the ground state. The results show a quantitative agreement with gapped excitation theory [8,13,14] and provide a new platform to probe the amplitude excitation, which plays a crucial role in the KZM in spinor condensates. Although in the thermodynamic limit the amplitude mode energy gap goes to zero at the quantum critical point (QCP), a small size-dependent gap persists for finite size systems [8]. The measurements of the energy gap near the QCP are challenging; however, our results are consistent with a small non-zero gap. Furthermore, by using a very slow optimized magnetic field ramp, we demonstrate an adiabatic quench across the QCP. Such adiabatic quenches in finite sized systems underlie proposals for generating massively entangled spin states including Dicke states [8] and are fundamental to the ideas of adiabatic quantum computation [15].
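To make the order-parameter picture concrete, here is a minimal mean-field sketch (in Python) of the ground-state transverse spin across the transition, using the BA-phase ring radius √(4c^2 − q^2)/(2|c|) quoted above; the grid and sample points are illustrative choices, not values from the experiment.

```python
import numpy as np

# Mean-field order parameter across the QPT: S_perp = 0 in the P phase
# (q/|c| > 2) and S_perp = sqrt(4c^2 - q^2)/(2|c|) on the minimum-energy
# ring in the BA phase (q/|c| < 2).  Working in units of |c| (set |c| = 1).
q_over_c = np.linspace(0.0, 3.0, 301)
s_perp = np.where(q_over_c < 2.0,
                  np.sqrt(np.clip(4.0 - q_over_c**2, 0.0, None)) / 2.0,
                  0.0)
for r in (0.0, 1.0, 1.99, 2.0, 2.5):
    i = np.argmin(np.abs(q_over_c - r))
    print(f"q/|c| = {r:4.2f}  ->  S_perp = {s_perp[i]:.3f}")
# S_perp rises continuously from 0 at the QCP (q/|c| = 2) to 1 at q = 0,
# the hallmark of a continuous symmetry-breaking transition.
```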
The experiments use a tightly confined 87Rb BEC with N = 40,000 atoms in optical traps such that spin domain formation is energetically suppressed. The Hamiltonian describing this spin system in a bias magnetic field B along the z-axis is [16-19]: where Ŝ^2 is the total collective spin-1 operator and Q̂_z is proportional to the spin-1 quadrupole moment, Q̂_zz. The coefficient c is the collisional spin interaction energy per particle integrated over the condensate, and the quadratic Zeeman energy per particle is q = q_z B^2 with q_z = 72 Hz/G^2 (hereafter, h = 1). The longitudinal magnetization Ŝ_z is a constant of the motion (= 0 for these experiments); hence the first order linear Zeeman energy p Ŝ_z with p ∝ B can be ignored. The spin-1 coherent states can be represented on the surface of a unit sphere shown in Fig. 1b with axes {S⊥, Q⊥, Q_z}, where S⊥ is the expectation value of the transverse spin and Q_z = 2ρ_0 − 1, where ρ_0 is the fractional population in the |F = 1, m_F = 0⟩ state. In this representation, the coherent dynamics evolve along the constant energy contours of H = (1/2)c̃S⊥^2 − (1/2)q(Q_z − 1), where c̃ = 2Nc [6,7] (red and green orbits in Fig. 1b). In the mean field (large atom number) limit, quantum fluctuations can be ignored and the wavefunction for each spin state, m_F = 0, ±1, can be represented as a complex vector with components ψ_{0,±1} = √ρ_{0,±1} exp(iθ_{0,±1}). Using Bogoliubov analysis [13] and mean field theory [7], the energy gaps of the amplitude mode in the P phase and the BA phase in the long wavelength limit correspond to the oscillation frequency of small excitations in ρ_0 from the GS; here the energy gap is ∆_E (≡ ∆_P and ∆_BA) and the coherent oscillation frequency is f (≡ f_P and f_BA). Although these relations show a vanishing gap at the QCP, quantum fluctuations due to finite atom number size effects result in a non-zero gap. In the quantum theory, the energy gap can be exactly calculated from the eigenenergy values of the Hamiltonian in Eq. 1 (Supplementary Information). Fig. 1c shows the energy gap between the GS and first excited state with a small nonzero gap at the QCP as a result of a finite atom number. Fig. 1d shows the relation between the energy gap at the QCP and the atom number for condensates ranging from 10^1 to 10^5 atoms, which scales as ∆_E ∝ N^(−1/3) [8]. The energy gap curve compares well to the oscillation frequencies of the GS spinor population ρ_0 obtained from quantum simulations (Supplementary Information) for a broad range of atom numbers (red markers in Fig. 1d). The equivalence relation between the energy gap and the coherent oscillation frequency in Eq. 2 is a general statement connecting the amplitude modes to the observable dynamics and is key to this study. Energy gap measurement. To characterize the energy gap ∆_E, we measure coherent dynamics for states initialized close to the GS (Fig. 1b) for different values of q/|c| ranging from 0.1 to 3 and fit the measurements to sinusoidal functions to determine the oscillation frequencies (Supplementary Information). For each q/|c| value, several measurements of the population ρ_0 are made for a series of initial states approaching the GS, as illustrated in Fig. 2a. The GS population ρ_{0,GS} can be obtained by minimizing the spinor energy (Supplementary Information) [7]: ρ_{0,GS} = 1 (P phase), ρ_{0,GS} = 1/2 + q/(4|c|) (BA phase) (3). The oscillation amplitude of ρ_0 has a lower limit given by the Heisenberg standard quantum limit (SQL = N^(−1/2)) projected onto the ρ_0-axis (∝ Q_z-axis in Fig.
1b) [19]; hence the best estimate of the energy gap is obtained from the measurement with the lowest observable oscillation amplitude. An alternate method to determine the energy gap for states centered on the pole is to measure the oscillations of the transverse spin fluctuations, ∆S⊥. Although this method requires much more data because the signal is in the fluctuations instead of the mean value, it provides higher contrast for states localized at the pole. Measurements obtained with this technique at the QCP are shown in Fig. 2b for a state prepared in the polar GS (Supplementary Information). The results of the energy gap measurements are shown in Fig. 2c for both methods. FIG. 2. Energy gap measurements. a, Coherent oscillation data (circle markers) obtained at q/|c| = 2.03. In clockwise order, the oscillation amplitude decreases as the initial state is initialized closer to the GS. Each data point is an average of 10 measurements and the data are fit to a sinusoidal function with a varying frequency (solid line). b, The time evolution of ∆S⊥ data (square markers) at the QCP (q/|c| = 2) are fit to a sinusoidal function (solid line). Each data point is the noise of 45 measurements. The corresponding simulation is represented by an orange curve with the shaded region being q/|c| = 2 ± 0.005. c, The energy gap ∆E for different q/|c| values are obtained from the frequency fits of coherent oscillation data. Circle (triangle) markers are obtained from an average of 10 (or 3) measurements of ρ0 coherent oscillation, and the square marker is the frequency fit of the ∆S⊥ dynamics. The theoretical energy gap is represented by the purple curve. The inset plot shows the region around the QCP with the shaded orange region being the energy gap for an initially imperfect GS (see text). (Color online.) Overall, the measurements capture the characteristics of the energy gap predicted by gapped excitation theory for a spin-1 BEC [8,13,14]. In the P phase, the energy gap data show good agreement with the theoretical prediction within the uncertainty of the measurements. In the BA phase, the measured gap data are also in reasonable agreement with the theory; however, the measured values are 20% lower than the theory for the smallest values of q/|c| < 1. This is possibly a result of small violations of the single mode approximation or the presence of a small thermal fraction, both of which would be more significant in this spin interaction-dominated regime. In a study of an antiferromagnetic condensate, using an initial state (ρ0 = 0.5) prepared far away from the antiferromagnetic GS (ρ0,AGS = 1), slightly lower oscillation frequencies than the theory were also observed [20]; it was suggested that this resulted from excess magnetization noise from the RF pulse of the initial state preparation; however, this noise is not large enough to explain the difference in our measurements. In the neighborhood of the QCP, the energy gap decreases dramatically. As shown in the inset to Fig.
2c, the measurements are in good agreement with the theoretical prediction in this region. For measurements at q = 2|c|, the minimum measured gap is ∆_E = 0.15(1)|c|, which is consistent with the non-zero gap predicted by the quantum theory, ∆_{E,th} = 0.165|c|; here c ≈ −7.5(1) Hz (Supplementary Information). We point out, however, that there are experimental challenges to these measurements, which can tend to over-estimate the measured value of the gap. The initial state is prepared in the high magnetic field GS (q/|c| = 38). This state has symmetric √N fluctuations in the S⊥, Q⊥ plane. When the condensate is rapidly quenched to a lower q/|c| for the energy gap measurement, this projects the condensate to slightly excited states of the final q/|c| Hamiltonian. The subsequent evolution of this state will have an oscillation frequency higher than the calculated gap frequency, particularly in the region 1.95 ≤ q/|c| ≤ 2.05. We can accurately calculate this effect, and the results are indicated by the orange shaded region in Fig. 2c. A further complication in the measurement at the QCP is that the value q/|c| is not truly constant during the measurement of the gap, but drifts to slightly higher values because of a reduction of density due to the finite lifetime of the condensate. The spin interaction energy depends on the density and atom number as c(t) ∝ n(t) ∝ N(t)^(2/5). For these measurements, the condensate lifetime was 1.6(1) s, which results in a drift of ∆q/|c| = 0.05 in 100 ms in the neighborhood of the QCP. The atom loss is taken into account in the simulations, an example of which is shown in Fig. 2c, and the energy gap is determined by the frequency at t = 0. Despite these challenges to the measurements near the QCP, the data indicate the presence of a non-zero gap that is of the same size as predicted by theory. The ⟨Ŝ_z⟩ = 0 spin-1 BEC below the QCP has a relativistic-type dispersion relation, in which the energy gap ∆_E/2 is equivalent to the rest mass energy of a quasiparticle, m_s, and the spin wave velocity c_s plays the role of the speed of light [13,14]. Our experiments are in the long wavelength limit in which the wave vector approaches zero, k → 0. The effective action of the system is identical to the theoretical description of the superfluid-insulator transition in the Bose-Hubbard model, and hence the phase transition belongs to the same universality class as the XY model [14]. The Higgs mode manifest as a collective excitation has been observed in the superfluid/Mott insulator transition [21] as an amplitude fluctuation of a complex order parameter, in the XY model of antiferromagnetic materials as an amplitude fluctuation of the spin vector [22], and in superconducting systems [23-25]. In a similar vein, the amplitude mode of a spinor BEC in the proximity of the QCP can be regarded as a Higgs mode associated with the amplitude fluctuation of the transverse spin. This Higgs mode is different from the Anderson-Higgs mechanism, in which a massless gauge field in combination with the spontaneous symmetry breaking can generate a massive boson [4,5]. Adiabatic quantum phase transition.
In the thermodynamic (N → ∞) limit, the vanishing gap at the QPT prohibits adiabatic crossing between phases and gives rise to excitations characterized by the Kibble-Zurek mechanism (KZM). However, the opening of the gap at the QCP due to finite size effects makes it possible, in principle, to cross the QCP adiabatically using a carefully tailored ramp from q ≫ 2|c| → q < 2|c|, while remaining in the ground state of the Hamiltonian. Recently, adiabaticity in sodium spin-1 condensates has been studied [26]; however, these experiments were performed using condensates with non-zero longitudinal magnetization (⟨Ŝ_z⟩ ≠ 0) that do not have a QCP. Here, we focus on the very challenging case of small energy gaps at the quantum critical point. Due to the small size of the gap, the ramp in q needs to be very slow in the region of q ≈ 2|c| in order to maintain adiabaticity. To allow longer ramps, we employed a single focus dipole trap in which the condensate lifetime is 15-19 s. To determine the optimal ramp, we performed simulations using measured values of the trap lifetime, the atom number, and the spin interaction energy. The ramp is determined from a piece-wise optimization of the adiabatic parameter (d∆_E/dt)/∆_E^2 ≪ 1 and includes the effects of atom loss on c (Supplementary Information). The simulations show that it is possible to adiabatically cross the phase transition in ∼35 s starting with a condensate initially containing 40,000 atoms. The experiment starts with atoms at the GS in the polar phase at high magnetic field, q/|c| = 140. Then, the magnetic field is ramped through the QCP to q/|c| = 1.33 in 35 s along the trajectory represented by the green line in Fig. 3a. The measured evolution of the population ρ_0 is shown in the same graph and compared to that predicted by the simulation. The data show excellent agreement with the theoretical values for the evolving ground state population ρ_{0,GS} (Eq. 3), which provides a strong indication of adiabaticity. There are about 9000 atoms remaining after the adiabatic ramp. The theoretical value of the ground state population and uncertainty is ρ_{0,GS} = 0.833 ± 0.004, where the uncertainty is the SQL for 9000 atoms, projected onto the ρ_0-axis (∝ Q_z-axis in Fig. 1b) (Supplementary Information). Immediately after the adiabatic ramp (t ≈ 36 s), the measured mean population and fluctuations are ρ_0 = 0.830 ± 0.007, which are very close to the theoretical values, and further indicate adiabaticity. Following the adiabatic ramp, the ratio q/|c| = 1.33 is held constant for 2 seconds to verify that the system remains in the GS. As shown in Fig. 3b, the mean value of ρ_0 stays close to the theoretical value ρ_{0,GS}. In Fig. 3c, the uncertainty ∆ρ_0 is plotted. Although the measurements of ∆ρ_0 (red circle markers) trend above the theoretical SQL (dashed line) after holding, atom loss increases fluctuations in the spin populations (assuming uncorrelated losses) to the level shown in the green shaded region (Supplementary Information). For comparison, in Fig. 3d-e we show data from non-adiabatic ramps from q/|c| = 140 → 1.33. In Fig. 3d, a 1 s linear ramp is used, while in Fig. 3e, a 28 s ramp is used. In both cases, the spin population ρ_0 does not follow the theoretical GS population during the ramp and the fluctuations ∆ρ_0 grow dramatically. The fluctuations at the end of the 28 s ramp are compared with those from the adiabatic ramp in Fig. 3c (shown as blue square markers), and it is clear that the non-adiabaticity gives rise to increased fluctuations.
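The piecewise ramp optimization described above can be illustrated with the short sketch below. This is a hedged reconstruction, not the authors' code: gap(q) stands for the exact gap ∆_E(q) (e.g., from diagonalizing the quantum Hamiltonian), eps is a hypothetical adiabaticity margin, and the atom-loss correction to c that the authors include is omitted for brevity.

```python
import numpy as np

def build_ramp(gap, q_start, q_end, eps=0.05, n_pieces=100):
    """Piecewise-linear ramp q(t) that keeps the adiabatic parameter
    |d(Delta_E)/dt| / Delta_E**2 <= eps on every piece.  gap(q) returns
    the energy gap Delta_E(q); eps and n_pieces are illustrative choices,
    and the atom-loss drift of c during the ramp is omitted here."""
    qs = np.linspace(q_start, q_end, n_pieces + 1)
    times = [0.0]
    for q0, q1 in zip(qs[:-1], qs[1:]):
        d0, d1 = gap(q0), gap(q1)
        dmin = min(d0, d1)                      # worst-case gap on the piece
        # |dDelta/dt| ~ |d1 - d0| / dt  =>  dt >= |d1 - d0| / (eps * Delta**2)
        times.append(times[-1] + abs(d1 - d0) / (eps * dmin**2))
    return np.array(times), qs

# Toy usage: a gap that closes linearly toward the QCP at q/|c| = 2 but is
# floored by the finite-size value ~ N**(-1/3) (hypothetical model):
t, q = build_ramp(lambda q: max(abs(q - 2.0), 40000 ** (-1.0 / 3.0)), 140.0, 1.33)
print(f"total ramp time (in units of 1/|c|): {t[-1]:.1f}")
```

The ramp naturally slows down where the gap is smallest, which is why most of the 35 s is spent in the neighborhood of q/|c| = 2.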
Adiabatically crossing the QPT in a spin-1 zero magnetization condensate is predicted to generate massively entangled spin states [8]. Broadly speaking, this is an example of the fundamental principle underlying adiabatic quantum computing, in which the initial simple ground state, by tuning the Hamiltonian adiabatically through a QCP, transforms into a highly entangled ground state of the final Hamiltonian that is a solution to a computational problem [27]. In our case, under ideal conditions, the final state would correspond to a ring-shaped GS in the BA phase as shown in Fig. 1a. For a ramp to q = 0, the final state is predicted to be the Dicke state |S = N, S_z = 0⟩. In this study, we stop the adiabatic ramp at q/|c| = 1.33. The entanglement ξ of the GS at this q/|c| can be calculated as in Ref. [8]. The uncertainty in the transverse magnetization is ⟨Ŝ_x^2 + Ŝ_y^2⟩ ≈ 1 − (2ρ_0 − 1)^2. In the ideal case, the longitudinal magnetization is zero and conserved, (∆Ŝ_z)^2 → 0, and the expected entanglement is ξ = 0.56N, or roughly 5,000 atoms entangled out of 9,000 atoms at the end of the adiabatic ramp. However, atom loss induces noise in the magnetization, ∆S_z ≈ 0.5%, in our experiment. This small magnetization noise reduces the entanglement to ξ < 1 atom. In summary, we have explored the energy gap in small spin-1 condensates. The energy gap measurements show evidence of a nonzero gap at the QCP arising from finite size effects, and using a carefully tailored slow ramp of the Hamiltonian parameters, we have adiabatically crossed the QCP with no apparent excitation of the system. In future work, we hope to study these effects for different system sizes and to preserve and measure the entanglement of the system. We hope that this work stimulates similar investigations in related many-body systems, and in particular, we anticipate that the results of this study could directly inform investigations in double-well Bose-Josephson junction systems, (pseudo) spin-1/2 interacting systems [28], and the Lipkin-Meshkov-Glick (LMG) model [29], which share a similar Hamiltonian.
CHARACTERIZING THE ENERGY GAP AND DEMONSTRATING AN ADIABATIC QUENCH IN AN INTERACTING SPIN SYSTEM: SUPPLEMENTARY INFORMATION
EXPERIMENTAL SETUP
The experiment is carried out using small condensates of 40,000 atoms in the F = 1 hyperfine ground state of 87Rb. In the energy gap experiment, atoms are confined in a spherical optical dipole force trap with trap frequencies ∼ 2π × 140 Hz, formed by crossing the focus of a 10.6 µm wavelength laser with an 850 nm wavelength laser. This tight confinement ensures that the condensate is well described by the single mode approximation (SMA), such that the spin dynamics can be considered separately from the spatial dynamics [16-18]. The spin interaction energy is c ≈ −7.5(1) Hz and the trap lifetime is ≈ 1.6(1) seconds. Imaging protocol. The spin populations of the condensate are measured by releasing the trap and allowing the atoms to expand in a Stern-Gerlach magnetic field gradient to separate the m_F spin components. The atoms are probed for 200 µs with three pairs of counter-propagating orthogonal laser beams, and the fluorescence signal collected by a CCD camera is used to determine the number of atoms in each spin component.
ENERGY GAP MEASUREMENT
In our experiment, the condensate is prepared at a high magnetic field (q/|c| ∼ 38). This state has symmetric √N fluctuations in the S⊥, Q⊥ plane set by the Heisenberg uncertainty limit [19]. The system is subsequently quenched to a lower field in 2 ms. An rf pulse (transition between |m_F = 0⟩ ↔ |m_F = ±1⟩) is applied to prepare the system at the ρ_{0,GS} value, and a subsequent microwave pulse (transition between |F = 1, m_F = 0⟩ ↔ |F = 2, m_F = 0⟩) [19] is applied to rotate the spinor phase θ_s = θ_+ + θ_− − 2θ_0 to the GS position. After the initial state preparation, the condensate is allowed to evolve along the energy contours, as seen on the spheres in the main paper Fig. 1b. Coherent dynamics are observed through the time evolution of either the population ρ_0 or the transverse spin component S⊥. Fig. 4 shows all the measurements of coherent oscillations for different q/|c| values. Measuring S⊥. Note that a π/2-rf pulse can be used to rotate S_x into the S_z measurement axis. Details of this protocol are described in Ref. [19]. As quantum states evolve along their respective energy contours, the projection of the Heisenberg uncertainty limit onto the S⊥-axis oscillates, and the energy gap is measured through this oscillation frequency. Since we are unable to track the Larmor phase of the spin vector due to its fast dynamics, the Larmor phase is considered to be uniformly distributed in the S_x-S_y spin space. Therefore, the measurement of the standard deviation ∆S_x is equivalent to measuring ∆S⊥. In the main paper Fig. 2b, the condensate is prepared at the polar GS at a high magnetic field (q/|c| ∼ 38). While the mean-field GS (the |m_F = 0⟩ state) remains unchanged as the system is quenched to the QCP for S⊥ measurements, the initial √N quantum fluctuation projects the condensate to slightly excited states of the Hamiltonian at the QCP. Therefore, the state used in S⊥ measurements is a close approximation of the actual GS at the QCP, limited by the quantum noise. Sinusoidal fitting. To extract the oscillation frequency, the data are fitted to a sinusoidal function of the form ρ_0(t) = ρ_00 + A cos(2πf(t)t + φ) + k × t, as shown in Fig. 4. Fitting parameters include the initial population value, ρ_00, the oscillation amplitude, A, the initial phase, φ, and the drift of the GS population due to atom loss, k. Due to atom loss, the frequency is a function of time, f(t). Since f_0 ≡ ∆_E(0) and f(t) ≡ ∆_E(t), the fitted frequency tracks the time-dependent gap, where f_0 is the oscillation frequency at t = 0. The energy gap ∆_E(t) is calculated using the Hamiltonian in Eq. 4 with the spin interaction energy depending on the number of atoms as c(t) = c(0) × (N(t)/N)^(2/5). The oscillation frequency of the ∆S⊥ dynamics is extracted with a similar fit function. Energy gap raw data. The energy gap of the amplitude modes is obtained by fitting the data to the sinusoidal functions. Fig. 5 shows the energy gap summarized from all the frequency fits. The measurements with the lowest oscillation amplitude that provide reliable fits correspond to the closest measurement of the energy gap, as shown in the main paper Fig. 2c.
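A concrete version of this chirped sinusoidal fit might look like the following, assuming exponential atom loss N(t) = N(0)e^(−t/τ) so that f(t) = f_0 (N(t)/N(0))^(2/5) = f_0 e^(−2t/(5τ)). The exact time dependence used by the authors follows ∆_E(t) from Eq. 4, so this simple chirp form and the initial guesses are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

TAU = 1.6  # s, trap lifetime from the text; exponential atom loss assumed

def rho0_model(t, rho00, A, f0, phi, k):
    # Chirped sinusoid: the oscillation frequency tracks the drifting gap.
    # Assumed scaling f(t) = f0 * (N(t)/N(0))**(2/5) = f0 * exp(-2t/(5*TAU));
    # near the QCP the same factor reproduces the dq/|c| ~ 0.05 drift per
    # 100 ms quoted in the main text: 2*(exp(0.2/(5*1.6)) - 1) ~ 0.051.
    f_t = f0 * np.exp(-2.0 * t / (5.0 * TAU))
    return rho00 + A * np.cos(2.0 * np.pi * f_t * t + phi) + k * t

# Hypothetical usage on measured arrays t_data (s) and rho0_data:
# popt, _ = curve_fit(rho0_model, t_data, rho0_data,
#                     p0=[0.8, 0.05, 7.5, 0.0, 0.0])
# popt[2] is the fitted f0, i.e. the energy gap Delta_E at t = 0 (in Hz).
```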
ADIABATIC QPT
In the adiabatic QPT experiment, atoms are confined in the focus of the 10.6 µm wavelength laser with a trap lifetime of 15-19 s and a spin interaction energy c ≈ −2.1 Hz. Although the Thomas-Fermi radius of the condensate in the longitudinal direction in this trap is larger than the spin healing length, spin domains are unlikely to form since adiabaticity maintains the condensate in the GS, leaving no extra energy for domain formation. The single-mode approximation theory is still capable of describing the adiabatic process. In our experiment, the condensate is prepared at a high magnetic field (q/|c| = 140), then subsequently ramped to a final value using an optimal ramp. An optimized ramp of the magnetic field is produced piecewise through an iterative procedure using a semi-classical simulation [30,31]; in particular, we divide the whole ramp into 100 small linear pieces. To verify adiabaticity, one needs to compare the state after each linear ramp with the theoretical GS. Since the atom number N, spin interaction energy c, and quadratic Zeeman energy q vary after each ramp, it is very time consuming to numerically solve for the exact GS from the quantum Hamiltonian in Eq. 4. Instead, we use an adiabatic condition such that the uncertainty of the quantum states remains below the Heisenberg uncertainty limit with SQL = N^(−1/2). The shortest ramp time that satisfies this adiabatic condition is chosen, and this procedure continues until the final q/|c| value is reached. Atom fluctuation. We assume the atom fluctuation is equal for all spin components, m_F = 0, ±1. If the atom number fluctuation is ∆N, the uncertainty of the number of atoms in m_F = 0 is ∆N/√3 atoms, which translates into the uncertainty ∆ρ_{0,0} = ∆N/(N√3). The theoretical SQL_{ρ0} with the contribution from the number fluctuation becomes SQL_{ρ0,∆N} = √(SQL_{ρ0}^2 + (∆ρ_{0,0})^2). During the 2 s hold period, the fluctuation of the atom number is about 300 atoms with a mean atom number of 9000, which yields SQL_{ρ0} ≈ 0.004 and SQL_{ρ0,∆N} ≈ 0.022, as seen in the main paper Fig. 3c.
QUANTUM ENERGY GAP CALCULATION
The gapped excitation ∆_E can be obtained by computing the eigenvalues of the quantum Hamiltonian in the Fock basis |N_1, N_0, N_{−1}⟩, with N_i atoms in the |m_F = i⟩ state [8,18,30,32]; here δ_{k,k'} is the Kronecker delta function appearing in the matrix elements. With conservation of the total atom number N = N_1 + N_0 + N_{−1} and zero magnetization M = N_1 − N_{−1} = 0, the Fock basis can be represented using N and a variable k, which counts the number of pairs of atoms in the m_F = ±1 states. The energy gap is the excitation energy ∆_E between the lowest and first excited eigenstates.
MEAN-FIELD SPINOR ENERGY
From the main paper Eq. 1, the mean-field spinor energy can be written as Eq. 5. Since the spinor dynamics is constrained to the surface of the spin-nematic sphere (main paper Fig. 1b), we have the relationship of Eq. 6 [31]. At the GS, the expectation value of the quadrupole operator is Q⊥ = 0. Fixing the value Q⊥ = 0 in Eq. 6, we can expand the spinor energy in Eq. 5 around the GS as a function of the transverse spin S⊥ (Eq. 7); here the expectation value is S⊥ = √(S_x^2 + S_y^2). Minimizing Eq. 7, we obtain the GS expectation value S⊥ = 0 in the P phase and S⊥ = √(4c^2 − q^2)/(2|c|) in the BA phase. The values ρ_{0,GS} in the main paper Eq. 3 can be calculated from these S⊥ values. The Hamiltonian Eq. 7 has a global continuous SO(2) symmetry reflected in the shape of the spinor energy H in the main paper Fig. 1a.
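The Fock-basis gap calculation described above reduces to diagonalizing a tridiagonal matrix in the pair index k. The sketch below assumes the conventional single-mode form Ĥ = (c/2N)Ŝ^2 + q(N̂_1 + N̂_{−1}), which agrees with the Hamiltonian of Eq. 1 up to an additive constant; the matrix elements are derived from that assumed form rather than taken from Eq. 4.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def energy_gap(N, q, c):
    """Lowest excitation gap of H = (c/2N) S^2 + q (N_1 + N_-1) in the
    S_z = 0 pair basis |k> = |N_1 = k, N_0 = N - 2k, N_-1 = k>.  Here c is
    the spin interaction energy with the QCP at q = 2|c|."""
    k = np.arange(N // 2 + 1)
    n0 = N - 2 * k
    # diagonal elements: <k|S^2|k> = 4 k n0 + 2 k + 2 n0, plus Zeeman 2 q k
    d = (c / (2.0 * N)) * (4 * k * n0 + 2 * k + 2 * n0) + 2.0 * q * k
    # pair-excitation elements: <k+1|S^2|k> = 2 (k+1) sqrt(n0 (n0 - 1))
    kk = k[:-1]
    e = (c / (2.0 * N)) * 2 * (kk + 1) * np.sqrt((N - 2 * kk) * (N - 2 * kk - 1.0))
    lo = eigh_tridiagonal(d, e, eigvals_only=True, select='i', select_range=(0, 1))
    return lo[1] - lo[0]

c = -7.5  # Hz, the spin interaction energy quoted for the tight trap
for N in (100, 1_000, 10_000, 40_000):
    print(N, energy_gap(N, q=2 * abs(c), c=c) / abs(c))  # gap/|c| at the QCP
```

For N = 2 the routine reproduces the exact Ŝ^2 eigenvalues (S = 0 and S = 2), and at the QCP the printed gap shrinks roughly as N^(−1/3), consistent with the scaling quoted in the main text.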
SPIN INTERACTION ENERGY
The spin interaction energy is defined as c̃ = 2Nc with c ∝ N^(−3/5) [33]; therefore, c̃ ∝ N^(2/5). If q/|c| > 2, the population ρ_0 for a condensate initialized in the m_F = 0 state remains equal to 1 regardless of the dynamical evolution, ρ_0(t) = 1. To search for the critical point q/|c| = 2, we measure the population ρ_0 after a short evolution time (for instance 150 ms) for different magnetic field values with q = q_z B^2 (similar to an experiment carried out in Ref. [6]). The magnetic field value such that the population ρ_0 drops slightly below 1 corresponds to q/|c| = 2, and the spin interaction energy is calculated as c = −q_z B^2/2. We can even compare the ρ_0 data to simulations with different c values to determine c with ∼1% uncertainty, as shown in Fig. 6.
SIMULATION TOOLS
The details of the simulation method are well described in our previous works [30,31].
FIG. 1. a, Spin energy in the BA phase (q/|c| < 2), at the QCP (q/|c| = 2), and in the P phase (q/|c| > 2). In the P phase, there are two gapped modes (blue lines) along the radial direction about the GS. In the BA phase, the GS occupies a minimum energy ring (gray circle) with one gapped mode along the radial direction (blue line) and one NG mode (red line) in the azimuthal direction. b, The GS on the {S⊥, Q⊥, Q_z} unit spheres is represented by a red shaded region. Coherent orbiting (phase winding) dynamics are represented by red (green) curves and the blue curve is the separatrix. The magenta (black) arrow represents the RF (microwave) pulse used for the initial state preparation (Supplementary Information). c, The energy gap for 40,000 atoms (cyan curve) is calculated from the eigenvalues of the quantum Hamiltonian (Supplementary Information). d, The energy gap at the QCP (blue solid line) calculated from the eigenvalues of the quantum Hamiltonian matches the GS oscillation frequencies from simulations (circle markers). (Color online)
FIG. 3. Adiabatic and nonadiabatic dynamics. a-c, Adiabatic dynamics of the population ρ_0 and the uncertainty ∆ρ_0 (see text). d-e, Non-adiabatic dynamics of the population ρ_0 for 1 s and 28 s linear ramps from q/|c| = 140 → 1.33. Red circle (blue square) markers are the adiabatic (nonadiabatic) measurements, gray shaded regions are the theoretical ρ_{0,GS} values, green arrow lines represent the ramp of q/|c| (vertical axis on the right), and cyan curves represent the dynamics simulation of ρ_0 with the corresponding q/|c| ramp. Each adiabatic data point is an average of 3 measurements and each non-adiabatic data point is an average of 15 measurements. (Color online.)
FIG. 4. Coherent oscillation data (circle markers) for different q/|c| values are fitted to sinusoidal functions. For each value q/|c|, several measurements of the population ρ_0 are made for a series of initial states approaching the GS. (Color online.)
FIG. 5. Markers represent the energy gap ∆_E for different q/|c| values, which are obtained from the frequency fits of the coherent oscillation raw data. The theoretical energy gap is represented by the purple curve. (Color online.)
FIG. 6. An example of a spin interaction energy measurement. The spin interaction energy, c, is determined by measuring the population ρ_0 (black markers) after 150 ms of evolution for different magnetic field values, B. Simulation results for different spin interaction energy values, c, are represented by the different color curves with corresponding c values labeled in the same colors. (Color online.)
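As a quick numerical illustration of this calibration, the critical field follows directly from q = q_z B^2 and the QCP condition q = 2|c|, using the q_z and c values quoted elsewhere in the paper:

```python
import numpy as np

# q = q_z * B^2 and the QCP sits at q/|c| = 2, so B* = sqrt(2|c| / q_z);
# inverting the same relation gives c = -q_z * B*^2 / 2.
q_z = 72.0   # Hz/G^2, quadratic Zeeman coefficient for 87Rb F = 1 (from the text)
c = -7.5     # Hz, spin interaction energy of the tight trap (from the text)
B_star = np.sqrt(2 * abs(c) / q_z)
print(f"critical field B* ~ {B_star:.3f} G")            # ~0.456 G
print(f"recovered c = {-q_z * B_star**2 / 2:.2f} Hz")   # -7.50 Hz, consistency check
```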
2015-12-21T18:48:17.000Z
2015-12-21T00:00:00.000
{ "year": 2015, "sha1": "df43840e9fd17492bdc69ad136ff3ccbc5fce990", "oa_license": null, "oa_url": "https://www.pnas.org/content/pnas/113/34/9475.full.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "521f376e81449b2046b79320fda641262a333e6a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
267682800
pes2o/s2orc
v3-fos-license
The effect of incentive spirometry in perioperative patients with lung cancer—a systematic review and meta-analysis
Background Incentive spirometry (IS) as a routine respiratory therapy during the perioperative period has been widely used in clinical practice. However, the impact of IS on patients with perioperative lung cancer remains controversial. This review aimed to evaluate the efficacy of IS in perioperative pulmonary rehabilitation for patients with lung cancer. Methods The Cochrane Library, PubMed, Web of Science, Ovid, CINAHL, Chinese National Knowledge Infrastructure, Weipu, and Wanfang databases were searched from inception to 30 November 2023. Only randomized controlled trials were included in this systematic review. The PRISMA checklist served as the guidance for conducting this review. The quality of the included studies was assessed by the Cochrane risk-of-bias tool. The meta-analysis was carried out utilizing Review Manager 5.4. Furthermore, sensitivity analysis and subgroup analysis were also performed. Results Nine studies recruiting 1209 patients met our inclusion criteria. IS combined with other respiratory therapy techniques was observed to reduce the incidence of postoperative pulmonary complications, enhance pulmonary function, curtail the length of hospital stay, and lower the Borg score. Nevertheless, no improvements were found in the six-minute walk distance or quality of life score. Conclusions Although IS demonstrates benefits as a component of comprehensive intervention measures for perioperative patients with lung cancer, it proves challenging to determine the precise impact of IS as a standalone component within the comprehensive intervention measures. Therefore, further research is required to better understand the effectiveness of IS in isolation and its interactions when integrated with additional respiratory therapies for these patients. Clinical trial registration PROSPERO, https://www.crd.york.ac.uk/prospero/, registry number: CRD42022321044. Supplementary Information The online version contains supplementary material available at 10.1186/s12890-024-02878-1.
Introduction
According to the Global Cancer Statistics 2020, lung cancer is the second most common cancer and the leading cause of cancer-related deaths. It is estimated that there are 2.2 million new cases and 1.8 million deaths, accounting for 11.4 and 18.0% of diagnosed cancers and deaths, respectively [1]. Surgery is still considered the primary therapy for the majority of patients diagnosed with stage I-III non-small cell lung cancer (NSCLC) [2]. Nonetheless, about 40% of patients experience postoperative pulmonary complications (PPCs) because of surgical trauma and pulmonary pathophysiological alterations in the perioperative phase [3]. PPCs have not only led to fatalities in approximately 85% of these patients, but have also played a significant role in prolonging hospital stays and readmissions to the intensive care unit (ICU) [4]. These complications are widely defined as pneumonia, atelectasis, pleural effusion, pneumothorax, respiratory tract infection, bronchospasm, respiratory failure requiring invasive or non-invasive mechanical ventilation, and so on [5-7]. Therefore, it is vital that clinical practitioners implement effective interventions to prevent the occurrence of PPCs.
The therapy of pulmonary expansion can enable patients to maintain an effective cough mechanism and promote the clearance of postoperative respiratory secretions. Incentive spirometry (IS) is a mechanical device that promotes lung expansion [8]. Its aim is to simulate natural sighs or yawns by encouraging patients to take long, slow, deep breaths, reducing pleural pressure, promoting pulmonary expansion, and promoting gas exchange [9]. While physiological evidence suggests that IS could potentially benefit lung re-expansion following surgery, there exists a certain level of controversy among studies regarding its impact on the incidence of PPCs and the length of hospital stays [10-12]. Although a previous meta-analysis [13] has addressed the effect of IS in patients undergoing cardiac, thoracic, and upper abdominal surgeries, that study included a total of 31 articles, of which only 6 were relevant to thoracic surgery. Upon careful examination of these 6 studies, it becomes apparent that only 2 studies centered on lung resection [11, 14], 2 studies [12, 15] observed both lung and esophagus surgeries, while another 2 studies [16, 17] investigated the application of IS in the realm of abdominal surgery. However, owing to the lung resection itself, the effect of IS may differ from that in other thoracic or abdominal surgeries. Furthermore, certain valuable Chinese studies were not included in that meta-analysis [18-22]. Therefore, it is hard to draw a conclusion about the effect of IS on perioperative lung cancer surgery patients. The present study aimed to synthesize existing evidence to identify the impact of IS on the perioperative period of lung cancer surgery, to provide substantive evidence for clinical practitioners to implement IS into clinical practice, and to improve the prognosis of these patients.
Methods
This meta-analysis was performed in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines [23], and registered in PROSPERO (CRD42022321044).
Eligibility and exclusion criteria
The inclusion criteria were according to PICOS (Participants, Intervention, Comparison, Outcomes and Study type): (1) Participants (P): adults (aged ≥ 18 years) who were diagnosed with lung cancer during the perioperative phase; (2) Intervention (I): the experimental group received IS alone or in combination with other physical therapies; (3) Comparison (C): the control group received routine care or other physical therapies; (4) Outcomes (O): PPCs, pulmonary function, the length of hospital stay (LOS), Borg score, the six-minute walk distance (6MWD) or quality of life (QoL); (5) Studies (S): randomized controlled trials; (6) Language: publications in either the Chinese or English language. Review articles, letters, comments, case reports, conference abstracts and studies with unavailable full text were excluded. We also retrieved the references of the included studies, which were meticulously scrutinized to uncover other potentially eligible studies.
Search strategy
We performed a computer-based search in the Cochrane Central Register of Randomized Controlled Trials, PubMed, Web of Science, Ovid, CINAHL, Chinese National Knowledge Infrastructure, Weipu and Wanfang databases. The database entries were searched from inception to 30 November 2023. The details of the search strategy are provided in Supplementary Material 1.
Study selection
Two authors (YL, JMS) individually screened the available studies. Eligibility was verified based on information from the title and abstract; we then assessed the full text of potential studies to identify whether they fitted the inclusion criteria. Decisions by the 2 authors were compared and any discrepancies were resolved by a third author (SLC).
Data extraction
Data extraction was performed by 2 authors (YL, JMS). The following data were extracted: authors, publication year, journal, the characteristics of the population, sample size, primary and secondary outcomes, duration and frequency of intervention, and so on.
Quality appraisal
The risk of bias and quality of the included studies were assessed using the Cochrane risk of bias assessment tool [24]. The tool addresses 7 specific domains of potential bias: sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective outcome reporting, and other biases. The risk of bias assessment was performed for all the included studies individually by 2 authors (YL, JMS); a third author (SLC) was available to resolve any disagreements.
Statistical analysis
Review Manager 5.4 was employed for statistical analysis and to generate forest plots. The pooled estimates of intervention effect for dichotomous outcomes were quantified using the odds ratio (OR) with a 95% confidence interval (95% CI), while the mean difference (MD) with a 95% CI was utilized to quantify continuous outcomes. Forest plots were created to elucidate the effect size. We compared the intervention and control groups and employed the following indicators of the intervention's effect: the OR with a 95% CI was utilized to quantify the effect of the intervention on PPCs, while the MD with a 95% CI was utilized to summarize the average values with standard deviations for pulmonary function, Borg score, length of stay (LOS), and quality of life (QoL) score for the other outcome measures. The statistical heterogeneity of intervention effects was evaluated using the I² statistic and Cochran's Q test. In instances where heterogeneity was significant (I² > 50%), a random effects model was employed; otherwise, a fixed-effect model was utilized. We performed a sensitivity analysis to assess the stability of the outcomes and to identify the source of heterogeneity. Some studies incorporated IS as a component of the intervention. To evaluate the effectiveness of the intervention, which was predominantly centred on IS, in reducing postoperative pulmonary complications, we conducted a subgroup analysis on the implementation of IS combined with other respiratory therapy techniques. In order to explore the impact of IS and various interventions on the key pulmonary outcomes across different countries, we undertook a subgroup analysis incorporating studies conducted in China and other countries. Publication bias was evaluated using funnel plots.
Study selection
A total of 2273 studies were initially retrieved, comprising 1055 English records and 1218 Chinese records respectively. After the removal of duplicates, 1538 records remained. 1416 articles were excluded after screening titles and abstracts, and 122 articles were retained for the full-text analysis. Finally, nine studies involving 1209 lung cancer patients were included in the meta-analysis [11, 14, 18-22, 25, 26] (Fig. 1 PRISMA flow chart of study selection).
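For readers who want to reproduce the pooling logic outside Review Manager, a minimal sketch is given below. It implements the inverse-variance fixed-effect estimate, Cochran's Q and I², the switch to a DerSimonian-Laird random-effects model when I² > 50%, and the leave-one-out sensitivity analysis used later in the Results; the 2×2 counts at the bottom are placeholders, not the review's data.

```python
import numpy as np

def pool_or(events_t, n_t, events_c, n_c):
    """Inverse-variance pooled odds ratio with 95% CI and I^2.  Inputs are
    per-study 2x2 counts; zero cells would need a 0.5 continuity
    correction, omitted here for brevity."""
    a = np.asarray(events_t, float); b = np.asarray(n_t, float) - a
    c = np.asarray(events_c, float); d = np.asarray(n_c, float) - c
    log_or = np.log((a * d) / (b * c))       # per-study log odds ratios
    var = 1/a + 1/b + 1/c + 1/d              # Woolf variance of log OR
    w = 1.0 / var
    pooled = np.sum(w * log_or) / np.sum(w)  # fixed-effect estimate
    Q = np.sum(w * (log_or - pooled) ** 2)   # Cochran's Q
    k = len(log_or)
    I2 = max(0.0, 100.0 * (Q - (k - 1)) / Q) if Q > 0 else 0.0
    if I2 > 50.0:                            # significant heterogeneity:
        # DerSimonian-Laird between-study variance, then re-pool
        tau2 = max(0.0, (Q - (k - 1)) / (w.sum() - (w**2).sum() / w.sum()))
        w = 1.0 / (var + tau2)
        pooled = np.sum(w * log_or) / np.sum(w)
    se = 1.0 / np.sqrt(np.sum(w))
    return np.exp([pooled, pooled - 1.96 * se, pooled + 1.96 * se]), I2

def leave_one_out(events_t, n_t, events_c, n_c):
    """Leave-one-out sensitivity analysis: re-pool after dropping each
    study in turn, as done for single influential studies in the Results."""
    for i in range(len(events_t)):
        keep = [j for j in range(len(events_t)) if j != i]
        sub = [np.asarray(x)[keep] for x in (events_t, n_t, events_c, n_c)]
        (or_, lo, hi), i2 = pool_or(*sub)
        print(f"dropping study {i}: OR={or_:.2f} "
              f"(95% CI {lo:.2f}-{hi:.2f}), I^2={i2:.0f}%")

# hypothetical counts (events / total per arm), NOT the review's data:
leave_one_out([4, 6, 3, 9], [40, 50, 35, 60], [10, 12, 8, 15], [40, 50, 35, 60])
```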
Study characteristics
All included studies were single-center RCTs; six trials were conducted in China [18-22, 26], and the remaining trials were performed in the UK [11], South Korea [25] and Canada [14]. Among the eligible RCTs, one study entailed preoperative intervention [21], while five studies involved postoperative intervention [11, 14, 18, 22, 25]. Additionally, three trials encompassed intervention during the perioperative period [19, 20, 26]. The duration of intervention ranged from 1 to 4 weeks. However, various studies utilized different interventions in the control or intervention groups. Two studies [18, 25] compared IS with or without the combination of other devices, while two studies [14, 19] assessed the effects of routine physiotherapy with or without IS. Additionally, three studies [20, 22, 26] compared IS combined with other devices to breathing training, one study [11] compared IS with thoracic expansion exercises, and another [21] compared IS with routine breathing training. The details of all included studies are summarized in Table 1 (Table 1 The characteristics of the included studies).
Quality assessment
All trials endeavored to randomize patients into an intervention group and a control group, but some of them failed to specify the exact details of the randomization procedures. Due to the nature of IS, it was difficult to achieve blinding of patients and personnel, causing studies with low risk of bias to be rare. The results of the risk of bias evaluation for the included trials are summarized in the risk of bias graph (Fig. 2 Risk of bias). To evaluate the dependability and robustness of the meta-analysis, a sensitivity analysis was performed. For PPCs, the heterogeneity significantly decreased (I² = 0%, p = 0.66) when we removed Peter R. A. et al.'s study [14]. The adjusted pooled estimates, however, had not changed significantly (OR = 0.33, 95% CI: 0.20-0.53, p < 0.000001), indicating that this study was the primary cause of the heterogeneity. This examination implied that the outcomes of this meta-analysis were fairly robust. The heterogeneity for respiratory insufficiency significantly decreased (I² = 0%, p = 0.99) when we removed Peter R. A. et al.'s study [14], but the adjusted pooled estimates changed significantly (OR = 0.31, 95% CI: 0.13-0.73, p = 0.008). This indicated that the stability of this meta-analysis was inadequate to demonstrate the statistical significance of the difference in risk of postoperative respiratory dysfunction between the two groups. For the 6-MWD (Fig. 5(b) Forest plot of 6-MWD), there was a significant decrease in heterogeneity (I² = 0%, p = 0.59) following the removal of LIU Xiang et al.'s study [21], resulting in substantial changes in the adjusted pooled estimates (MD = 17.05, 95% CI: 7.53-26.57, p = 0.0004), which further suggested that this study was the primary contributor to the heterogeneity. Three studies [11, 14, 25] delineated the impact of the intervention on the LOS. There was moderate heterogeneity in LOS (I² = 38%, p = 0.20), implying that the intervention is proficient in curtailing the LOS (MD = −0.76, 95% CI: −1.35 to −0.17, p = 0.01) (Fig. 5(c) Forest plot of LOS). The analysis of two studies [18, 22] demonstrated evidence of high heterogeneity for the QoL (I² = 97%, p < 0.00001). Therefore, we selected the random effects model for the analysis (MD = 0.29, 95% CI: −9.41 to 9.99). It is noteworthy that the studies conducted by Y. J. Cho and ZHU Li et al.
encompassed the inclusion of IS in both the control and intervention groups. In order to further examine whether they would exert an impact on the outcomes, we also conducted a sensitivity analysis on this aspect. The findings demonstrate that, when compared to the preceding meta-analysis, there were no alterations in the direction of any of the research outcomes. This signifies that the impact of these two studies on the results is not significant, thereby attesting to the stability of the meta-analysis results.
Subgroup analysis
For the subgroup analysis of PPCs by various interventions, we found that the subgroup of IS with routine physiotherapy showed no difference between the two groups. Nevertheless, for IS with a vibration expectoration vest, a significant difference was noted between the two groups. For the subgroup analysis of PPCs by different countries, the subgroup of other countries exhibited no difference between the two groups. However, in the subgroup of China, there was a significant difference between the two groups (Fig. 6(b) Forest plot of subgroup analysis of PPCs of different countries).
Publication bias
Herein, the incidence of PPCs and pneumonia was analyzed using funnel plots. The funnel plot of PPCs,
Discussion
To our knowledge, this is the first systematic review that solely comprises RCT data to analyze the effects of IS alone or combined with other respiratory therapy techniques on perioperative lung cancer patients. The findings of this study indicate that IS combined with other respiratory therapy techniques may provide several benefits to lung cancer patients undergoing surgery, as it can reduce PPCs and LOS, improve pulmonary function, and decrease the Borg score. However, due to the limited number of RCTs and the restricted set of outcome measures used in this analysis, it is challenging to determine the efficacy of IS alone in perioperative patients with lung cancer. The nine included studies had varying intervention timelines, with preoperative interventions lasting 1 week [21], postoperative interventions lasting from 5 days to 1 month [11, 14, 18, 22, 25], and perioperative interventions lasting 2 weeks [19, 20, 26]. The intervention modalities in our analysis differed as well. Therefore, we believe that further comprehensive evaluation is necessary to assess the impact of IS alone on perioperative lung cancer patients. The incidence of PPCs leads to an escalated mortality rate, prolonged hospitalization, and an augmented readmission rate [27, 28]. Therefore, effectively preventing PPCs after lung cancer surgery is crucial for the prognosis of patients. The utilization of IS in pulmonary rehabilitation serves as a valuable instrument in respiratory exercise, with the aim of mitigating or reducing PPCs and facilitating pulmonary rehabilitation [28]. Some studies suggest that IS may be more effective than non-interventional physical therapy [10, 29]. Our meta-analysis reveals that IS combined with other respiratory therapy techniques can decrease the incidence of overall PPCs (Fig.
3(a) Forest plot of PPCs). However, only one study each was available for IS alone, IS with acapella, or IS with OPEPD, so subgroup analyses were not feasible. Among these studies, IS with acapella did not reach statistical significance, IS with OPEPD showed a significant difference, and the odds ratio for IS alone had a 95% confidence interval approaching 0.9. Therefore, it is challenging to determine the impact of IS in isolation or in combination with acapella or OPEPD on PPCs. Additionally, in the subgroup analysis, no significant difference was found for IS with routine physiotherapy, while a significant positive impact was observed with IS combined with a vibration expectoration vest (Fig. 6(a) Forest plot of subgroup analysis of PPCs of various interventions). The differences observed may stem from the limited number of included studies and methodological variations, including differences in IS intervention implementation and sample size discrepancies across subgroups. Further explorations are needed to understand the potential independent effects of IS, and the synergistic or antagonistic effects resulting from the integration of IS with other respiratory therapy techniques. In addition, the subgroup analysis of PPCs in China and other countries showed a difference (Fig. 6(b) Forest plot of subgroup analysis of PPCs of different countries). Firstly, this disparity may be attributed to a multitude of factors such as patient characteristics, environmental elements, genetic diversity, and so forth, existing within different countries. Secondly, potentially disparate treatment and care measures across countries may influence postoperative outcomes, stemming from varied medical practices. Other factors, including sample size, the quality of study design, and characteristics of the study population, may also exert an impact on the results. It is noteworthy that further research is imperative to ascertain and validate the explanation behind such disparities. IS facilitates the augmentation of patients' postoperative volitional respiratory capacity, enhancing alveolar gas exchange function by increasing respiratory muscle activity, thereby improving pulmonary capacity and ameliorating lung function [30]. It has been substantiated to ameliorate postoperative pulmonary function in several studies. For instance, Kundra et al. [31] noted a noteworthy improvement in pulmonary function following preoperative IS (P < 0.05); moreover, preoperative IS was found to be more efficacious in preserving pulmonary function compared to postoperative IS. A randomized trial investigating postoperative outcomes in patients who underwent laparotomy showed that both volume-oriented and flow-oriented IS effectively ameliorated pulmonary function [32]. However, this review only demonstrated that IS combined with other respiratory therapy techniques can enhance pulmonary function in patients undergoing lung cancer surgery. It remains challenging to establish the isolated effect of IS on pulmonary function.
Nonetheless, although this study exhibited an improvement in FEV1 values, FVC%, and MVV due to the intervention, there is inadequate evidence to substantiate a significant improvement in FEV1% due to the significant heterogeneity among the four studies. Two studies [11, 25] conducted the interventions after surgery and found no statistically significant difference in FEV1% between the intervention group and the control group. On the other hand, the other two trials [19, 21] analyzed the effects of perioperative and preoperative intervention and identified significant improvements in FEV1% among patients. The six-minute walk test (6MWT) is a highly valuable tool for assessing the pulmonary functional training capacity of individuals afflicted with pulmonary ailments, given its proximity to everyday life, simple setting, ease of acceptance and implementation by patients, and superior ability to reflect the patient's daily life capacity. As a result of these advantages, the 6MWT is widely utilized in clinical settings [33]. However, the meta-analysis found that IS did not improve the 6MWD [19, 21, 22]. Through sensitivity analysis, we found that the article by LIU Xiang et al. [21] was a source of heterogeneity, as in that study the 6MWD 1 week after surgery was significantly higher than in the other two studies, which may lead to bias. The 6MWT is typically combined with the Borg scale to evaluate the pulmonary functional capacity of patients. The results of this study exhibited that IS combined with other respiratory therapy techniques can reduce the Borg score. The results of this study also showed that IS combined with other respiratory therapy techniques can effectively reduce hospitalization time in lung cancer surgery patients. However, there is moderate heterogeneity among studies, possibly because the studies were implemented in different countries, with significant differences in routine hospitalization time for surgical patients and in the intervention measures. Similar studies, such as Oliveira et al.'s research [34], demonstrated that respiratory muscle training improved pulmonary function and shortened postoperative hospital stays. Two studies [18, 22] reported the effects of the intervention on postoperative QoL, both of which evaluated postoperative QoL in lung cancer patients using the revised version of the quality of life questionnaire for lung cancer patients. The analysis found that IS did not improve postoperative quality of life scores in lung cancer patients.
We discovered that despite the simplicity, accessibility, and cost-effectiveness of IS, most studies did not focus on the compliance and standardization of IS. As we know, compliance is crucial for the effectiveness of interventions. Inadequate training and insufficient self-administration of IS may leave postoperative complications unresolved. A nationwide survey of healthcare providers found that out of 1681 respondents, 86% believed that patient compliance was poor. The primary reasons are patients forgetting how to utilize IS devices (83.5%; 1404 respondents), ineffectively using them (74.4%; 1251 respondents), and insufficient frequency of usage (70.7%; 1188 respondents) [35]. Therefore, it is imperative that we enhance patient compliance with IS and provide standardized instruction in future trials. The guidelines suggest that instructing clients and other healthcare providers in the technique of IS may facilitate the patient's proper usage and promote adherence [36]. Furthermore, a potential strategy to enhance adherence and technique could involve educating patients in using the device prior to surgery, instead of postoperatively, when the patient may be unable to concentrate effectively. Although our meta-analysis shows strong evidence, there are several limitations. Firstly, we only included studies in the Chinese and English languages. Secondly, there was a certain degree of publication bias. Thirdly, there are many factors that may lead to clinical heterogeneity, including divergent characteristics of the participants, intervention measures, and study designs. Most clinical trials failed to blind patients and participants, as well as outcome assessment, which may also lead to methodological heterogeneity. Finally, various studies employed different interventions in the control or intervention groups, highlighting a lack of standardized implementation of IS across these studies, and few eligible studies were included for each outcome indicator with the same interventions. We also attempted to analyze the effects of IS alone. However, this approach carries significant limitations, and it proves challenging to ascertain the individual efficacy of IS. Therefore, further research is needed to investigate this issue.
Conclusion
In this meta-analysis, IS combined with other respiratory therapy techniques can reduce the incidence of PPCs, especially pneumonia, improve predicted FVC%, FEV1, and MVV values, as well as reduce the postoperative Borg score and shorten the hospitalization duration. However, the majority of research designs incorporate IS as an integral component of the intervention measures. Moreover, the specific impact of IS within the comprehensive intervention plan eludes extraction and quantification. Hence, it is challenging to ascertain the precise effects of the use of IS in isolation in this cohort. Future studies with large cohorts should focus on exploring this issue, enhancing compliance, and facilitating optimal postoperative pulmonary rehabilitation in patients with perioperative lung cancer.
Fig. 1 PRISMA flow chart of study selection
Fig. 5 Forest plot of other outcomes. a Borg scores; b six-minute walk distance (6-MWD); c the length of hospital stays (LOS); d quality of life (QoL)
Fig. 6 Forest plot of subgroup analysis of PPCs. (a) subgroup analysis of various interventions; (b) subgroup analysis of different countries
2024-02-16T14:05:07.177Z
2024-02-15T00:00:00.000
{ "year": 2024, "sha1": "bcd7061a206df350f8d7028ab3f11574a400c884", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "83fc70e50065de7b292d0002ece6034f67108645", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
72973119
pes2o/s2orc
v3-fos-license
A case series of necrotizing pneumonia due to community acquired MRSA
Community-acquired pneumonia (CAP) due to Methicillin Resistant Staphylococcus Aureus (MRSA) is uncommon. In this case series, we describe four young immune-competent healthy males presenting with severe respiratory distress progressing to necrotizing pneumonia. All required ventilation and recovered without sequelae. One patient developed myositis and the other three developed pleural effusions and pneumothoraxes, two of whom needed intercostal tube (ICT) insertion and drainage. These cases highlight community acquired MRSA in developing countries where antibiotics are frequently used empirically with little laboratory guidance.
Introduction
We describe four young immune-competent males with community acquired necrotizing pneumonia due to MRSA, which is infrequently seen. All of them presented within two months, between April and May 2013, to Teaching Hospital Anuradhapura (THA).
First Patient
A 14-year-old healthy adolescent from Thalawa was admitted to the surgical casualty in April 2013 following a swelling of the right knee after a fall. A medical referral was made for shortness of breath. There was a preceding history of mild and intermittent fever for three days. It was accompanied by runny nose, arthralgia, myalgia and one episode of haemoptysis, but no cough or wheezing. Shortness of breath was acute, developing within two to three hours. There was no external injury to the knee joint, which was aspirated twice, yielding scanty aspirate that was clear, acellular and sterile. On examination, he was ill, dyspnoeic, and febrile but not cyanosed, with a GCS of 15/15. His respiratory rate was 60 breaths per minute with intercostal recession. In the lower zone of the right lung, the percussion note was dull, vocal resonance was reduced and breath sounds were diminished. His pulse rate was 146 beats per minute (BPM) and blood pressure was 85/50 mmHg. His blood count revealed a leukocytopaenia and thrombocytosis. The initial chest X-ray (CXR) revealed patchy opacities in the right lower zone, later progressing to cavities and pleural effusion. His ESR was 113 mm in the first hour while CRP was 22.5 mg/dl. His blood urea was 1.4 mmol/L. His blood culture was positive for MRSA on the second day. Two pockets of fluid in the right pleural cavity were aspirated, yielding sterile blood-stained thick fluid. Contrast enhanced computerized tomography of the chest confirmed the multifocal infection with pneumatocele formation and two discrete collections of fluid in the right pleural space. He was transferred to the medical intensive care unit for intubation and IPPV. Initially he was treated with Ceftriaxone, Levofloxacin and Oseltamivir, and once the blood culture report was available, medications were changed to Vancomycin and Clindamycin. As the fever persisted, Linezolid was added. There was defervescence and the patient's clinical condition improved, leading to discontinuation of ventilation and extubation.
Second Patient
A twenty-six-year-old army officer from Nandikadal was admitted to General Hospital Vavuniya (GHV) in April 2013 with a three-day history of fever, arthralgia, myalgia and runny nose, with a low platelet count and leucopenia. At GHV his platelet count decreased to 10 × 10^9/L and his haematocrit increased to 46%, and he was managed as a probable Dengue infection. Subsequently the patient developed a cough, with no defervescence after five days. He was given Meropenem and Clarithromycin. Bilateral coarse crepitations with absent breath sounds in the left middle and lower zones were heard on auscultation. Pneumothorax was diagnosed, an intercostal tube was inserted into the left pleural cavity, and the patient was transferred to THA. On admission he was severely dyspnoeic, with a respiratory rate of 33 breaths per minute. He was conscious and rational with a GCS of 15/15. He was icteric, pale and febrile, with desquamation of the skin of the palms and soles. His pulse rate was 136 BPM and his blood pressure was 80/50 mmHg. Blood count revealed leukocytopenia. His CXR revealed patchy opacities with cavities in both lungs and bilateral pneumothorax. Clarithromycin was stopped and Levofloxacin was added. Dengue IgM antibodies were negative while IgG antibodies were positive, and blood culture was positive for MRSA on the second day. His blood urea was 206 mmol/L. Oral Metronidazole and Cefoperazone were added to the Meropenem and Levofloxacin combination. An IC tube was inserted into the right pleural cavity in addition to the one on the left. Once blood culture became positive, Vancomycin was administered. He was given IV Vancomycin for 14 days. On discharge the patient was afebrile and his ESR was normal.
Third Patient
A 39-year-old previously healthy male was admitted from Anuradhapura Town in June 2013 with shortness of breath and severe myalgia of both lower limbs for one day. He had developed right shoulder pain a week earlier and had taken treatment from a general practitioner. He also complained of passage of dark urine. There was no associated fever. On examination he was ill-looking and dyspnoeic, with a respiratory rate of 36 breaths per minute, but was conscious and rational. His pulse rate was 124 BPM and blood pressure was 80/50 mmHg. There were bilateral crepitations in the lung bases. Lower limb muscle power was grade 3/5 proximally and distally on both sides, with absent knee jerks but present ankle jerks bilaterally. Arterial blood gas on admission showed partially compensated respiratory alkalosis. CXR showed bilateral patchy opacities. His blood urea was 4.9 mmol/L. His blood and sputum cultures were positive for MRSA on the fourth and fifth days after admission, respectively. His CPK was 1314, but serum creatinine and blood urea were normal throughout. Initially he was ventilated with CPAP in the ward and subsequently with IPPV in the MICU. Initially he was administered Meropenem and Levofloxacin. After the blood culture report was available, medications were changed to Clindamycin. Daily MRSA decolonization was performed.
Fourth Patient
A 55-year-old male was admitted from Vijithapura in July 2013 with pain in the right side of the neck radiating to the right upper limb for 10 days. On examination, he had an inverted supinator jerk in the right upper limb. After cervical X-rays, a diagnosis of cervical spondylosis was made and the patient was transferred to the Rheumatology ward. During his two-week stay in the Rheumatology ward he suddenly developed dyspnoea and became restless. He was conscious and lucid but in respiratory distress, with a respiratory rate of 55 breaths per minute, and there were bilateral crepitations on chest auscultation. His pulse rate was 110 beats per minute and blood pressure was 100/60 mmHg. His first CXR revealed a right mid-zone cavity. His blood urea was 6 mmol/L. He was put on CPAP, and Meropenem and Levofloxacin were administered. The following day he was transferred to the MICU for ventilator support. A repeat CXR at the MICU showed a large cavity in the right mid zone and a right-sided pneumothorax. With MRSA pneumonia suspected, his medications were changed to Vancomycin and Clindamycin. An IC tube was inserted, and blood cultures subsequently became positive for MRSA sixteen days after admission and two days after transfer from the Rheumatology ward. Antibiotics were continued for 10 days and he improved gradually.
Discussion
All four patients in this case series were young, immunocompetent males presenting with a short history of non-specific symptoms. None had high fever, but all developed respiratory distress acutely. The first two had a prodrome of influenza-like illness, the third had shortness of breath, and the fourth had pain in the shoulder and arm. Initially the second patient was suspected to have a dengue-like viral fever complicated by acute respiratory distress syndrome. All of them were sick enough to be admitted to the MICU, and all required assisted ventilation. The first, second and fourth patients developed pneumothoraxes and effusions, and the second and fourth required ICT insertion and drainage; the second patient required bilateral ICT insertion. All had patchy consolidation, with three of them having pneumatocoeles, intrapulmonary abscesses, and pleural effusion, suggesting necrotizing pneumonia. The term necrotizing is used to differentiate pulmonary necrosis with multiple small abscesses from a single large abscess with or without cavitation.
Figure 2 (A) X-ray taken on admission showing multiple patchy shadows in bilateral lung fields; (B) X-ray taken on the second day showing formation of a cavity in the right upper zone (arrow); (C) X-ray taken seven days after admission showing formation of multiple air-filled cavities in the left lung field (arrow).
All of them responded to Linezolid, Vancomycin or Clindamycin without serious sequelae.
All of them presented within four months from different geographical areas in the Anuradhapura and Mullaitivu Districts. No necrotizing pneumonia patients were admitted after that period (up to November 2013). The fourth patient may have acquired the organism while in the Rheumatology ward, but his clinical findings were strikingly similar to those of the other three patients. None of them had skin sepsis. Until recently, pneumonia caused by community-acquired MRSA was considered an uncommon entity that occurred primarily in patients with influenza (1). Community-acquired MRSA can cause skin and soft tissue infections and invasive disease such as purpura fulminans, osteomyelitis and necrotizing pneumonia (1,2,3). Panton-Valentine leukocidin (PVL), an extracellular toxin that destroys white blood cells, is produced by some strains of Staphylococcus aureus and is named after the two scientists who first described it (4,7). Pneumonia caused by PVL-secreting S. aureus seems to be a specific disease entity, causing necrotizing community-acquired pneumonia (CAP) with a death rate of 75% (1,6). PVL can be secreted by methicillin-sensitive or methicillin-resistant S. aureus (4,5). PVL comprises two subunits, F and S, which bind and assemble on the neutrophil membrane, causing pore formation (4). These are called synergohymenotropic toxins, and recognition by phenotype or antibiogram is unreliable (1). Staphylococcal enterotoxins and toxic shock syndrome toxin (TSST) can be co-secreted by PVL-producing S. aureus. Enterotoxins and TSST are called superantigens because of their ability to activate T cells, causing a cytokine storm (2). CAP caused by MRSA affects younger and healthier patients (7). Early clinical diagnosis is difficult. A previously healthy young patient with a "flu-like" illness who deteriorates rapidly with respiratory distress and sepsis, leading to urgent admission and ventilation, is the typical story (7). Classical clinical findings strongly suggesting the diagnosis include haemoptysis, hypotension, "flu-like" illness, myalgia, chills, fever of 39°C or above, tachycardia >140 beats/min, and diarrhoea and vomiting (which may be due to associated toxic shock) (7,3). CXR findings include bilateral multilobular infiltrates, usually accompanied by abscesses, recurrent pneumothoraces, pneumatoceles, pleural effusion, and later cavitation (1,7,8). Laboratory investigations helpful in confirming the diagnosis include a Gram stain of sputum revealing gram-positive cocci in clusters, leukopenia, a high CRP level, and negative pneumococcal and legionella antigens (1). Elevated serum creatine phosphokinase suggests myositis.
Clinical management of necrotizing pneumonia due to MRSA should be aggressive, with admission to a high-dependency unit (9). There are many differing opinions on therapy for PVL-associated pneumonia, but intravenous flucloxacillin is not recommended (9). Although bactericidal, there are concerns that flucloxacillin may increase PVL toxin production (12). Combinations of clindamycin with rifampicin, linezolid with rifampicin, vancomycin with rifampicin, and vancomycin with clindamycin have all been used successfully, but intravenous therapy may be needed for a considerably long period of time (10,12). Vancomycin should not be used alone because of its poor penetration of lung tissue (1,11). Rifampicin has excellent tissue penetration, reaching intracellular staphylococci, and when used, it exhibits synergistic activity with other antibiotics, including linezolid (12). In summary, the best empirical therapy would be a combination of clindamycin, linezolid and rifampicin (12). Adjunctive therapy with intravenous immunoglobulin (IVIG) in necrotising pneumonia should be considered, in addition to intensive care support and antibiotics, because of IVIG's action in neutralizing exotoxins and superantigens, particularly enterotoxins A, B and C and TSST-1 (7,12,13). A young, healthy, immune-competent male with respiratory distress may have PVL-producing community-acquired MRSA. A patient who appears to have Dengue Shock Syndrome with ARDS may also be suffering from community-acquired MRSA. The other differential diagnoses would be hantavirus infection with pulmonary syndrome and leptospirosis. The question that arises repeatedly is how long we must rely on clinical suspicion, with the multiple implications of an incorrect diagnosis, making it imperative that advanced infectious disease diagnostics be established (14). This phenomenon may be due to the growing problem of drug-resistant bacteria beyond the confines of healthcare settings. A high index of suspicion, early diagnosis, and aggressive antimicrobial therapy with appropriate antibiotics are important, since untreated community-acquired MRSA has a high mortality rate.
Figure 1 (A) X-ray taken on admission showed bilateral pulmonary shadowing (mainly on the right side); (B) X-ray taken on the following day showed formation of a cavity (arrow) on the right side.
2018-12-15T12:37:38.363Z
2014-10-15T00:00:00.000
{ "year": 2014, "sha1": "54f66e32e2a3d4506e653a99dfe82542954171e0", "oa_license": "CCBY", "oa_url": "http://amj.sljol.info/articles/10.4038/amj.v8i2.7528/galley/5786/download/", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "54f66e32e2a3d4506e653a99dfe82542954171e0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
73429361
pes2o/s2orc
v3-fos-license
Inhibition of Aflatoxin Production by Paraquat and External Superoxide Dismutase in Aspergillus flavus Aflatoxin contamination of crops is a worldwide problem, and elucidation of the regulatory mechanism of aflatoxin production, for example in relation to the oxidative-antioxidative system, is needed. Studies have shown that oxidative stress induced by reactive oxygen species promotes aflatoxin production. However, superoxide has been suggested to have the opposite effect. Here, we investigated the effects of the superoxide generator paraquat and externally added superoxide dismutase (SOD) on aflatoxin production in Aspergillus flavus. Paraquat, with an IC50 value of 54.9 µM, inhibited aflatoxin production without affecting fungal growth. It increased cytosolic and mitochondrial superoxide levels and downregulated the transcription of aflatoxin biosynthetic cluster genes, including aflR, which encodes a key regulatory protein. The addition of bovine Cu/ZnSOD to the culture medium suppressed the paraquat-induced increase in superoxide levels, but it did not fully restore paraquat-inhibited aflatoxin production because bovine Cu/ZnSOD, with an IC50 value of 17.9 µg/mL, itself inhibited aflatoxin production. Externally added bovine Cu/ZnSOD increased the SOD activity in fungal cell extracts and upregulated the transcription of genes encoding Cu/ZnSOD and alcohol dehydrogenase. These results suggest that intracellular accumulation of superoxide impairs aflatoxin production by downregulating aflR expression, and that externally added Cu/ZnSOD also suppresses aflatoxin production by a mechanism other than its canonical superoxide elimination activity.
Introduction
Aflatoxins are potent carcinogenic toxins produced mainly by Aspergillus flavus and Aspergillus parasiticus, which infect agricultural crops, including corn and peanut. Aflatoxins accumulated in crops cause mycotoxicosis in humans and domestic animals that ingest them [1,2]. Crops contaminated with aflatoxins are discarded or reduced in value, resulting in significant economic losses [3]. Many challenges are involved in the control of aflatoxin contamination; the molecular mechanism regulating the level of aflatoxin production must be elucidated to optimize or develop effective preventive methods [4-6]. The enzymatic genes responsible for aflatoxin biosynthesis are located in a cluster in the genomes of A. flavus and A. parasiticus. Aflatoxins are biosynthesized from 10 acetic acid units via at least 18 reaction steps [7]. The transcription of genes encoding aflatoxin biosynthetic enzymes is positively regulated by the transcription factor AflR, whose gene is in the same cluster [8]. Environmental factors such as light and pH, trophic factors such as carbon and nitrogen sources, and several transcription factors recognizing these cues have been found to affect aflatoxin production, but the regulatory mechanisms by which these factors influence aflatoxin production have not been clarified in detail [9-12]. As externally added hydrogen peroxide has been found to promote aflatoxin production in A. parasiticus and A. flavus, the relationship between aflatoxin production and oxidative stress caused by reactive oxygen species (ROS) has received wide attention [13-15]. ROS comprise a series of molecular species with high chemical reactivity generated from oxygen. As ROS react with macromolecules such as DNA, proteins, and lipids, impairing their function, cells are equipped with antioxidant systems that protect biomolecules from ROS [16].
When the balance between antioxidant systems and ROS generation is disrupted, oxidative stress occurs. In fungal cells, superoxide, a byproduct of the mitochondrial electron transport chain, is the main source of intracellular ROS [17]. The superoxide generated is decomposed into hydrogen peroxide and oxygen by superoxide dismutase (SOD), and hydrogen peroxide is decomposed into water by antioxidant enzymes, including catalase, glutathione peroxidase, and peroxiredoxin. Free Fe2+ may react with hydrogen peroxide and produce hydroxyl radical, which is highly toxic due to its high reactivity [18]. Although excessive ROS are harmful, ROS adjusted to an appropriate level function as signaling molecules in cell proliferation and differentiation [19].
In the A. parasiticus aflatoxigenic strain NRRL2999, oxygen consumption increased in the logarithmic growth phase, and the enzymatic activities of SOD and glutathione peroxidase increased synchronously with aflatoxin production [20]. However, the changes in oxygen consumption and antioxidant enzyme activities observed in the aflatoxigenic strain were not seen in the nontoxigenic SRRC255 strain, suggesting that elevated ROS levels due to an increase in oxygen uptake are correlated with aflatoxin production and the expression of antioxidant enzymes. Hydrogen peroxide increased aflatoxin production in A. flavus NRRL3357 in a concentration-dependent manner [14]. Antioxidants and thiol redox state modulators reduced aflatoxin production in the A. flavus 70S(pSL82) strain [21]. These observations suggest that a decrease in the ROS level causes a decrease in aflatoxin production. On the other hand, Zaccaria et al. [22] indicated that menadione, a superoxide generator, suppressed aflatoxin production in A. flavus NRRL3357, accompanied by a decrease in SOD activity. The regulation of mycotoxin production by superoxide was also observed in Fusarium graminearum, which accumulates trichothecenes in infected grains. The superoxide generator paraquat reduced trichothecene production in several strains [23,24]. Furthermore, in SOD gene deletion mutants of F. graminearum, the accumulation of intracellular superoxide and a reduction of trichothecene production were observed [25]. As ROS differ in their generation and elimination characteristics in fungal cells, as well as in chemical reactivity, the effects of individual ROS on aflatoxin production must be investigated in detail to understand the regulatory mechanism of aflatoxin production by ROS.
In this study, we focused on superoxide and evaluated the effects of paraquat and external SOD on aflatoxin production in A. flavus. We obtained paradoxical results: both paraquat and SOD suppressed aflatoxin production. In this paper, we describe how superoxide generated from paraquat affects mitochondrial function and reduces aflR expression, and how external SOD appears to be partially internalized into cells and to suppress aflatoxin production, possibly by a function other than its superoxide dismutation activity.
Effect of Paraquat on Aflatoxin Production
When A. flavus IFM 47798 was incubated for 48 h at 28 °C in potato dextrose broth (PDB) liquid medium, about 1-2 ppm aflatoxin B1 was detected in the culture broth. The amount of aflatoxin B1 produced by the strain decreased in a concentration-dependent manner upon addition of paraquat, with an IC50 value of 54.9 µM (Figure 1a).
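The IC50 values quoted here and below are the kind of estimate conventionally obtained by fitting a sigmoidal dose-response curve to toxin measurements; the paper does not state its fitting procedure, so the following Python sketch of a four-parameter logistic fit is only illustrative, and the concentration-response pairs in it are placeholders rather than the study's raw data.

# Minimal sketch (not the authors' procedure): estimating an IC50 by
# fitting a four-parameter logistic (Hill) curve to dose-response data.
# The concentrations and responses below are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve; ic50 is the midpoint."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.1, 1.0, 10.0, 50.0, 100.0, 500.0])   # paraquat, µM
resp = np.array([98.0, 95.0, 80.0, 55.0, 35.0, 10.0])   # aflatoxin B1, % of control

# Initial guesses: bottom, top, IC50, Hill slope.
params, _ = curve_fit(four_pl, conc, resp, p0=[0.0, 100.0, 50.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = params
print(f"Estimated IC50: {ic50:.1f} µM (Hill slope {hill:.2f})")

Under such a fit, the reported IC50 is simply the fitted midpoint parameter of the curve.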
As the fungal mycelial dry weight was not changed significantly by 500 µM paraquat, the inhibitory activity of this superoxide generator was specific to aflatoxin B1 production. The inhibition of aflatoxin B1 production by paraquat was thought to be due to the generation of intracellular superoxide. Therefore, we examined whether the effect of paraquat was affected by sodium ascorbate, a general antioxidant. Aflatoxin B1 production suppressed by 100 µM paraquat was significantly restored by co-addition of >1 mM sodium ascorbate (Figure 1b). Furthermore, the addition of 3 mM sodium ascorbate without paraquat significantly promoted aflatoxin B1 production.
Figure 1. Effects of paraquat, sodium ascorbate, and Cu/Zn superoxide dismutase (Cu/ZnSOD) on aflatoxin B1 production and fungal growth of A. flavus. (a-c) The amount of aflatoxin B1 (white bars, without paraquat; green bars, with paraquat) and mycelial dry weight (squares) were analyzed. Data are presented as means and standard deviations from three biological replicates. Asterisks indicate significant differences (* P < 0.05, ** P < 0.01 vs. control group, Dunnett test).
Effect of External SOD on Aflatoxin Production
Next, we examined whether externally added SOD could affect the inhibition of aflatoxin production by paraquat. A. flavus was cultured with bovine Cu/ZnSOD (30, 90, or 300 units/2 mL culture) and/or paraquat, and the amount of aflatoxin B1 produced was measured (Figure 1c). In cultures with 100 µM paraquat, aflatoxin B1 production was restored to some extent by 30 and 90 units of Cu/ZnSOD compared with no Cu/ZnSOD, but the small amount of aflatoxin production caused by paraquat was not changed by 300 units of Cu/ZnSOD. On the other hand, in cultures without paraquat, the amount of aflatoxin B1 was decreased in a concentration-dependent manner by Cu/ZnSOD, with an IC50 value of 107.3 units, corresponding to 17.9 µg protein/mL (107.3 units ÷ 3000 units/mg ÷ 2 mL ≈ 17.9 µg/mL, using the specific activity given in the Chemicals section). These results suggest that externally added Cu/ZnSOD could decrease the amount of intracellular superoxide generated by paraquat, leading to the partial recovery of aflatoxin B1 production. However, 300 units of Cu/ZnSOD could not offset the effect of paraquat, because its own inhibitory activity on aflatoxin production was strong enough to reduce the amount of aflatoxin to the level observed in the culture with 100 µM paraquat alone.
Effects of Paraquat and External SOD on mRNA Levels of Genes Responsible for Aflatoxin Biosynthesis
A. flavus was cultured for 48 h with paraquat and/or Cu/ZnSOD, and the mRNA levels of genes in the aflatoxin biosynthetic gene cluster were examined by real-time PCR (Figure 2). In the culture with 100 µM paraquat, the mRNA levels of aflR and four genes encoding biosynthetic enzymes (AflC, AflD, AflP, and AflQ) were significantly decreased compared with the control, suggesting that the inhibition of aflatoxin B1 production by paraquat was due to suppressed transcription of the aflatoxin cluster genes. The co-addition of Cu/ZnSOD to the culture with 100 µM paraquat recovered the mRNA levels of these genes to some extent, but these levels remained lower than those in the control group. In general, addition of Cu/ZnSOD alone did not affect the mRNA levels of these genes, with the exception of aflC.
These results suggest that Cu/ZnSOD inhibited aflatoxin B1 production without significantly affecting the transcription of most aflatoxin biosynthetic cluster genes.
Effects of External SOD on mRNA Levels of Genes Encoding SOD and Acetyl-CoA Metabolic Enzymes
In the genome of A. flavus NRRL3357, five genes were annotated as SOD genes. From the multiple alignment of the amino acid sequences of the five genes (AFLA_099000, AFLA_068080, AFLA_033420, AFLA_027580, and AFLA_088150), two yeast SODs (yeast MnSOD and yeast Cu/ZnSOD [26]), and three bovine SODs (one MnSOD and two Cu/ZnSODs), a phylogenetic tree was created (Figure 3a). The cellular localization of the five A. flavus SODs was predicted using the TargetP 1.1 server (Figure 3a) [27]. AFLA_099000 was closest to yeast Cu/ZnSOD and was predicted to be localized to the cytoplasm or nucleus. AFLA_068080 was predicted to be an extracellular Cu/ZnSOD.
AFLA_033420 was closest to yeast mitochondrial MnSOD and was predicted to be localized to the cytoplasm or nucleus. AFLA_027580 and AFLA_088150 were predicted to be localized to the mitochondria and were annotated as FeSODs. As Fe-type SODs can utilize Fe and Mn as metal cofactors [28], which metal is utilized by these SODs is unclear. Real-time PCR analysis showed that the addition of Cu/ZnSOD significantly increased the mRNA levels of the AFLA_099000 putative Cu/ZnSOD and the AFLA_068080 putative Cu/ZnSOD (Figure 3b). Conversely, the addition of 300 units of Cu/ZnSOD significantly decreased the mRNA levels of the AFLA_033420 putative MnSOD and the AFLA_088150 putative FeSOD. These results suggest that the external addition of Cu/ZnSOD affected the intranuclear transcriptional regulation of the fungal intrinsic SODs.
In the early stage of aflatoxin biosynthesis in peroxisomes, the biosynthetic precursor acetyl-CoA could be supplied from β-oxidation in peroxisomes and mitochondria and/or from acetate generated from acetaldehyde [29]. Therefore, acetaldehyde may be a key metabolite for aflatoxin production. Acetaldehyde, which is produced from pyruvate by pyruvate decarboxylase, is converted to ethanol by alcohol dehydrogenase or to acetate by aldehyde dehydrogenase. Acetate is further converted to acetyl-CoA by acetyl-CoA synthetase.
Real-time PCR analysis of the effect of Cu/ZnSOD on the mRNA levels of four genes encoding pyruvate decarboxylase, alcohol dehydrogenase, aldehyde dehydrogenase, and acetyl-CoA synthetase indicated that Cu/ZnSOD significantly increased the mRNA level of the AFLA_048690 putative alcohol dehydrogenase in a concentration-dependent manner (Figure 3c). This result suggests that Cu/ZnSOD increased acetaldehyde-derived ethanol, which in turn decreased acetyl-CoA, leading to the repression of aflatoxin production.
Figure 3. (a) The amino acid sequences of the five A. flavus SODs, two yeast SODs, and three bovine SODs were aligned using the Clustal Omega algorithm (provided on the website of the European Bioinformatics Institute [30]). The phylogenetic tree was constructed using the neighbor-joining method. The localization of the A. flavus SODs was predicted by the TargetP 1.1 server [27]. The localization of the yeast and bovine SODs was predicted following the descriptions on the UniProt website [31]. (b,c) Transcription of each gene was analyzed by real-time quantitative PCR. The amount of each mRNA was normalized to the amount of β-tubulin mRNA in each sample. Data are presented as means and standard deviations from three biological replicates. Asterisks indicate significant differences (* P < 0.05, ** P < 0.01 vs. control group, Dunnett test).
SOD Activities in Fungal Cells
To investigate whether externally added bovine Cu/ZnSOD altered intracellular SOD activities, SOD activities in the cell extracts and culture supernatant were measured. Regardless of the presence or absence of paraquat, SOD activity was significantly greater in fungal cell extracts cultured with 300 units of Cu/ZnSOD for 48 h than in cultures without Cu/ZnSOD (Figure 4a). About 0.5-1% of the SOD activity present at the beginning of cultivation was detected in the fungal cell extracts; most SOD activity remained in the culture supernatant (Figure 4b), indicating that the bovine Cu/ZnSOD added to the culture remained undegraded. Next, the protein abundance of SOD was examined by western blotting of the cell extracts using an anti-human SOD1 antibody, which cross-reacts with the bovine nuclear and cytoplasmic 15.7-kDa Cu/ZnSOD. Western blotting revealed bands around 17 kDa even without the addition of bovine Cu/ZnSOD, suggesting that the 16-kDa AFLA_099000 putative Cu/ZnSOD was also detected by the anti-human SOD1 antibody (Figure 4c). The density of the detected bands depended on the Cu/ZnSOD activity depicted in Figure 4a.
To show that SOD activity was maintained in the mycelia, mycelia of A. flavus cultured for 24 h were collected, washed with water three times, and transferred to fresh medium. After cultivation in the fresh medium for another 24 h, SOD activities in the mycelia and the supernatant were measured. With and without paraquat, about 1-2 units of SOD activity were observed in each cell extract and in the supernatant of cultures given 300 units of Cu/ZnSOD before washing of the mycelia (Figure 4d-f), suggesting that the elevated SOD activity in fungal cells was maintained after 24 h of cultivation.
Figure 4. (a-c) A. flavus was cultured for 48 h. SOD activity of fungal cell extracts (a) and culture supernatants (b) was determined, and fungal cell extracts were subjected to western blotting using anti-human SOD1 antibody (c). The density of the bands indicated by the arrow was quantified using Image J. (d-f) After A. flavus was cultured for 24 h, mycelia were collected, washed with distilled water, transferred to fresh medium, and incubated for another 24 h. SOD activity in fungal cell extracts (d) and culture supernatant (e) was then determined. Fungal cell extracts were subjected to western blotting using anti-human SOD1 antibody, and the density of the bands indicated by the arrow was quantified (f). Data are presented as means and standard deviations from three biological replicates. Asterisks indicate significant differences (* P < 0.05, ** P < 0.01 vs. control group, Dunnett test).
Effects of Paraquat and External SOD on Mitochondrial and Cytosolic Superoxide Levels
To estimate how the intracellular superoxide level was affected by the addition of paraquat and/or Cu/ZnSOD, cellular superoxide was detected using the superoxide-specific fluorescent indicators mitoSOX and dihydroethidium (DHE), which localize to the mitochondria and cytoplasm, respectively (Figure 5a,b and Supplementary Figure S1). In cultures treated with 100 µM paraquat alone, superoxide levels in the mitochondria and cytoplasm were significantly higher than in the control after 24 h of cultivation. These paraquat-induced high superoxide levels were suppressed by the addition of Cu/ZnSOD in a concentration-dependent manner. In contrast, in cultures without paraquat, neither the mitochondrial nor the cytoplasmic superoxide level was changed significantly by the addition of Cu/ZnSOD at 24 h of cultivation. These results suggest that the external addition of Cu/ZnSOD led to the decomposition of superoxide whose level had been increased by paraquat, whereas it did not affect the superoxide level during normal fungal growth.
Discussion
In some aflatoxigenic strains, aflatoxin production has been reported to be influenced by ROS; hydrogen peroxide promotes aflatoxin production, and ROS regulators such as antioxidants and intracellular ROS generators suppress it [12-15,20,21]. However, the manner in which individual ROS intrinsically regulate aflatoxin production needs to be clarified. In this study, we confirmed that paraquat increased superoxide levels in the mitochondria and cytoplasm and inhibited aflatoxin B1 production by suppressing the expression of aflatoxin biosynthetic cluster genes, including aflR. In contrast to previous reports on the pSL82 strain, we found that ascorbate promoted aflatoxin B1 production in the experiments with paraquat [21]. When bovine Cu/ZnSOD was used for the dismutation of paraquat-induced superoxide, we observed that Cu/ZnSOD suppressed aflatoxin B1 production in a concentration-dependent manner.
The addition of bovine Cu/ZnSOD not only increased SOD activity in fungal cell extracts but also increased the mRNA level of the AFLA_099000 putative Cu/ZnSOD. Therefore, it was difficult to determine whether bovine Cu/ZnSOD was internalized to increase the SOD activity in the fungal cell extracts. Peñalva [32] reported that the fluorescent dye FM4-64 was internalized into the cells of Aspergillus nidulans in energy-, temperature-, and F-actin-dependent manners. The fluorescence of FM4-64 was detected, in order, in cortical organelles (e.g., actin patches), in hollow structures with diameters of 0.7 µm representing mature endosomes, and in vacuoles 2-3 µm in diameter, leading the author to conclude that FM4-64 was internalized by endocytosis. Higuchi et al. [33] found that the plasma membrane protein AoUapC-EGFP, a fusion protein of a putative uric acid-xanthine permease with enhanced green fluorescent protein, was internalized into the cells of Aspergillus oryzae upon the addition of ammonium. As the internalization was temperature- and F-actin-dependent, the authors concluded that this membrane protein was internalized by endocytosis. Based on these reports, bovine Cu/ZnSOD could be internalized by endocytosis along with other extracellular nutrients.
Figure 5. Effects of paraquat and Cu/ZnSOD on mitochondrial and cytosolic superoxide levels. Time courses of mitochondrial (a) and cytosolic (b) superoxide levels in A. flavus were calculated by analyzing mitoSOX and dihydroethidium fluorescence, respectively, on microscopic images. Data are presented as means and standard deviations from eight or more microscopic images. Asterisks indicate significant differences (* P < 0.05, ** P < 0.01 vs. control group, Dunnett test).
An increase in mitochondrial superoxide is thought to be a major cause of the inhibition of aflatoxin B1 production by paraquat. Paraquat generates superoxide by oxidation of the paraquat radical, which is generated via one-electron reduction by the respiratory chain or other dehydrogenases in the mitochondria [34]. The generated superoxide attacks mitochondrial (4Fe-4S) cluster enzymes, including aconitase, and releases Fe2+ from the cluster, inactivating the enzyme [35,36]. This Fe2+ release leads to the generation of hydroxyl radical, a strong oxidizing agent of macromolecules [18]. Therefore, paraquat may cause mitochondrial dysfunction. As mitochondrial respiratory inhibitors inhibit aflatoxin production [37], paraquat probably inhibits aflatoxin production through mitochondrial dysfunction. The relationship between mitochondrial function and the transcriptional regulation of aflatoxin biosynthetic cluster genes will be the subject of further investigation.
Fluorescence observation indicated that paraquat increased mitochondrial and cytosolic superoxide, possibly through the flow of excess superoxide from the mitochondria into the cytosol through membrane channels such as the voltage-dependent anion channel (VDAC) [17]. The addition of bovine Cu/ZnSOD suppressed the paraquat-induced superoxide elevation in the mitochondria and cytosol, probably by increasing SOD activity. Inhibition of aflatoxin B1 production by bovine Cu/ZnSOD may instead occur through a function of Cu/ZnSOD as a transcription factor affecting the expression of oxidative response genes. As the addition of bovine Cu/ZnSOD without paraquat did not significantly change the mitochondrial or cytoplasmic superoxide level, the inhibition of aflatoxin production by Cu/ZnSOD may not be correlated with SOD's canonical superoxide-scavenging function. Tsang et al. [38] showed that SOD1 in human and yeast cells is phosphorylated by the Mec1/ATM kinase cascade in response to intracellular hydrogen peroxide, and that the phosphorylated SOD1 is translocated into the nucleus, where it binds to the promoters of oxidative stress-responsive genes and promotes their transcription. Yeast por-1 encodes VDAC1, and growth of its disruptant is restricted on non-fermentable carbon sources. Magrì et al.
[39] found that overexpression of human SOD1 restored the growth of the disruptant in the presence of glycerol as the sole carbon source. They concluded that human SOD1 entered the nucleus and increased transcription of the por-2 gene, which encodes the mitochondrial outer membrane β-barrel VDAC2, resulting in partial recovery of mitochondrial outer membrane function. If bovine Cu/ZnSOD is internalized, it may be transferred to the nucleus, where it would control the transcription of oxidative stress-responsive genes, including AFLA_099000 encoding the cytosolic Cu/ZnSOD. If bovine Cu/ZnSOD is not internalized, the way in which it increases the mRNA level of AFLA_099000 in a dose-dependent manner is not clear, but the increase in SOD caused by bovine Cu/ZnSOD might affect the transcription of some genes. The addition of bovine Cu/ZnSOD significantly increased transcription of the alcohol dehydrogenase gene. The resulting increase in alcohol dehydrogenase activity might promote the conversion of acetaldehyde to ethanol, decreasing acetyl-CoA and aflatoxin B1 production. Work investigating whether bovine Cu/ZnSOD enters the nucleus and binds to the promoter regions of some A. flavus genes is now in progress.
Conclusions
This study clarified that paraquat induces intracellular accumulation of superoxide, affects aflR expression, and reduces aflatoxin production. Externally added bovine Cu/ZnSOD suppressed the paraquat-induced increase in superoxide levels and partially restored aflatoxin production. However, bovine Cu/ZnSOD itself can inhibit aflatoxin production by a mechanism other than its superoxide elimination activity.
Chemicals
Aflatoxin B1 standard, paraquat, sodium ascorbate, and a lyophilized powder of Cu/ZnSOD from bovine erythrocytes were purchased from Sigma-Aldrich (#S7571; St Louis, MO, USA). MitoSOX and DHE were purchased from Thermo Fisher Scientific (Waltham, MA, USA). Paraquat and sodium ascorbate were dissolved in water to give stock concentrations of 100 mM and 3 M, respectively. Bovine Cu/ZnSOD was dissolved in phosphate-buffered saline (pH 7.4) to a concentration of 10 U/µL. According to Sigma-Aldrich, one unit will inhibit the reduction of cytochrome c by 50% in a coupled system with xanthine oxidase at pH 7.8 and 25 °C in a 3.0 mL reaction volume; the enzyme concentration is about 3000 units/mg protein.
Strain and Culture Conditions
The Aspergillus flavus IFM 47798 strain (obtained from the Agricultural Research Service, USDA, USA), which mainly produces aflatoxin B1, was used throughout this study. A glycerol suspension of spores collected from a week-old culture plate was used as the inoculum. The spore suspension was inoculated into PDB (BD, Sparks, MD, USA) liquid medium in a 12-well microplate (2 mL/well) at 10^5 spores/well, and the microplate was placed at 28 °C in the dark for 48 h. When adding paraquat or sodium ascorbate, 2 µL of diluted solution was added to the 2 mL culture. When adding bovine Cu/ZnSOD, 30 µL of diluted solution was added. When the culture medium was replaced at 24 h of cultivation, the culture broth of each well was centrifuged to obtain mycelia. After the mycelia were washed three times with distilled water, they were transferred into a 12-well microplate in which each well was filled with 2 mL of fresh PDB liquid medium. The plate was then placed at 28 °C for another 24 h.
Analysis of Aflatoxin B1 Production and Mycelial Weight
After incubation, the culture broth of each well was centrifuged to obtain the culture supernatant and mycelia.
A 500 µL aliquot of the supernatant was extracted with 500 µL of chloroform, and the chloroform layer was evaporated in air. The remaining residue was dissolved in 100 µL of 90% aqueous acetonitrile and subjected to reverse-phase HPLC analysis according to a previously reported method [37]. The mycelia were washed with distilled water and lyophilized, and the dried mycelia were weighed.
Determination of SOD Activity
Lyophilized mycelia were ground under liquid nitrogen with a mortar and pestle, and the homogenate was suspended in 200 µL of assay buffer (150 mM sucrose, 20 mM Tris-HCl (pH 8.0), 1 mM EDTA, 0.1% Nonidet P-40). The suspension was centrifuged, the supernatant was diluted 1:25 in PBS, and its SOD activity was determined using the SOD Assay Kit-WST (Dojindo, Kumamoto, Japan) according to the manufacturer's instructions. For the culture supernatants, SOD activity was similarly determined using 20 µL of supernatant.
Western Blotting
Ten microliters of the mycelial homogenate supernatant described above was subjected to SDS-PAGE, and the separated proteins were transferred to a polyvinylidene difluoride (PVDF) membrane. The membrane was immunoblotted using an anti-SOD1 antibody (SPC-115C; StressMarq Biosciences, British Columbia, Canada) followed by a goat anti-rabbit IgG (H+L) poly-horseradish peroxidase antibody (32260; Thermo Fisher Scientific). The membrane was developed with ECL Prime western blotting detection reagent (GE Healthcare, Buckinghamshire, UK) and imaged with a ChemiDoc XRS+ system (Bio-Rad, Hercules, CA, USA). Band densities were relatively quantified using Image J software (US National Institutes of Health, Bethesda, MD, USA).
RT-qPCR Analysis
Lyophilized mycelia were ground as described above. Total RNA was extracted with TRIzol reagent (Thermo Fisher Scientific) and purified using the PureLink RNA Mini Kit (Thermo Fisher Scientific). Complementary DNA was synthesized with ReverTra Ace qPCR Master Mix (Toyobo, Osaka, Japan). RT-qPCR was carried out using FastStart Universal SYBR Green Master (Rox) (Roche, Basel, Switzerland) in a final volume of 25 µL per reaction on an ABI PRISM 7300 thermal cycler (Thermo Fisher Scientific). The amount of each mRNA was normalized to the amount of β-tubulin (NCBI gene symbol: AFLA_068620) mRNA in each sample. The PCR primers used are listed in Supplementary Table S1.
Determination of Superoxide Level
Mitochondrial and cytosolic superoxide levels were quantified in the same manner as the previously reported method [25], with some modifications. A. flavus was cultured for 24 or 48 h, and mycelia were harvested by filtration, washed with distilled water, and incubated with 5 µM mitoSOX or 30 µM DHE for the detection of superoxide in the mitochondria and cytosol, respectively. The mycelia were then incubated with 3 µM Calcofluor White M2R (Sigma-Aldrich) and applied to microscope slides. A BX53 fluorescence microscope equipped with a DP70 camera (Olympus, Tokyo, Japan) was used to capture fluorescent images. The superoxide level in a region of interest was estimated as follows: using Image J software, the blue component of each fluorescent image of Calcofluor White M2R was binarized, and the dimensions of the binarized area were regarded as the mycelial mass in the image. Similarly, the red component of each fluorescent image of mitoSOX or DHE was first subjected to background subtraction and then binarized at a threshold set at "20". The dimensions of the binarized area were regarded as the superoxide amount in the image. The relative superoxide level was calculated with the equation: superoxide level in the image = superoxide amount/mycelial mass × 100.
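As a rough illustration of the quantification just described, the sketch below reproduces the ratio calculation on an 8-bit RGB micrograph loaded as a NumPy array; the blue-channel threshold and the median-based background subtraction are assumptions standing in for the unstated Image J settings (the paper specifies only the red-channel threshold of 20).

# Sketch of the per-image quantification described above, assuming an
# 8-bit RGB micrograph in a NumPy array (red = mitoSOX/DHE signal,
# blue = Calcofluor White M2R signal). The blue-channel threshold and
# the median-based background subtraction are assumptions; the paper
# specifies only the red-channel threshold of 20 and uses Image J.
import numpy as np

def superoxide_level(image: np.ndarray,
                     blue_threshold: int = 20,
                     red_threshold: int = 20) -> float:
    """Superoxide level = superoxide area / mycelial area x 100."""
    red = image[..., 0].astype(float)
    blue = image[..., 2].astype(float)

    # Mycelial mass: area of the binarized Calcofluor White (blue) signal.
    mycelial_mass = np.count_nonzero(blue > blue_threshold)
    if mycelial_mass == 0:
        raise ValueError("no mycelium detected in this field")

    # Superoxide amount: background-subtracted, binarized red signal.
    red_bg_subtracted = np.clip(red - np.median(red), 0.0, None)
    superoxide_amount = np.count_nonzero(red_bg_subtracted > red_threshold)

    return superoxide_amount / mycelial_mass * 100.0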
Supplementary Figure S2 shows a schematic representation of the quantification of superoxide in a region of interest.
Statistical Analysis
Data are presented as mean ± standard deviation (SD). Differences between groups were analyzed by one-way ANOVA followed by the Dunnett test; a minimal code sketch of this workflow is given after the Supplementary Materials note below. Values of P < 0.05 were considered significant.
Supplementary Materials: The following are available online at http://www.mdpi.com/2072-6651/11/2/107/s1, Figure S1: Representative fluorescence microscopic images of fungi double-stained by Calcofluor White M2R and a superoxide indicator. Figure S2: Schematic representation of the quantification of superoxide in a region of interest. Table S1: Primers used in the real-time PCR analysis.
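For reference, the ANOVA-plus-Dunnett workflow named in the Statistical Analysis section can be reproduced with scipy.stats.f_oneway and scipy.stats.dunnett (the latter is available in SciPy 1.11 and later); the replicate values below are placeholders, not the study's data.

# Sketch of the statistical workflow named above: one-way ANOVA followed
# by Dunnett's test against the untreated control. scipy.stats.dunnett
# requires SciPy >= 1.11; the replicate values are placeholders.
from scipy import stats

control = [1.8, 2.1, 1.9]     # e.g., aflatoxin B1 (ppm), no paraquat
dose_a = [1.1, 0.9, 1.0]      # hypothetical treatment group A
dose_b = [0.4, 0.5, 0.3]      # hypothetical treatment group B

f_stat, p_anova = stats.f_oneway(control, dose_a, dose_b)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")

if p_anova < 0.05:
    result = stats.dunnett(dose_a, dose_b, control=control)
    for label, p in zip(("group A", "group B"), result.pvalue):
        print(f"Dunnett vs. control, {label}: P = {p:.4f}")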
2019-02-25T16:56:33.251Z
2019-02-01T00:00:00.000
{ "year": 2019, "sha1": "cd5837f677ece0c43e44019e95cdffd7582b63f7", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6651/11/2/107/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cd5837f677ece0c43e44019e95cdffd7582b63f7", "s2fieldsofstudy": [ "Environmental Science", "Chemistry", "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
255955473
pes2o/s2orc
v3-fos-license
Hyperglycemia enhances arsenic-induced platelet and megakaryocyte activation Low to moderate inorganic arsenic (iAs) exposure is independently associated with cardiovascular disease (CVD), particularly for patients with diabetes mellitus (DM). The mechanism of the increased CVD risk from iAs exposure in DM has not been adequately characterized. We evaluated whether increasing concentrations of glucose enhance the effects of iAs on platelet and megakaryocyte activity, key steps in atherothrombosis. Healthy donor whole blood was prepared in a standard fashion and incubated with sodium arsenite in a range from 0 to 10 µM. iAs-induced platelet activation was assessed by platelet receptor CD62P (P-selectin) expression and by monocyte-platelet and leukocyte-platelet aggregation (MPA and LPA, respectively) in the presence of increasing sodium arsenite and glucose concentrations. Megakaryocyte (Meg-01) cell adhesion and gene expression were assessed after incubation with or without iAs and increasing concentrations of d-glucose. Platelet activity markers increased significantly with 10 vs. 0 µM iAs (P < 0.05 for all) and with higher d-glucose concentrations. Platelet activity increased significantly following co-incubation of 1 and 5 µM iAs with hyperglycemic d-glucose (P < 0.01 for both) but not after incubation with euglycemic d-glucose. Megakaryocyte adhesion was more pronounced after co-incubation with iAs and hyperglycemic than euglycemic d-glucose, while gene expression increased significantly in response to iAs only after co-incubation with hyperglycemic d-glucose. We demonstrate that glucose concentrations common in DM potentiate the effect of inorganic arsenic exposure on markers of platelet and megakaryocyte activity. Our results support recent observational cohort data indicating that DM enhances the vasculotoxic effects of arsenic exposure, and suggest that activation of the platelet-megakaryocyte hemostatic axis is a pathway through which inorganic arsenic confers atherothrombotic risk, particularly for patients with DM.
Background
The adverse cardiovascular and vasculotoxic effects of long-term exposure to high levels of inorganic arsenic in drinking water have been well characterized [1]. Recent studies have demonstrated an increased risk of cardiovascular disease (CVD), ischemic heart disease (IHD) and mortality from the low-moderate drinking water inorganic arsenic (iAs) exposure (10-20 µg/L) common in the United States (U.S.), particularly for patients with diabetes mellitus (DM) [2]. Recent prospective cohort study data indicate that the vasculotoxicity and cardiovascular disease risk of environmental pollutants, including inorganic arsenic, may be greater for individuals with diabetes [2,3]. However, the mechanism of this increased risk of environmental exposures for diabetic vasculopathy has not been studied. Pathological and clinical studies consistently demonstrate that platelets play a key role in atherothrombosis [4] and have shown the importance of the platelet-megakaryocyte hemostatic axis for vascular disease and CVD events [5][6][7].
Patients with DM exhibit increased platelet activity both in vitro and in vivo, and heightened platelet function may contribute to excess macrovascular risk in patients with DM [8]. A previous in vitro study of iAs and atherothrombosis used very high concentrations of sodium arsenite and did not examine the effects of hyperglycemia on thrombotic risk [9]. We examined whether glucose concentrations common in DM potentiate the effects of iAs on in vitro measures of platelet and megakaryocyte adhesion and activity. Subjects Whole blood was collected from healthy donors in the fasting state. Subjects were not on any antiplatelet therapy nor did they have any history of cardiovascular disease, metabolic syndrome or DM. All human experiments were performed in accordance with institutional and state guidelines. Phlebotomy was performed after 10 min of quiet rest. Blood was collected following a clean, problem-free venipuncture, using a 21-gauge needle after a 5 cc discard (a tourniquet was used to obtain access and was removed before blood collection). Blood was collected into vacutainer tubes containing 3.2% (0.105 mol/l) sodium citrate for platelet activity measurements. After collection, each tube was gently inverted 3 times and immediately transferred to the laboratory for processing. Reagents Sodium arsenite was dissolved in dH2O for a stock concentration of 1000 µM, then added to whole blood at a concentration of up to 10 µM for a total of 30 min, similar to prior studies [10,11]. Similar procedures were performed to achieve concentrations of 0.1, 1 and 5 µM sodium arsenite. d-glucose was dissolved in dH2O for a stock concentration of 500 mM, then added to whole blood and megakaryocytes at concentrations of 5, 15 or 25 mM to span conditions from euglycemia (5 mM d-glucose ≈ 90 mg/dl blood glucose) to a range of hyperglycemia common in DM (15 mM ≈ 270 mg/dl, 25 mM ≈ 450 mg/dl); the arithmetic behind these equivalences is sketched at the end of this passage. Flow cytometry To examine the effect of iAs on platelet activity, we first measured platelet activation by assessing platelet P-selectin exposure and the presence of monocyte and lymphocyte platelet aggregates (MPA and LPA, respectively) in whole blood samples. We began with a 10 µM concentration of sodium arsenite used in prior in vitro models with aortic endothelial [10,11] and vascular smooth muscle cell cultures [12,13], a concentration 50-75% less than that used in prior studies of arsenic and thrombosis [9]. P-selectin expression (CD62P) is a cell surface marker primarily expressed by activated platelets and involved in platelet adhesion. To identify platelet-specific P-selectin, we performed flow cytometry on whole blood with antibodies against CD42b and CD61, which recognize the constitutively expressed platelet glycoproteins Ib (GPIb) and IIIa (GPIIIa), respectively. Flow cytometric analysis was performed using the BD Accuri flow cytometer (C6 Flow Cytometer). Whole blood was incubated in the dark for 30 min at room temperature with APC-conjugated mouse antibody specific for CD42b (glycoprotein Ib) and FITC-conjugated mouse antibody specific for CD62P (P-selectin) (BD Biosciences) before the mean fluorescence intensity of P-selectin-bound antibody per 10,000 events was measured. P-selectin is a component of the alpha granule membrane of resting platelets that is only expressed on the platelet surface membrane after alpha granule secretion. In vivo, circulating degranulated platelets rapidly lose their surface P-selectin, but continue to circulate and function [14].
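The mM-to-mg/dl equivalences quoted in the Reagents passage above follow from simple unit arithmetic; the only value assumed beyond the text is the molar mass of d-glucose (≈ 180.16 g/mol). A minimal Python check:

# Convert a glucose concentration from mM to mg/dl.
# Assumption: molar mass of d-glucose ~ 180.16 g/mol (not stated in the text).
GLUCOSE_G_PER_MOL = 180.16

def mm_to_mg_per_dl(conc_mm):
    # mM = mmol/l; multiply by g/mol to get mg/l, then divide by 10 for mg/dl
    return conc_mm * GLUCOSE_G_PER_MOL / 10.0

for c in (5, 15, 25):
    print(c, "mM ~", round(mm_to_mg_per_dl(c)), "mg/dl")
# prints 5 mM ~ 90 mg/dl, 15 mM ~ 270 mg/dl, 25 mM ~ 450 mg/dl,
# matching the euglycemic and hyperglycemic conditions described above.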
Monocyte and leukocyte platelet aggregates provide complementary information on in vivo platelet activation; are independently associated with cardiovascular disease events [15,16]; and were assessed as events positive to markers CD14-APC and CD45-APC, respectively, in addition to platelet marker CD61. MPAs were defined as events positive to both the monocyte marker (CD14-APC [BD Biosciences]) and the platelet marker CD61-FITC (Dako). Monocytes were identified by their staining with CD14-APC and by their characteristic orthogonal light scatter. Monocytes with adherent platelets were identified by CD14-APC positivity. LPAs were defined as events positive to both the leukocyte marker (CD45-APC [BD Biosciences]) and the same platelet marker CD61-FITC (Dako). Leukocytes with adherent platelets were identified by CD45-APC positivity. Appropriate color compensation was determined in singly labeled samples and matched nonspecific antibody controls (Mouse IgG1 FITC [BD Biosciences]). For the co-incubation experiments, whole blood was first incubated with 5 and 15 mM d-glucose for 30 min. We then used lower concentrations of sodium arsenite at 0, 0.1, 1 and 5 µM, which were added to the solution and incubated for an additional 30 min. P-selectin expression was then assessed in both unstimulated samples and samples stimulated with thrombin 0.025 IU/ml (Sigma). Cell culture and megakaryocyte gene expression Meg-01 cells were purchased from American Type Culture Collection (VA) and cultured in RPMI-1640 medium supplemented with 10% heat-inactivated fetal bovine serum (FBS), 100 U/ml penicillin, and 100 μg/ml streptomycin (Invitrogen, CA, USA) at 37 °C in a 5% CO2 humidified atmosphere, consistent with prior studies [17,18]. For adhesion assays, 18 mm glass coverslips (Fisher Scientific) coated with collagen (Helena Laboratories, Beaumont, TX, USA) were blocked with 1% BSA in 12-well plates [17,18]. Meg-01 cells were stained for 10 min with 1 µM DiOC6 (Fisher Scientific), washed and incubated at 2.5 × 10^5 cells/ml for 3 h with and without addition of iAs (0, 1, 5 and 10 µM) in the presence of 5 or 25 mM d-glucose. After the supernatant was aspirated, adherent cells were gently washed with FBS. For each well, five random fields were captured and the area of coverage was quantified using ImageJ (National Institutes of Health, Bethesda, MD). Nuclear transcription factor kappa B (NFκB) gene expression was measured because of its roles in inflammation, platelet activation, and arsenic vasculopathy [18-20]. Other genes measured include monocyte chemoattractant protein-1 (CCL2) and CD36, which have also been associated with platelet degranulation, diabetes and inflammation. To measure these genes, total RNA was isolated from Meg-01 cells using the Direct-zol RNA Miniprep kit (ZymoResearch, Irvine, CA, USA) and quantified using a Nanodrop ND-2000 spectrophotometer (Wilmington, DE, USA). RNA was converted to cDNA using the iScript cDNA synthesis kit (Bio-Rad). Gene expression of GAPDH and NFκB1 was assessed with real-time PCR (iCycler Real-Time Detection System, Eppendorf) using the SsoFast EvaGreen Supermix (Bio-Rad). The sequences of the NFκB1, CCL2 and CD36 primers used for qRT-PCR were CAGATGGCCCATACCTTCAAA and TTGCAGATTTTGACCTGAGGG, CCCAAAGAAGCTGTGATCTTCA and GCAGATTCTTGGGTTGTGGA, and CTATTGGGAAGGTCACTGCGA and CAGGTCTCCCTTCTTTGCATT, respectively.
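The qRT-PCR passage above names GAPDH alongside NFκB1 but does not spell out the quantification model; a common choice for designs with a reference gene is relative quantification by the 2^-ΔΔCt method, sketched below. All Ct values are hypothetical placeholders, not data from this study.

# Relative expression by the 2^(-ddCt) convention, a standard (assumed,
# not stated in the text) way to normalize a target gene such as NFkB1
# to a reference gene such as GAPDH.
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                 # compare to control condition
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for illustration only:
print(fold_change(24.1, 18.0, 25.6, 18.1))   # ~2.6-fold induction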
Statistical analysis All experimental values are represented as mean ± standard error of the mean (SEM). Differences in selected categorical variables between the respective comparison groups were analyzed with the χ² test of statistical significance. Unpaired two-tailed t tests and ANOVA were used to examine differences in continuous variables overall and at each time point under study in the different comparison groups. A value of P < 0.05 was considered statistically significant. Results We first examined the effect of a 10 µM iAs concentration used previously in endothelial and smooth muscle cell culture to assess the effects of inorganic arsenic exposure [10-13]. There was a clear increase in platelets expressing P-selectin by flow cytometry following incubation with 10 µM iAs (Fig. 1a, b). Compared to 0 µM iAs, the mean fluorescence intensity of P-selectin expression increased significantly after incubation with 10 µM iAs for both unstimulated and thrombin-stimulated platelets (Fig. 1c, d). We subsequently examined the effect of iAs on monocyte and leukocyte platelet aggregation (MPA and LPA, respectively), a different measure of platelet activity predictive of CVD events [15]. Compared to 0 µM, incubation with 10 µM iAs significantly increased both MPA and LPA (Fig. 1e, f). These experiments demonstrate that sodium arsenite concentrations below those used in prior studies of platelet activation have significant effects on multiple measures of platelet activity [9,21]. Consistent with prior data [8], platelet activity increased with increasing d-glucose concentrations (Fig. 2a, b). To investigate whether glucose and arsenic had a synergistic effect on platelet activation, we co-incubated euglycemic (5 mM ≈ 90 mg/dl) and hyperglycemic (15 mM ≈ 270 mg/dl) concentrations of d-glucose with lower concentrations of sodium arsenite than those used to demonstrate platelet activation without glucose co-incubation. After incubation at hyperglycemic conditions, exposure to 0.1, 1 and 5 µM sodium arsenite led to marked increases in platelet activation. In contrast, these sodium arsenite concentrations did not potentiate platelet activation at euglycemic concentrations of d-glucose (Fig. 2a, b). Hyperglycemia may induce prothrombotic changes in megakaryocyte function and platelet thrombogenesis [6]. To test whether glucose and iAs also had a synergistic effect on megakaryocyte adhesion, we co-incubated megakaryocytes at euglycemic (5 mM) and hyperglycemic (25 mM) concentrations of d-glucose with 0, 1, 5 and 10 µM concentrations of sodium arsenite. Similar to the results observed for platelet activation, exposure to subthreshold sodium arsenite concentrations below 10 µM induced significantly greater megakaryocyte adhesion after incubation with a hyperglycemic compared to a euglycemic concentration of d-glucose (Fig. 3a, b). Prior studies have demonstrated megakaryocyte nuclear transcription factor kappa B (NFκB) gene expression is an important regulator of inflammation and platelet activation [18,19], and may also be an important transcriptional factor for the vascular effects of inorganic arsenic exposure [20]. To verify the prothrombotic effect of iAs, we measured the gene expression of NFκB1 in Meg-01 cells. Monocyte chemoattractant protein-1 (CCL2) and CD36, genes involved in platelet activation and degranulation, were also measured [22,23]. Following co-incubation of hyperglycemic d-glucose with 5 and 10 µM sodium arsenite, Meg-01 cell NFκB1 expression increased significantly compared to co-incubation with euglycemic d-glucose (Fig. 4).
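The group comparisons just reported follow the Statistical analysis section above; a minimal SciPy sketch of the unpaired two-tailed t test and one-way ANOVA. The arrays hold hypothetical placeholder values, not measurements from the study.

from scipy import stats

# Hypothetical P-selectin MFI values at 0 uM vs. 10 uM iAs:
mfi_0uM = [104, 98, 111, 102, 95]
mfi_10uM = [152, 160, 148, 171, 158]
t, p = stats.ttest_ind(mfi_0uM, mfi_10uM)   # unpaired, two-tailed by default
print(t, p)                                  # significant if p < 0.05

# Hypothetical platelet-activity readouts at three d-glucose concentrations:
f, p_anova = stats.f_oneway([101, 97, 105], [121, 118, 126], [141, 150, 146])
print(f, p_anova)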
There were additional nonsignificant increases in MCP-1 (CCL2) and CD36 (data not shown). No deleterious effects on Meg-01 cell viability were observed within the range of concentrations of sodium arsenite used in this study (0-10 µM) up to concentrations 100-fold greater (Appendix, Fig. 5). Discussion There are four primary findings of this report. First, we show for the first time that a concentration of d-glucose common in DM potentiates sodium arsenite-induced platelet activation. Second, we demonstrate that hyperglycemia also potentiates the effects of sodium arsenite on megakaryocyte adhesion, a marker of atherothrombotic risk [18]. Third, we demonstrate that lower concentrations of sodium arsenite than previously studied are associated with increased platelet activation and aggregation. Finally, we show that Meg-01 NFκB transcription, as a marker of megakaryocyte activation, increases following exposure to hyperglycemia and sodium arsenite. These findings suggest that alterations in the platelet-megakaryocyte axis may be a pathway through which exposure to environmental toxicants such as iAs increases CVD risk, particularly for patients with DM. Despite advances in effective medical therapy to reduce CVD events, nearly 70% of patients with DM will die of CVD [24]. The etiology of this excess CVD risk for DM patients remains unclear. The vasculotoxicity and cardiovascular disease risk of environmental pollutants, including iAs, may be greater for individuals with diabetes [2,3], suggesting that low-level environmental exposures may be a novel risk factor for CVD in DM. Environmental pollutants enhance inflammation and the generation of reactive oxygen species, steps also important in the pathogenesis of diabetic vasculopathy [25]. While prior studies have indicated that environmental exposures increase oxidative stress and platelet activation [26,27], to our knowledge this is the first report to describe a potential link between diabetic hyperglycemia and enhanced atherothrombotic risk from iAs exposure. There are a number of pathways of platelet activation shared between hyperglycemia and iAs exposure. Hyperglycemia and diabetes are associated with platelet hyperreactivity, which, coupled with enhanced levels of thromboxane, may partially explain increases in cardiovascular disease morbidity and mortality seen among patients with DM [8]. High levels of drinking water inorganic arsenic (500 ppb) increase platelet thromboxane formation and adhesion protein expression [28]. Other synergistic pathways between hyperglycemia and iAs exposure include increases in aldose reductase activity and oxidative stress signaling. During hyperglycemia, aldose reductase activity increases significantly, leading to abnormal activation of the polyol pathway and enhanced oxidative and osmotic stress [8]. In turn, aldose reductase increases thromboxane formation and platelet activation [8]. Inorganic arsenic has also been shown to increase aldose reductase activity [29]. Taken together, enhanced aldose reductase activity and thromboxane generation may represent a synergistic pathway of thrombotic risk for both hyperglycemia and inorganic arsenic exposure. Platelet and endothelial mitochondrial function may be another synergistic pathway of risk for iAs exposure in diabetes. Recent studies have indicated the importance of platelet mitochondrial function in cardiovascular disease [30], and have suggested that alterations in platelet mitochondrial function may increase the risk of diabetic atherothrombosis [31].
Inorganic arsenic has also been shown to alter endothelial cell mitochondrial function [13]. Future studies might consider the synergy of inorganic arsenic exposure and diabetes on mitochondrial function in platelets and vascular endothelium as novel pathways of cardiovascular disease risk. Strengths of the current study include the use of multiple validated measures of the platelet-megakaryocyte axis associated with incident CVD; use of sodium arsenite concentrations below those used in previous models of iAs-induced atherothrombosis; and an investigation of the synergy between hyperglycemia and iAs exposure on atherothrombotic risk. Although we used a lower sodium arsenite concentration than previous atherothrombosis studies [9,21], we recognize the concentrations of sodium arsenite used may not correspond to current levels of iAs exposure in the U.S. Future studies should further investigate effects at very low concentrations corresponding to levels more prevalent in human populations. The discrepancy between exposure levels relevant to naturally contaminated drinking water and in vitro concentrations of sodium arsenite may reflect the lack of an accepted biomarker of internal iAs dose. Other limitations include the use of in vitro models and the inability to model in vivo differences in hyperglycemia and insulin resistance seen in type 1 and 2 diabetes. Further study is also needed to better estimate the internal inorganic arsenic dose relevant for in vitro modeling; to examine the effect of environmental exposures on the platelet-megakaryocyte axis across the spectrum of diabetes control; and to study the effects of iAs and hyperglycemia on mitochondrial function in platelets and other relevant systems. Treatment studies could consider the use of aldose-reductase inhibitors to attenuate platelet activation and megakaryocyte adhesion [8]. Conclusion Our findings suggest that increased platelet activation and megakaryocyte adhesion may be pathways through which hyperglycemia in DM can enhance the vasculotoxicity of inorganic arsenic exposure. While intensive glycemic control has failed to significantly reduce macrovascular risk in DM, exposure to environmental toxicants such as inorganic arsenic may represent a novel class of modifiable CVD risk factors, particularly for patients with diabetes. Future studies should investigate platelet activation in patients with and without diabetes, at varying levels of glycemic control, following exposure to environmentally relevant concentrations of inorganic arsenic and other environmental exposures. Authors' contributions JDN, lead and corresponding author, had full access to the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. He wrote the majority of the manuscript. CTE performed many of the experiments, made substantial critical revisions and aided with interpretation. YMO aided substantially with experimental conditions and data acquisition. EM aided substantially with experimental conditions and data acquisition. YC provided critical revisions to the manuscript and aided substantially in the preparation of the revised submission. EAF provided input into study design and made substantial critical revisions to the manuscript. JSB provided crucial laboratory support and input into experimental design and
2023-01-18T14:03:46.422Z
2017-03-06T00:00:00.000
{ "year": 2017, "sha1": "9f7e442c430450e55150c6b1eb4403b796a35f66", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s12967-017-1148-1", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "9f7e442c430450e55150c6b1eb4403b796a35f66", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
36157717
pes2o/s2orc
v3-fos-license
Dynamics of the Kitaev-Heisenberg Model We introduce a matrix-product state based method to efficiently obtain dynamical response functions for two-dimensional microscopic Hamiltonians, which we apply to different phases of the Kitaev-Heisenberg model. We find significant broad high energy features beyond spin-wave theory even in the ordered phases proximate to spin liquids. This includes the phase with zig-zag order of the type observed in $\alpha$-RuCl$_3$, where we find high energy features like those seen in inelastic neutron scattering experiments. Our results provide an example of a natural path for proximate spin liquid features to arise at high energies above a conventionally ordered state, as the diffuse remnants of spin-wave bands intersect to yield a broad peak at the Brillouin zone center. Introduction. The interplay of strong interactions and quantum fluctuations in spin systems can give rise to new and exciting physics. A prominent example is that of quantum spin liquids (QSL), as fascinating as they are hard to detect: they lack local order parameters and are instead characterized in terms of emergent gauge fields. On the experimental side, spectroscopic measurements provide particularly useful insights into such systems, in particular by probing the fractionalised excitations (e.g. deconfined spinons) accompanying the gauge field. Such measurements can be related to dynamical response functions, e.g. inelastic neutron scattering to the dynamical structure factor. On the theoretical side, determining the ground state properties of such quantum spin models is already a hard problem, and it is even more challenging to understand the dynamics of local excitations. Here we present a combination of the density-matrix renormalization group (DMRG) ground state method and a matrix-product state (MPS) based dynamical algorithm to obtain the response functions for generic two-dimensional spin systems. With this we are able to access the dynamics of exotic phases that can occur in frustrated systems. Moreover it is also very useful for regular ordered phases where one would conventionally use large-S approximations, which in some cases cannot qualitatively explain certain high energy features [1,2]. We demonstrate our method by applying it to the currently much-studied Kitaev-Heisenberg model (KHM) on the honeycomb lattice, H = Σ_{⟨ij⟩∈γ} K_γ S_i^γ S_j^γ + J Σ_{⟨ij⟩} S_i · S_j. The first term is the pure Kitaev model exhibiting strongly anisotropic spin exchange coupling [3]. Neighboring spins couple depending on the direction of their bond γ with S^x S^x, S^y S^y or S^z S^z (Fig. 1: (a) green, red and blue edges correspond to Kitaev exchange couplings S_i^γ S_j^γ with γ = x, y, z; (b) allowed k-vectors (red lines) for an infinitely long cylinder with circumference L2 = 6 and periodic boundary conditions along N2, with black nodes marking the positions of the gapless Majorana cones). The second is the SU(2)-symmetric Heisenberg term. The KHM serves as a putative minimal model for several materials including Na2IrO3, Li2IrO3 [4], and α-RuCl3 [5]. The pure model is an exactly solvable spin-1/2 model stabilizing two different Kitaev quantum spin liquids (KSL): a gapped Z2 one with abelian excitations ("A phase") and one hosting gapless Majorana and gapped flux excitations ("B phase") [3]. If not stated otherwise, we use the parametrization J = cos α and K_γ = K = 2 sin α. If J = 0 and K_γ is bond-independent, the Kitaev model is in the B phase, which is stable under time-reversal symmetric perturbations as pointed out by Kitaev.
Numerical studies of the ground state phase diagram of the KHM have shown an extended QSL phase for small J and four symmetry-broken phases for larger J [4]. The dynamical response functions of the pure Kitaev model are known exactly and reveal characteristic features [6,7], such as a spectral gap due to a spin flip not only creating gapless Majorana but also gapped flux excitations. This feature is perturbatively stable to small J [8], but the influence of J on high-energy features (or non-perturbatively at low energies) is unclear and of ongoing interest [9]. More pressingly, there appear to be proximate spin liquids [10,11], such as possibly the currently much-studied α-RuCl3 [2,5,11-19], whose low-energy physics is consistent with spin waves on an ordered background, but whose broad high-energy features resemble those of a KSL. In particular, for intermediate energy scales there are star-like features [2] apparently arising from a combination of spin wave and QSL physics. In this article, we first revisit the ground state phase diagram and confirm the previously found phases. The infinite cylinder geometry allows us to numerically confirm that the gaplessness of the KSL is robust throughout the entire phase. Secondly we use a recently introduced MPS based time evolution algorithm [20] to obtain the dynamical spin structure factor. We benchmark our method by comparing to exact results for the Kitaev model and find good agreement. We calculate the spectra of different (non-soluble) phases of the KHM. Most notably, we identify broad high energy continua even in ordered phases that are reminiscent of the broad features observed in recent experiments on α-RuCl3 and which are moreover similar to the high energy features in the spin liquid phase, thus providing a concrete realisation of the concept of a proximate spin liquid. Ground state phase diagram. We use the iDMRG algorithm on the KHM on infinite cylinders to map out the phase diagram. We choose cylinder geometries such that the corresponding momentum cuts contain the gapless Majorana modes of the Kitaev spin liquid. For the pure isotropic Kitaev model, there are gapless Majorana cones on the corners of the first Brillouin zone, Fig. 1b. The full KHM has a C6 symmetry, which means that in the 2D limit these cones cannot shift. The iDMRG method determines the ground state of systems of size L1 × L2 where L1 is in the thermodynamic limit and L2 is a finite circumference of up to 12 sites, beyond what is achievable in exact diagonalization. While traditionally iDMRG is used for finding the ground state of one-dimensional systems, it has become a fairly unbiased method for studying two-dimensional frustrated systems. The resulting phase diagram for L2 = 12 is shown in Fig. 2 (for the iDMRG simulations we keep χ = 1200 states), which agrees with previous studies [4,21-25]. For this L2, the system is compatible with the sub-lattice transformation that maps zigzag to AF and stripy to FM [22]. Plotted are the ground state energy and the entanglement or von Neumann entropy S = −Tr ρ_red log ρ_red of the reduced density matrix ρ_red for a bipartitioning of the cylinder by cutting along a ring. Both the cusps in the energy density and the discontinuities of the entanglement entropy indicate first order transitions. A careful finite size scaling is difficult because of the large bond dimension needed and thus it is not possible to make definite statements about whether the transitions remain first order in the limit L2 → ∞.
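The von Neumann entropy tracked here comes directly from the Schmidt spectrum at the cylinder cut; a minimal sketch of the computation (the Schmidt values below are illustrative, not simulation data):

import numpy as np

def entanglement_entropy(schmidt_values):
    # S = -Tr(rho_red log rho_red) = -sum_i lambda_i^2 log lambda_i^2
    p = np.asarray(schmidt_values, dtype=float) ** 2   # eigenvalues of rho_red
    p = p[p > 1e-15]                                    # drop numerical zeros
    p = p / p.sum()                                     # enforce normalization
    return -np.sum(p * np.log(p))

print(entanglement_entropy([0.9, 0.4, 0.15, 0.1]))      # illustrative spectrum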
The symmetry broken phases can be identified by measuring the local magnetization. We identify a Néel phase (−0.185 < α/π < 0.487) that extends around the pure antiferromagnetic Heisenberg point [26], the corresponding zigzag phase (0.513 < α/π < 0.894), a ferromagnetic phase around the pure FM Heisenberg point (0.894 < α/π < 1.427), and its stripy phase (1.559 < α/π < 1.815). The two KSLs between Néel and zigzag as well as between FM and stripy are confirmed to be gapless. In particular, if L2 is a multiple of six we use the finite entanglement scaling approach [27-29] and extract the expected chiral central charge c = 1 for both KSLs, each of the two Majorana cones contributing c = 1/2. See also appendix B. Note that when a gapless spin liquid is placed on a cylinder, the gauge field generically adjusts to open a gap [30]. In order to see gapless behaviour, we have to initiate the iDMRG simulations in the gapless sector to access a metastable state (see appendix C for additional details). The gapped ground state having a non-zero flux through the cylinder overestimates the stability of the QSL phases. It is notable how well the phase boundaries agree with those from the infinite projected entangled pair state (iPEPS) simulations [21]. Dynamical structure factor S(k, ω). Starting from a ground state obtained using iDMRG, we calculate S(k, ω) by Fourier transforming the dynamical correlation function C^γγ(r, t) = ⟨S_r^γ(t) S_0^γ(0)⟩. The real-time correlations can be efficiently obtained using a recently introduced matrix-product operator based time evolution method [20]. This allows for the long range interactions resulting from unraveling the cylinder to a one-dimensional system, which render standard methods like the time-evolving block decimation inefficient. Following the general strategy laid out in Refs. [32-34], we perform the simulations for an infinite cylinder with a fixed circumference. Note that the entanglement growth and the resulting growth of the required number of states is generically slow as we only locally perturb the ground state, and thus long times can be reached even in the cylinder geometry. We show results obtained for 0 ≤ t ≤ T, and to avoid Gibbs oscillations we multiply our real-time data with a Gaussian (σ_t ≈ 0.43 T). This corresponds to a broadening in ω-space (σ_ω ≈ 2.3/T). We use linear prediction to allow room for the tail of the Gaussian in real-time, but confirm that the final results do not depend on its details [35]. The spectra are normalized such that ∫∫ S^γγ(k, ω) dk dω = ∫ dk. If not stated otherwise, we present results for S(k, ω) = Σ_γ S^γγ(k, ω); a minimal numerical sketch of this windowed Fourier transform is given after this passage. We benchmark the method by comparing our numerical approach to exact results for the pure Kitaev model. Figure 3a shows a comparison for the gapped Kitaev model in the A phase with K_x/K_{y,z} = 6, the exact solution for S^zz(k = 0, ω) shown in black. Our numerics (with resolution σ_ω ≈ 0.06 in units shown) for an infinite cylinder with L2 = 10 (red) agrees well with such features as gap, bandwidth and total spectral weight. In the real-time data (inset), whilst the numerics agrees with the exact solution for the cylinder geometry, it overlaps with the 2D result only until a characteristic time scale corresponding to the perturbation traveling around the cylinder and then feeling the static fluxes inserted by the spin-flip. More generally we expect such timescales (after which 2D physics becomes 1D) to be particularly significant for systems with fractionalization.
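As referenced above, a minimal sketch of the Gaussian-windowed Fourier transform taking the real-time correlations C(r, t) to S(k, ω); the relation σ_ω = 1/σ_t is what turns σ_t ≈ 0.43 T into σ_ω ≈ 2.3/T. Phase conventions, symmetrization in time and linear prediction are omitted, and the input array is a placeholder.

import numpy as np

def structure_factor(C, dt, sigma_t):
    # C[r, t] holds <S_r(t) S_0(0)> on a grid of n_r sites and n_t time steps.
    n_r, n_t = C.shape
    t = np.arange(n_t) * dt
    window = np.exp(-0.5 * (t / sigma_t) ** 2)   # Gaussian envelope of width sigma_t
    C_kt = np.fft.fft(C * window, axis=0)        # real space r -> momentum k
    S = np.fft.fftshift(np.fft.fft(C_kt, axis=1), axes=1) * dt   # time t -> frequency w
    omega = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n_t, d=dt))
    return omega, S

# e.g. a time cut-off T = 10 with sigma_t = 0.43 * T gives a frequency resolution ~ 2.3 / T.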
For Fig. 3b we take K_x = K_y = K_z = −2, being in the gapless KSL phase at α = 3π/2. Comparing the exact 2D result (black) to our numerics for a cylinder of circumference L2 = 6 (red), we see qualitative similarities, such as a spectral gap (dashed lines; slightly obscured by our finite-time window), a dip where the fluxes suppress the van Hove singularity of the Majorana spectrum [6], comparable bandwidth and strong low-energy weight. To better resolve the spectral gap, we rely slightly on linear prediction [35] by using a real-time Gaussian envelope with σ_t = 0.56 T, corresponding to σ_ω ≈ 0.045. Two striking quantitative differences are (i) the spectral gap, which for this circumference is approximately half that of the 2D limit, and (ii) the presence of a delta-peak on this gap (≈ 4% of total spectral weight). The latter, present for any cylinder, vanishes as L2 → ∞. The inset compares exact real-time results on the cylinder [31] with our numerics. Despite the true ground state on this cylinder being gapless and MPS only being able to capture gapped ground states exactly, we still find good agreement for appreciable times. After this benchmarking, we explore S(k, ω) in different phases of the KHM shown in Fig. 4, all with σ_ω ≈ 0.06. The pure Heisenberg FM (α = π) can be solved in terms of linear spin wave theory (LSWT) and numerically captured with bond dimension χ = 2. Instead of this special point, in Fig. 4a we show results for α = 1.1π (corresponding to K = 0.65 J) where we still find excellent agreement with LSWT. Note that there is an extremely small gap (≈ 0.05|J|) despite the presence of anisotropic couplings, as the entire KHM is SU(2)-symmetric in LSWT. We do not observe any strong cylinder effects on the dynamics, which is presumably related to the short correlation length and the absence of fractional excitations. The pure Heisenberg AFM (with small XXZ anisotropy) in Fig. 4b shows appreciable deviations from LSWT, with second order SWT [36] giving better agreement. Moreover, the weight in the spin waves is approximately halved, indicating the importance of higher order magnon contributions. Staying within the Néel phase but approaching the QSL, spin wave theory cannot even qualitatively describe Fig. 4c, with much weight in very broad high energy features unaccounted for. Lastly we focus on a parameter regime producing zigzag ordering like that found in α-RuCl3 [2,11,12]. Fig. 5 shows S(k, ω) for four different choices of α: the first row contains the exact solution for the pure AFM Kitaev model, and the subsequent rows are all numerical results within the zigzag phase with increasing α. For each α we show S(k, ω) at fixed ω: the columns display representative low-, mid- and high-energy features, with parameters L2 = 12 and time cut-off T = 10 corresponding to σ_ω ≈ 0.23. We average over the different symmetry broken directions. In appendix D, we show results for L2 = 6 and T = 40, revealing that even at this resolution the high-energy features stay very broad. The first column shows the low-energy physics of the Kitaev model being reconstructed into spin wave bands, with minima on the edges of the first Brillouin zone. For α = 0.7π, 0.8π these obey the C6-symmetry, indicating that the cylinder geometry locally looks like 2D. Interestingly, the high-energy physics of the ordered phases is very similar to that of the pure Kitaev model: we have broad features centered around k = 0 which are diffuse w.r.t.
ω, with their characteristic energy and width simultaneously decreasing as α increases. The interplay between these low- and high-energy features then gives rise to different mid-energy shapes. In fact the six spin wave bands start on the edges of the first Brillouin zone. As the energy increases, these bands become increasingly diffuse, eventually overlapping in a very broad blob above the symmetric Γ point k = 0. Both spin waves and blob sharpen as one moves away from the nearby QSL. Comparing with inelastic neutron data for α-RuCl3 [2], we find the best qualitative agreement in Fig. 5 around α = 0.7π. In particular at intermediate energies there is a six-pointed star whose arms point towards the edges of the first Brillouin zone. It is interesting to note that if we do not average over different symmetry broken directions, the low-energy physics strongly breaks the C6 symmetry yet the six-pointed star at intermediate energies persists: thus even if we interpret these high energy features as the overlap of broad spin waves, at this point the effect of symmetry breaking has disappeared. Under what conditions such a symmetry restoration occurs more generally is an interesting question. Conclusion. We have presented a new method for obtaining the dynamical properties of generic lattice spin models in (quasi-)two dimensions, which we expect to be useful for many future studies. In the KHM, our study reveals several features beyond spin-wave theory even in the ordered phases, providing a more detailed picture for the concept of a proximate spin liquid as potentially realised in α-RuCl3. Acknowledgements. We are grateful to Roser Valenti, Mike Zaletel and Johannes Knolle for stimulating discussions. In particular we thank Johannes for providing unpublished data for the dynamical correlations of the isotropic Kitaev model on the cylinder. This work was supported in part by DFG via SFB 1143 and Research Unit FOR 1807 through grants no. PO 1370/2-1. Appendix A: 1D vs 2D physics: symmetry breaking From Monte-Carlo studies [38] it is known that the ground state of the Heisenberg antiferromagnet (AFM) on the honeycomb lattice displays symmetry breaking Néel order. However, when we place the Heisenberg AFM on an infinitely long cylinder of finite circumference, it is in principle a 1D system and the presence of a continuous symmetry in fact forbids spontaneous symmetry breaking [37]. Instead we numerically find a gapped state which preserves both spin rotation and translation symmetry. This is analogous to the results for stacking an even number of coupled spin-1/2 Heisenberg chains [39]. The transition from 1D to 2D can be understood by noting that this symmetry-preserving state is effectively Néel-like within a correlation length ξ, the latter growing with circumference. Similarly to how one determines spontaneous symmetry breaking from finite size scaling in the context of exact diagonalization, one can conclude that the 2D limit achieves Néel order by scaling with respect to circumference. The presence of a gap implies this symmetry-preserving state is stable under SU(2)-breaking perturbations. For example for L2 = 6 it extends over −0.2π ≤ α ≤ 0.43π, with Néel order arising for larger α until we hit the spin liquid. The stability of this symmetry-preserving state under Kitaev perturbations is presumably related to the fact that the Néel order which arises in the 2D limit would have a very small spin gap.
This is different for XXZ-type perturbations, which induce Néel order for relatively small anisotropies as shown in Fig. 6 (with ∆ = 1.1), where our state is numerically converged (for large χ) and the physics quickly becomes independent of circumference. The DMRG simulations use a parameter χ which gives an upper bound on the entanglement. By limiting χ we can find a variational state with ξ < L2. Locally this state then looks 2D and hence we can have symmetry breaking even for the SU(2)-symmetric Heisenberg model, as confirmed in Fig. 6. As we increase χ, eventually ξ becomes of the order of L2, which signals the transition of 2D to 1D physics and the symmetry-preserving state arises. For L2 = 12 the necessary ξ is already out of reach, explaining the effective Néel order we see in Fig. 2. Similarly, in the zigzag phase there is an extended region with a gapped symmetry-restored ground state. This is in keeping with the sublattice transformation, which maps the zigzag to the Néel phase (in particular α = 3π/4 maps onto α = 0). The entanglement entropy S scales logarithmically with the correlation length ξ. In the MPS formalism, this is known as finite-entanglement scaling, with S_χ = (c/6) log ξ_χ, where χ is the bond dimension of the MPS and c is the chiral central charge [28,29]. Fig. 7 shows S and log ξ for various MPS bond dimensions χ of up to 1024. The lines serve as a guide to the eye corresponding to a slope with c = 1. We observe a good match of the scaling for the pure Kitaev spin liquid at α = 3π/2. This reflects the fact that the KSL can be mapped to a free fermion problem with two Majorana cones in the first Brillouin zone, each contributing 1/2 to the central charge. The gapless nature persists within the whole KSL phase and the scaling suggests c = 1. We consider the loop operator W_l = Π_{i∈l} σ_i^{γ_i} along a closed loop l around the cylinder, where γ_i ∈ {x, y, z} corresponds to the bond that is not part of the loop at site i. Following Kitaev [3], W_l can be expressed in terms of Z2 gauge field variables u_jk, which define a second loop operator W̃_l. For our choice of lattice periodicity, both loop operators are related by a minus sign. Thus, W̃_l → +1 (periodic boundary condition of the fermions) translates to W_l → −1, which corresponds to the gapless sector if the cylinder is chosen such that cuts in reciprocal space go through the nodes of the Majorana cones. The second sector (antiperiodic boundary condition of the fermions) is always gapped and has a lower ground state energy than the gapless sector. Regarding the computation of the ground state, we can now make use of the loop operator and initialize DMRG with a state |ψ⟩ that has ⟨ψ|W_l|ψ⟩ = ±1 depending on the desired sector. Table I contains the phase transitions for the gapped and the gapless sector and compares them to exact diagonalization (ED) and infinite projected entangled pair states (iPEPS). As the gapped sector has a lower energy, its stability is enhanced, widening the KSL phase. This effect is more pronounced for a small circumference L2 = 6. Appendix D: Dynamics of the L2 = 6 cylinder In Fig. 8 we show S(k = 0, ω) for the same choices of α as in Fig. 5, but now with a sharper ω-resolution (corresponding to T = 40) which is possible due to the smaller circumference (L2 = 6). The finer features are most likely discretization effects due to the finite circumference, but the main points are that the broadness in ω-space persists despite a finer resolution, and that the high-energy feature gets squeezed downward as we get further away from the nearby spin liquid.
Note that the latter is a meaningful statement and not just due to an overall α-dependent scaling of the Hamiltonian since the minima of the spin bands (as shown in the first column of
2017-01-17T14:02:52.000Z
2017-01-17T00:00:00.000
{ "year": 2017, "sha1": "07b8395a100cb8a4da2a9a2e798c3dacd1e6c013", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1701.04678", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "07b8395a100cb8a4da2a9a2e798c3dacd1e6c013", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
21692043
pes2o/s2orc
v3-fos-license
Effects of feeding whole linseed on ruminal fatty acid composition and microbial population in goats The objective of the present study was to evaluate the effect of feeding different levels of whole linseed, as a source of n-3 polyunsaturated fatty acids (PUFA), on ruminal fatty acid composition and microbial population in the goat. Twenty-four crossbred Boer goats were assigned to 3 dietary treatments: L0 (control), L10 and L20, containing 0, 10%, or 20% whole linseed, respectively. The ruminal pH and concentration of total volatile fatty acids (VFA) were not affected by dietary treatments. The feeding of L10 and L20 diets produced higher (P < 0.05) molar proportions of acetate and lower (P < 0.05) molar proportions of butyrate and valerate than the L0 diet. Molar proportions of myristic acid (C14:0) and palmitic acid (C16:0) were lower (P < 0.05) in the rumen of goats offered L10 and L20 diets than the control diet. However, stearic acid (C18:0), vaccenic acid (C18:1 trans-11), conjugated linoleic acid (CLA, C18:2 trans-10, cis-12) and α-linolenic acid (C18:3 n-3) were higher (P < 0.05) in the rumen of goats fed L10 and L20 than L0. Both inclusion levels of linseed in the diet (L10 and L20) reduced the ruminal total bacteria, methanogens, and protozoa compared with L0 (P < 0.05). The effect of the dietary treatments on cellulolytic bacteria varied between the individual species. Both inclusion levels of linseed resulted in a significant decrease (P < 0.05) in the populations of Fibrobacter succinogenes and Ruminococcus flavefaciens compared with L0, with no significant difference between the groups fed linseed diets. The population of Ruminococcus albus was not affected by the different dietary treatments. It was concluded that inclusion of whole linseed in the diet of goats could increase the concentration of PUFA in the rumen, and decrease the populations of F. succinogenes, R. flavefaciens, methanogens and protozoa in the rumen liquid of goats. Introduction Feeding animals with sources of polyunsaturated fatty acids (PUFA) has been of interest in animal nutrition to enhance these beneficial fatty acids (FA) in animal products, specifically n-3 PUFA, which have been associated with significant physiological and health benefits in human populations. Compared with monogastric animals, increasing PUFA in ruminant products is more challenging, since most of the PUFA in the animal diet are hydrogenated by the rumen microorganisms. Yet, the inclusion of PUFA sources in diets of ruminants has been shown to increase the concentration of n-3 PUFA in their meat (Palmquist, 2009). Furthermore, incomplete biohydrogenation of linoleic acid (LA) and α-linolenic acid (ALA) results in the formation of conjugated linoleic acid (CLA) isomers (Lee and Jenkins, 2011). CLA is now well known as an anticarcinogenic, anti-atherosclerotic, antimutagenic, antioxidant, antibacteriogenic, anti-diabetogenic, immunomodulatory, and anti-obesity agent (Waghmare, 2013). Similar to CLA, vaccenic acid is an intermediate product of the microbial biohydrogenation of LA and ALA (Harfoot and Hazlewood, 1997). The increase of vaccenic acid in animal products is desirable since it serves as a precursor in the biosynthesis of CLA (Griinari et al., 2000), and may exert benefits similar to those related to CLA in humans (Field et al., 2009).
However, the presence of excessive amounts of PUFA in the rumen has the potential to radically disturb ruminal pH, volatile fatty acids (VFA) and microorganism survivability, which play a principal role in the overall process of ruminal fermentation (Machmüller et al., 1998; Maia et al., 2010). However, the type and source of PUFA fed to ruminants might have different impacts on rumen fermentation and microbial populations (Ivan et al., 2012; Liu et al., 2012). Also, feeding plant-based PUFA in the form of whole seeds might have less adverse effects on rumen fermentation than feeding free oils (Palmquist, 1995). The effect of PUFA on ruminal microbes differs depending on the type of microorganism. For example, protozoa are more sensitive to dietary PUFA than bacteria. Polyunsaturated fatty acids may cause either total defaunation or a significant reduction in the rumen protozoa population (Ivan et al., 2001). Within the bacterial species, feeding fish oil (Liu et al., 2012) or plant-based PUFA has had different effects on the growth of various species (Zhang et al., 2008; Ivan et al., 2012). Linseed (Linum usitatissimum) is considered a leading source of plant-based n-3 FA (Legrand et al., 2010), because it contains about 40% oil, with a high level of ALA (50% to 60% of total FA) (Legrand et al., 2010). Also, linseed contains a lower concentration of LA and saturated FA (SFA) compared with other oilseeds such as soybeans, cottonseed, corn, and sunflowers (Maddock et al., 2005). Numerous studies have been undertaken to enhance n-3 PUFA content in ruminant meat and milk by feeding linseed (Abuelfatah et al., 2014). However, reports documenting the effects of feeding whole linseed on ruminal microorganisms are rare. Therefore, the objective of the present study was to evaluate the influence of feeding different levels of whole linseed, as a source of ALA n-3 PUFA, on the ruminal microbial population of goats, using real-time polymerase chain reaction (RT-PCR). We also tested the effect on ruminal pH, FA and VFA. Experimental animals, housing, feeds and feeding Ruminal samples in the present experiment were collected at the end of a feeding trial conducted in the small ruminant research unit, University Putra Malaysia. The experimental procedures have been described in detail by Abuelfatah et al. (2013). Briefly, twenty-four 5-month-old crossbred Boer bucks with initial body weight (mean ± SE) of 14.23 ± 0.33 kg were housed in individual pens. After 3 weeks of adaptation, goats were randomly divided into 3 equal groups of 8 animals each, and assigned to one of the 3 dietary treatments. The dietary treatments contained either 0 (L0), 10% (L10) or 20% (L20) whole linseed. The diets, ingredients and chemical and FA composition are presented in Table 1. At the end of the feeding experiment, which lasted for 110 days, all animals were slaughtered after overnight fasting. Animal care, handling techniques, and slaughter procedures were approved by the University Putra Malaysia Animal Care and Use Committee. Proximate analysis of feed The proximate analysis of the experimental feed was performed following the standard methods of the Association of Official Analytical Chemists (AOAC, 2007). Briefly, feed samples were dried in a forced-air oven for 24 h at 105 °C to determine dry matter (DM). Nitrogen was determined by Kjeltec Auto Analyzer and then converted to crude protein (CP = N × 6.25).
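The dry matter and crude protein determinations just described reduce to simple arithmetic; a minimal sketch with hypothetical sample weights and nitrogen content (not values from the study):

def dry_matter_pct(fresh_g, dried_g):
    # DM% from sample weights before and after drying 24 h at 105 C
    return 100.0 * dried_g / fresh_g

def crude_protein_pct(nitrogen_pct):
    # AOAC convention: CP = N x 6.25
    return nitrogen_pct * 6.25

print(dry_matter_pct(5.00, 4.55))    # 91.0 (% DM, hypothetical sample)
print(crude_protein_pct(2.4))        # 15.0 (% CP, hypothetical N content)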
Ether extract (EE) was determined by extracting the sample with petroleum ether (40 to 60 °C) using a Soxtec Auto Analyzer. Neutral detergent fiber (NDF), acid detergent fiber (ADF) and acid detergent lignin (ADL) were determined by the methods outlined by Van Soest et al. (1991) without adding alpha amylase and sodium sulfite. Values for NDF and ADF were expressed inclusive of residual ash. Samples were ashed in a muffle furnace at 550 °C for 4 h to determine the ash content. Each analysis was performed in triplicate. Rumen content sampling and pH measurement Following animal slaughter, the esophagus was tied with nylon strings to conserve the ruminal environment until sampling time, which occurred directly upon evisceration. Rumen content was taken and squeezed through double-layered gauze to remove the feed particles. About 100 mL of liquor was obtained from each animal. The pH of the rumen liquid was measured instantly using a pH meter (Mettler-Toledo Ltd., England). The samples were stored at −80 °C for FA and VFA analysis, and microbial quantification. FA and VFA determination For FA analysis of rumen liquor, 2 mL of sample was used. Ruminal fatty acid composition was determined following the procedure described by Abuelfatah et al. (2014). The VFA contents of the rumen liquor were measured using gas–liquid chromatography. The fixed rumen liquor (using metaphosphoric acid, 4:1, vol/vol) was centrifuged at 15,000 × g at 25 °C for 20 min, and 0.5 mL of the supernatant was taken and added to an equal volume of internal standard (4-methyl-n-valeric acid, Sigma Chemical Co., St. Louis, Missouri, USA). The separation was conducted on a bonded-phase fused silica capillary column, 15 m, 0.32 mm ID, 0.25 µm film thickness (Quadrex 007 Series, Quadrex Corporation, New Haven, CT 06525 USA) in an Agilent 7890A gas–liquid chromatograph (Agilent Technologies, Palo Alto, CA, USA). The injector and detector temperatures were set at 220 and 230 °C, respectively. The column temperature was adjusted in the range of 70 to 150 °C with temperature programming at the rate of 7 °C/min increments to assist optimum separation. Peak identification was achieved by comparison with accurate commercial standards of acetic, propionic, butyric, isobutyric, valeric, and isovaleric acids (Sigma Chemical Co., St. Louis, Missouri, USA). DNA extraction The DNA was extracted from rumen liquor using the QIAamp DNA Stool Mini Kit (Qiagen, Hilden, GmbH, Germany) following the manufacturer's protocol with a few modifications as described by Abubakr et al. (2014). Real-time PCR was conducted using the Bio-Rad CFX96 Touch (Bio-Rad Laboratories, Inc., Hercules, CA, USA) with fluorescence detection of SYBR Green dye using MicroAmp tube strips and MicroAmp Optical Cap Strips. Primers used to quantify the populations of the various microbial groups are presented in Table 2. The PCR reaction was carried out in a total volume of 25 μL using the iQ SYBR Green Supermix assay (Bio-Rad, USA). Each reaction comprised 12.5 μL SYBR Green Supermix, 1 μL of each primer, 2 μL of DNA sample and 8.5 μL H2O. The reaction settings for DNA amplification were one cycle at 95 °C for 5 min for initial denaturation, followed by 40 cycles of 95 °C for 30 s, the annealing temperatures for the various primers as described in Table 2 for 30 s, and then 72 °C for 30 s. To confirm the specificity of amplification, melting curve examination was performed after the last amplification cycle. Detection of the fluorescent product was set at the last step of each cycle. Standards were prepared from plasmid DNA of each microbial group. The concentration of the extracted DNA was measured using a UV spectrophotometer. The number of copies of template DNA per μL of elution buffer was calculated online (http://scienceprimer.com/copy-number-calculator-for-realtime-pcr) based on the following formula: Number of copies = [Amount of DNA (ng/μL) × 6.022 × 10^23] / [length (bp) × 10^9 × 660].
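The copy-number formula above is straightforward to reproduce in code; a minimal sketch, assuming the DNA concentration is given in ng/μL (consistent with the 10^9 ng-to-g factor) and using the conventional average mass of 660 g/mol per base pair:

AVOGADRO = 6.022e23

def copies_per_ul(dna_ng_per_ul, length_bp):
    # copies/uL = (ng/uL x 6.022e23) / (length_bp x 1e9 x 660);
    # 1e9 converts ng to g, 660 g/mol is the average mass of one base pair.
    return dna_ng_per_ul * AVOGADRO / (length_bp * 1e9 * 660)

# Hypothetical plasmid standard: 4,000 bp at 10 ng/uL
print(copies_per_ul(10, 4000))   # ~2.3e9 copies per uL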
Standard curves were created by serial dilution of plasmid DNA of each microbial group (Faseleh Jahromi et al., 2013). Statistical analysis Data on rumen fermentation parameters and microbial populations were subjected to one-way analysis of variance using the GLM procedure of SAS (SAS, 2003). Microbial data which did not meet the normality requirement were log10-transformed before analysis. Least-square means were computed and tested for differences by Duncan's multiple range test. Differences between least-square means were considered significant at P < 0.05, and data are presented as means ± standard errors. VFA and pH of rumen liquor The VFA and the rumen pH of goats fed diets containing different levels of whole linseed are presented in Table 4. The concentration of total VFA in the rumen and the pH were not affected by dietary treatments. However, whole linseed inclusion in the diet of goats significantly increased (P < 0.05) the molar proportion of acetate and decreased (P < 0.05) the molar proportions of butyrate and valerate, with no effects on the other individual VFA. Rumen microbial populations The effects of feeding different levels of whole linseed as a source of ALA n-3 PUFA on the rumen microbial populations of goats are presented in Table 5. In the present study, the total bacteria in the rumen were significantly affected by the dietary treatments. The concentration of total bacteria was lower (P < 0.05) in the rumen of goats fed linseed diets (L10 and L20) than in those fed the control diet (L0). However, no significant difference (P > 0.05) was observed between L10 and L20. Among the individual cellulolytic species, Ruminococcus albus was not affected by dietary treatments, whereas the concentrations of Fibrobacter succinogenes and Ruminococcus flavefaciens were lower (P < 0.05) in goats that received L10 and L20 than in those fed L0. Similar to total bacteria and the cellulolytic bacteria species, the populations of total methanogens and protozoa were reduced (P < 0.05) at both inclusion levels (L10 and L20) compared with L0, with no differences between L10 and L20. (Table footnotes: values with different superscripts within a row differ significantly at P < 0.05. (1) Data are presented as means ± SEM of total fatty acids (g/100 g). (2) SFA = C12:0 + C14:0 + C15:0 + C16:0 + C17:0 + C18:0. (3) UFA = C14:1 + C16:1 + C17:1 + C18:1 n-9 trans + C18:2 + C18:3 + C20:4 + C22:6 + C20:5 n-3 + C22:5 n-3 + C22:6 n-3. (4) MUFA = C14:1 + C15:1 + C16:1 + C17:1 + C18:1 n-9 trans + C18:1 n-9 cis + C18:1 n-7 + C20:1 n-9. (5) PUFA n-3 = C18:3 n-3 + C20:5 n-3 + C22:5 n-3 + C22:6 n-3. (6) PUFA n-6 = C18:2 n-6 + C20:2 n-6 + C20:3 n-6. (7) Total CLA = C18:2 cis-9, trans-11 + C18:2 trans-10, cis-12.) Discussion Inclusion of sources of PUFA in animal diets is aimed mainly at increasing these beneficial FA in animal products. In our previous studies, it has been reported that inclusion of whole linseed in diets
resulted in increasing the proportion of ALA and total n-3 PUFA in goat muscles and adipose tissues as the inclusion level of linseed increased (Abuelfatah et al., 2014). The growth performance and apparent digestibility were not affected by inclusion of linseed at a level of 10% or 20%; however, at the level of 20%, feed intake was negatively affected (Abuelfatah et al., 2013). The objective of this study was to examine the effects of feeding different levels (0%, 10% or 20%) of whole linseed, as a source of n-3 PUFA, on the FA composition of ruminal digesta and some microbial populations. The proportions of palmitic acid (C16:0) in the rumen digesta of the experimental groups mirror those of their diets (Tables 1 and 3). The greatest concentration of C16:0 in the rumen occurring in animals fed the diet with the highest C16:0 concentration was also reported by Kim et al. (2007). However, stearic acid (C18:0) was offered in the diets at a low proportion (3.23% to 5.92% of total FA), but it represented the major FA in the rumen digesta (48.69% to 67.32% of total FA) (Table 2). However, the 18-carbon UFA (C18:3 n-3, C18:2 n-6, and C18:1 n-9), which represent the major FA in the experimental diets, were detected in low proportions in the rumen digesta. The increase in C18:0 and decrease in 18-carbon unsaturated fatty acids (UFA) indicated that a considerable amount of 18-carbon UFA was subjected to biohydrogenation, since C18:0 is the end product of biohydrogenation of these FA (Harfoot and Hazlewood, 1997; McKain et al., 2010). However, the significantly higher proportion of ALA in the rumen digesta of goats fed linseed compared with L0 indicates that feeding whole linseed as a source of ALA provided partial protection from biohydrogenation, even though the digesta were collected after 24 h of animal feeding. The significant increment in the proportions of vaccenic acid (C18:1 trans-11) and C18:2 trans-10, cis-12 CLA in animals fed linseed diets was expected, since these trans-FA are intermediate products in the biohydrogenation of unsaturated 18-carbon FA (Harfoot and Hazlewood, 1997; Kim et al., 2007; Lee and Jenkins, 2011). However, the cis-9, trans-11 CLA was not significantly affected by the level of linseed, because the cis-9, trans-11 CLA is the main CLA isomer during the biohydrogenation of LA rather than ALA (Lee and Jenkins, 2011). We noted that the pattern of FA composition of ruminal digesta of the experimental animals in this study resembles the pattern of FA composition of muscles taken from the same animals. The data related to FA composition of muscles have been published (Abuelfatah et al., 2013, 2016). It is well known that ruminal pH is an important characteristic for assessing fermentation in the rumen (Liu et al., 2012). In the present study, the absence of any influence of feeding whole linseed on ruminal pH and total concentration of VFA in goats (Table 4) agrees with previous studies in lactating dairy cows fed diets containing about 10% (wt/wt) of crushed sunflower, flax, or canola seeds (Beauchemin et al., 2009), and in sheep fed diets containing linseed oil (Ueda et al., 2003; Kim et al., 2007). In contrast, Czerkawski et al. (1975) reported decreased total VFA concentration when the diet was supplemented with 90 g linseed oil/d. Differing effects of PUFA on the proportions of individual VFA were also reported by Machmüller et al. (2000) and Soder et al. (2012). The reduction in the molar proportion of butyrate in the current study agrees with Beauchemin et al.
(2009), who found that the molar proportion of butyrate decreases with oilseed supplementation. Ruminal microorganisms (bacteria, protozoa, and fungi) establish the key link between the diet and the ruminant animal (Weimer et al., 1999). The VFA resulting from the fermentation activity of these microorganisms, as well as the microbial protein, are digested and absorbed by the host for growth and production. Therefore, the study of rumen microbiology is fundamental for a greater understanding of feed utilization and metabolic disorders of ruminants. Studies on the effects of lipids, especially PUFA, on rumen microbes have attracted considerable interest, not only due to the human health aspects but also for environmental issues. In the current study, the reduction in the population of total bacteria in L10 and L20 compared with L0 can be attributed to their high lipid content. Dietary lipids could inhibit the growth of bacteria in the rumen (Harfoot and Hazlewood, 1997; Maia et al., 2010). The total rumen bacteria were not affected by feeding oilseeds containing high concentrations of LA and ALA in cattle (Ivan et al., 2012), or linseed oil in goats (Ebrahimi, 2012), when the lipid content of the diet was similar to or less than the control, but the effect of PUFA on bacterial populations is not the same. Bacterial populations that are relevant for fiber digestion and biohydrogenation have been found to be sensitive to PUFA. Therefore, the impact of PUFA supplementation on ruminal bacteria should be assessed by examining specific bacterial species rather than the total number of bacteria (Liu et al., 2012). The results of the present experiment indicated different effects of feeding whole linseed on the selected strains of rumen bacteria. F. succinogenes, R. flavefaciens, and methanogens were strongly inhibited by inclusion of whole linseed, whereas the population of R. albus was not affected negatively by the treatment diets. The reduction in the population of F. succinogenes has been reported previously in cattle fed dietary PUFA (Ivan et al., 2012; Liu et al., 2012). The effect of PUFA on R. flavefaciens varied among comparable studies. Ebrahimi (2012) reported a reduction in R. flavefaciens in goats fed linseed oil. In contrast, Ivan et al. (2012) reported an increase in the R. flavefaciens population in dairy cattle. The different findings can be attributed to the concentration of PUFA in the rumen. The growth of R. flavefaciens increases when PUFA in the rumen are at a low level, but decreases when these acids are fed at higher levels (Zhang et al., 2008). R. albus, which is the most important cellulolytic bacterium, was not affected negatively by the treatment diets in the current study. This finding is in agreement with Zhang et al. (2008) and Liu et al. (2012). However, Ivan et al. (2012) and Ebrahimi (2012) reported increases in the population of R. albus when cattle and goats were fed PUFA, respectively. The production of methane during fermentation in the rumen is an energy loss to ruminants but also has a potential impact on the environment (Moss et al., 2000). Therefore, decreasing methane production in ruminants has significance for efficient animal production and for global environmental protection (Zhang et al., 2008). Methane is produced by the metabolic activity of the methanogens in the rumen (Tan et al., 2011).
Protozoa are also involved in methanogenesis, because protozoa produce H2, which is utilized by methanogens to produce methane (Vogels et al., 1980). In this study, both inclusion levels of linseed in the diets significantly decreased the populations of methanogens and protozoa in the rumen liquid of goats. In general, adding lipids to the diets of ruminants is a promising strategy to decrease methane emissions because of the toxic effects of free FA on both methanogens and protozoa (Van Nevel and Demeyer, 1996; Maia et al., 2010). The limited ability of methanogens and protozoa to absorb and transform lipids leads to swelling and consequent rupture of the protozoal cells (Girard and Hawke, 1978).

Conclusion

Inclusion of linseed in the diet of goats at either 10% or 20% increases the molar proportion of acetate and decreases the molar proportions of butyrate and valerate. Feeding linseed also promotes changes in rumen microbial populations, such that both inclusion levels significantly decreased the populations of F. succinogenes, R. flavefaciens, methanogens and protozoa in the rumen liquid of goats.
Many faces of Wilms Tumor: Recent advances and future directions

Background

Wilms' tumor (WT) is the most frequently occurring paediatric renal tumor and is one of the most treatment-responsive tumors. A tumor-suppressor gene and other genetic abnormalities have been implicated in its etiology. In addition, patients with many congenital anomalies, such as Beckwith-Wiedemann syndrome, WAGR syndrome and Denys-Drash syndrome, have an increased risk of WT.

Methods and results

Two large collaborative groups, the National Wilms Tumor Study Group (NWTSG)/Children's Oncology Group (COG) and the International Society of Paediatric Oncology (SIOP), have laid down guidelines for the standardized treatment of WT, though they differ in diagnostic and therapeutic approach. The major difference between the two guidelines is the timing of surgery: SIOP recommends preoperative chemotherapy, whereas NWTSG/COG prefers primary surgery before any adjuvant treatment. Both groups currently aim to intensify treatment for patients with poor prognosticators while tailoring therapy to reduce long-term complications in those with favourable prognostic features. As the survival rate has now reached 90%, the primary objectives of the physician are to perform nephron-sparing surgery in selected cases and to reduce the dosage and duration of chemotherapy and radiotherapy in appropriate cases. The purpose of this review is to present current standards of diagnosis and treatment of WT around the world.

Conclusion

Future studies should examine the use of chemotherapy and radiotherapy under risk-stratified strategies. Further improvement in the survival of these children can only be achieved through increased awareness, early recognition, appropriate referral, and a multidisciplinary approach.

Introduction

Renal tumors are the fifth most common tumors in children, and Wilms tumor (WT; nephroblastoma) is the most common paediatric renal tumor, accounting for about 85% of cases [1]. The incidence, growth rate, type and response to treatment of renal tumors in children differ significantly from adult renal cancers. Renal tumors in adults are mostly carcinomas, whereas in children they are of embryonic origin and thus grow rapidly. Renal cell carcinomas, sarcomas and other tumors of the kidney are extremely rare in children. Moreover, childhood renal tumors respond better to treatment than adult tumors.

Wilms tumor was first reported by Thomas F. Rance in 1814. However, Max Wilms, a German surgeon and pathologist, gave the detailed description, adding seven new patients of his own in 1899, and the tumor has borne his name since then [2]. It is primarily a disease of the kidney, but a few rare extrarenal locations have been reported, such as the retroperitoneum, sacrococcygeal region, testis, uterus, inguinal canal and mediastinum [3]. Wilms tumor cells are believed to derive from pluripotent embryonic renal precursor cells; it is thus an embryonic renal tumor. While most are sporadic tumors, approximately 10% of cases are associated with genetic syndromes and extrarenal manifestations. There has been a dramatic improvement in overall survival rates due to the coordinated use of modern surgical technique and anaesthesia, multiple-drug chemotherapy and radiotherapy [4].
Large multidisciplinary cooperative cancer groups, namely the Children's Oncology Group (COG) and the Société Internationale d'Oncologie Pédiatrique (SIOP), have laid down guidelines for the standardized treatment of this entity and have thus achieved a 5-year survival rate of more than 90%. This article reviews the genetics, imaging, histopathology and evolving treatment strategies of WT.

Epidemiology

Childhood cancers are uncommon, constituting 0.5-1% of all cancers, but they are still the major cause of disease-related death in children. WT represents 6% of all childhood malignant tumors [5]. The incidence of WT is about 1 per 10,000 children in Europe and North America. There is a minor racial difference in the incidence of Wilms tumors: the Asian population has about half the incidence rate of Western countries (3-4 per million children per year), and the rate in the black population is 2.5 times higher [6]. The incidence of WT in other countries is similar to that in the USA. In Turkey, childhood renal tumors represent 7.1% of all childhood tumors [7]. The population-based incidence rate in a part of Italy was 4.5% for WT [8]. This tumor is seen mostly in children between the ages of 1 and 5 years, with a peak age of 3. Although adult patients with WT have been reported, it is extremely rare in people older than 15 years of age [9]. COG data show that the median age at onset is 38 months, with girls presenting about 6 months later than boys. In most populations, no gender difference has been found; however, in some Asian countries females are more likely to have WT than males. For bilateral tumors, the median age at presentation is 29.5 months for males and 32.6 months for females. The male-to-female ratio is 0.92 for unilateral tumors and 0.6 for bilateral tumors. Most patients present before 5 years of age. WT is bilateral at presentation in 4%-8% of cases [10].

Molecular biology and genetics of Wilms tumor

Wilms tumor (hereditary or sporadic) appears to result from changes in one or more of several genes. WT1 and WT2 gene deletions are the two frequent genetic abnormalities in WT. A "two-hit model" similar to that of retinoblastoma has been proposed, indicating a recessive mutation in the etiology of this tumor. Apart from that, epigenetic alterations affecting the 11p15 locus are associated with a selective increase in WT risk.

Genes and proteins involved

WT1: This was the first gene identified in WT and is responsible for development of the genitourinary system. It is a tumor-suppressor gene located on chromosome 11p13. Its expression is seen in the kidney, gonads, spleen and mesothelium. It encodes four zinc-finger transcription factors that have regulatory functions in cell growth, differentiation and apoptosis. Normal WT1 expression is necessary for the maturation of blastemal cells, and reduced WT1 expression is associated with stromal-predominant WT. It is deleted in WAGR and Denys-Drash syndrome [11].

WT2: This gene is located on chromosome 11p15 and is implicated in Beckwith-Wiedemann syndrome [12]. Some functions of this gene are related to insulin-like growth factor 2 (IGF2), which encodes an embryonal growth factor.

Other genetic abnormalities: Other genes believed to be involved in WT development are CTNNB1 (beta-catenin), IGF2/H19 and GPC3 (glypican 3; the Simpson-Golabi-Behmel gene). Another interesting observation concerns Mulibrey nanism (muscle-liver-brain-eye nanism, MUL). MUL is an autosomal recessive disorder that involves several tissues of mesodermal origin.
About 4% of MUL patients develop Wilms tumor. There are rare inherited mutations. In addition to the above genes, p57Kip2 is overexpressed or mutated in some patients; p57Kip2 encodes a cyclin-dependent kinase inhibitor and is a putative tumor suppressor [13]. Beta-catenin is a cellular adhesion molecule that promotes overexpression of c-myc and cyclin D1. Beta-catenin mutations have been detected in 15% of patients with WT, and there is a strong correlation between reduced expression of the WT1 gene and beta-catenin mutations.

Familial Wilms tumor has been found in 1-2% of cases. Although WT1 is a known WT gene, some familial tumors show linkage to 17q, and this locus has been named FWT1. Some such tumors have demonstrated a 19q anomaly, which has been described as FWT2 [14].

The p53-encoded protein acts as a cell-cycle checkpoint protein that arrests cell growth in G1. This gene regulates cell proliferation and induces apoptosis; its inactivation results in genomic instability and cytogenetic aberrations (e.g. aneuploidy, translocations, deletions and gene amplification). Mutations of TP53 occur in 5% of Wilms tumors overall but have been found in 75% of patients with anaplastic histology. TP53 abnormalities do not appear to be associated with the stage of diffuse anaplastic WT (DAWT), but they are associated with significantly worse disease-free and overall survival (OS) for patients with stage III or IV DAWT. In addition to alterations at the TP53 locus, molecular profiling has demonstrated significant associations between anaplastic histology and loss of 4q and 14q [15].

Other chromosomal abnormalities, such as loss of heterozygosity (LOH) of 16q, 1p and 7p, have been identified [14]. LOH at 1p and/or 16q is associated with relapse, death and overall poor prognosis, and has resulted in poor outcomes even in patients with favourable histology WT. Copy number gain of chromosome 1q is a commonly observed genetic abnormality in WT and is present in approximately 30% of tumors. After several smaller retrospective studies suggested a correlation between 1q gain and tumor recurrence, the Children's Cancer and Leukaemia Group, NWTS and SIOP independently confirmed poorer event-free and overall survival in larger cohorts of both pre-treated and untreated patients with 1q gain [14].

Genomic amplification of the MYCN gene has repeatedly been described in WT as well as in other embryonal tumors, most commonly neuroblastoma. Overexpression of MYCN in WT has been identified as a potentially prognostic feature. Interestingly, MYCN gain was present in a higher proportion (>30%) of a cohort of pre-treated anaplastic tumors compared with a parallel study analyzing a mixed cohort of anaplastic tumors (which included tumors that were not pre-treated), suggesting that MYCN gain could confer treatment resistance. Notably, MYCN gain is not limited to anaplastic WT, and its association with poorer relapse-free and overall survival is independent of histology. The P44L mutation has been identified as a potentially activating mutation leading to MYCN gain in WT [14].

The frequency of LOH at 11q was 3-4 times higher among mixed-type and diffuse anaplastic tumors compared with favourable histology tumors. Loss of the entire long arm of chromosome 11 was associated with higher rates of relapse and death.
Other studies have also demonstrated a correlation between LOH at 11q and anaplasia, tumor recurrence and death, indicating that this region is likely prognostically relevant [14].

DNA content

Some studies have suggested that flow cytometric evaluation of DNA ploidy is a useful predictor of outcome and response to therapy. Diploid and aneuploid tumors are reported to have better long-term survival than tetraploid tumors. However, other studies have reported that this factor is not superior to histology and staging. Ongoing studies will determine the clinical usefulness of DNA ploidy [16].

Hereditary factors

Despite the number of genes that appear to be involved in the development of WT, hereditary WT (either bilateral tumors or a family history of the neoplasm) is uncommon, with 1%-2% of patients having a positive family history of WT. The risk of WT among offspring of persons who have had unilateral (i.e., sporadic) tumors is quite low (<2%), and siblings of children with WT have a low likelihood of developing the tumor. A second WT may develop in the remaining kidney of 1%-3% of children treated successfully for Wilms tumor. The incidence of such metachronous bilateral Wilms tumors is much higher in children whose original WT was diagnosed at less than 12 months of age and/or whose resected kidney contains nephrogenic rests. Periodic abdominal ultrasound is recommended for early detection of metachronous bilateral WT as follows (a minimal scheduling sketch is given below):

- children with nephrogenic rests in the resected kidney, if < 48 months of age at initial diagnosis: every 3 months for 6 years;
- children with nephrogenic rests in the resected kidney, if > 48 months of age at initial diagnosis: every 3 months for 4 years;
- other patients: every 6 months for 2 years, then yearly for an additional 1-3 years [16].

Associated congenital anomalies

In 10%-13% of cases, WT is associated with congenital anomalies. Children with genitourinary anomalies such as horseshoe kidney, renal dysplasia, bilateral cystic renal disease, double collecting system, fused kidney, cryptorchidism and hypospadias, as well as aniridia and hemihypertrophy, have a higher incidence of WT [17]. Congenital abnormalities are seen more commonly with bilateral tumors. In addition, WT is a component of the syndromes described below.

(a) Beckwith-Wiedemann syndrome: This syndrome is associated with macroglossia, visceromegaly, omphalocele and gigantism. About 4-5% of patients with this syndrome have WT. The molecular defect is on chromosome 11p15.5 [12]. IGF2 abnormalities are related to this locus and may be responsible for the development of both Wilms tumor and the Beckwith-Wiedemann syndrome.

(b) WAGR syndrome: The components of this syndrome are WT, aniridia, genitourinary abnormalities and mental retardation. Cardiopulmonary problems, head anomalies, neurobehavioral disorders, musculoskeletal defects and metabolic problems have also been reported [11]. An 11p13 chromosomal deletion has been identified. The Wilms tumor risk is 30% in this syndrome.

(c) Denys-Drash syndrome: This is a combination of male pseudohermaphroditism, glomerulonephritis and WT. It is also associated with a defect of the WT1 gene [17].

(d) Perlman syndrome: This syndrome can be associated with WT and includes macrosomia, islet cell hyperplasia, renal hamartomas and an atypical facial appearance [17].

Other associations of WT are trisomies 13 and 18 [18], cerebral gigantism and neurofibromatosis.
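Purely as an illustration, the ultrasound surveillance rules listed above can be encoded as a small lookup. This is a hedged sketch: the function and parameter names are hypothetical, the interval and duration values come directly from the schedule quoted above, and real follow-up is of course scheduled clinically.

```python
# Hedged sketch of the metachronous-WT ultrasound schedule quoted above;
# intervals and durations are taken from the text, names are illustrative.

def surveillance_phases(nephrogenic_rests: bool, age_months_at_diagnosis: int):
    """Return a list of (interval_months, duration_years) surveillance phases."""
    if nephrogenic_rests:
        if age_months_at_diagnosis < 48:
            return [(3, 6)]      # every 3 months for 6 years
        return [(3, 4)]          # every 3 months for 4 years
    # other patients: every 6 months for 2 years, then yearly for 1-3 more years
    return [(6, 2), (12, 3)]     # the yearly phase may run 1-3 years

# Example: rests present in the resected kidney, diagnosed at 10 months of age
print(surveillance_phases(True, 10))   # [(3, 6)]
```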
Other associated malformations include septal defects, microcephalus, hyperinsulinism and von Willebrand's disease (8%). WT is rarely associated with metastasis at the time of diagnosis; when present, the most common site of metastasis is the lung (85% of cases), followed by the liver and regional lymph nodes [17,18].

Recent advances in WT predisposition

Recently, several reports have described novel genes in which pathogenic germline variants confer an increased risk for WT, including CTR9, REST and TRIM28.

CTR9 encodes a component of the Polymerase-Associated Factor 1 (PAF1) complex, which associates with RNA polymerase II, a large protein complex that transcribes DNA into messenger RNA and several small nuclear RNAs. The human PAF1 complex comprises several subunits, including CTR9, CDC73, LEO1, PAF1, RTF1 and SKI8. Studies of the Paf1 complex in yeast have revealed multiple roles, including gene regulation, transcriptional elongation and chromatin modification. To date, four unrelated families harboring germline variants in CTR9 have been reported, and descriptions of the WTs developing in CTR9 families are somewhat limited. Although the exact mechanism by which CTR9 loss of function contributes to kidney tumor formation is yet to be determined, the PAF1 complex is involved in the regulation of gene expression, DNA repair and the cell cycle, so a disturbance in one or more of these functions may lead to Wilms' tumorigenesis.

RE1-silencing transcription factor (REST; also known as Neuron Restrictive Silencer Factor) encodes a Krüppel-associated box (KRAB) zinc-finger transcription factor made up of two repressor domains (RD1, RD2) and a DNA-binding domain (DBD). REST serves as a focal point for the recruitment of chromatin-modifying enzymes that silence the expression of target genes, and it plays a critical role during embryonic development and neurogenesis. While the mechanism(s) by which germline variants in REST contribute to WT remain to be determined, some KRAB zinc-finger proteins can mediate transcriptional repression by recruiting the co-regulator TRIM28, a gene in which pathogenic germline variants also predispose to WT.

Tripartite motif containing 28 (TRIM28; also known as KAP1) serves as a co-regulator for the KRAB proteins. TRIM28-associated complexes contribute to many aspects of cellular biology, including proliferation, genome stability, immune response, early embryonic development and embryonic stem cell pluripotency. Importantly, TRIM28 controls genomic imprinting through distinct mechanisms at different developmental stages. TRIM28-associated WT often contains a predominance of epithelial cells, which generally express lower levels of IGF2; thus, IGF2 upregulation may not be as crucial for the formation of epithelial-predominant TRIM28-associated tumors. This is further supported by reports of lower IGF2 expression and normal imprinting of 11p15 in a subset of tumors exhibiting epithelial-predominant histology. TRIM28 functions as a classical tumor suppressor gene. While the mechanisms by which TRIM28 inactivation induces Wilms tumorigenesis remain to be elucidated, TRIM28 plays an important role in the developing kidney [19].

Evaluation and management of children with hereditary predisposition to WT
In the absence of syndromic features or a family history suggestive of a specific cancer predisposition syndrome, clinical germline testing should include sequencing and deletion/duplication analysis of CTR9, DICER1, REST, TP53, TRIM28 and WT1; other genes can be added at the discretion of the genetics provider, based on personal medical or family history features. If the child is found to harbor a pathogenic or likely pathogenic variant in a WT predisposition gene, his or her parents and close relatives can also be offered testing. Affected individuals should be counseled about the risks of additional neoplasms and non-oncologic manifestations, as appropriate, as well as the risk of recurrence in future offspring. Children testing positive should be offered surveillance throughout the period of increased WT risk, which is typically up to 8 years of age but may vary depending upon the condition. The goal of surveillance is to detect WTs while they are low stage and more likely to be cured with less intensive therapies. Fortunately, the kidneys are easy to visualize by radiologic methods such as ultrasound, which is readily available, safe, easy and relatively inexpensive. As WTs can double in size every week, it is recommended that an abdominal ultrasound be completed once every 3 months. Additional modes of surveillance can be considered for individuals with predisposition to a wider spectrum of cancers, such as those with Li-Fraumeni or DICER1 syndrome. Genetic and/or epigenetic discovery may be challenging, as causal lesions could reside in non-coding regions of the genome or involve more complex mechanisms, such as structural variants (including intronic deletions or inversions), digenic or polygenic inheritance, and events that occur postzygotically [19] (see Fig. 1).

Clinical presentation

Most patients present with an abdominal mass (Fig. 2). The tumor is often detected by the parents or caregivers while bathing the child. Haematuria is seen in 30% of patients, and 25% have hypertension. In addition, malaise, fever, weight loss, anorexia, or a combination of these symptoms can be seen. The tumor can rupture with trivial trauma, and these patients present with acute abdominal pain. Obstruction of the left spermatic vein by the mass can result in a left-sided varicocele. A few hormones, such as erythropoietin and ACTH, can be secreted by WT. In addition, hypercalcemia and haemorrhagic conditions caused by reduced von Willebrand factor can be seen [20]. Physicians must watch for other associated findings, such as hemihypertrophy, aniridia and genitourinary malformations.

Laboratory tests

Laboratory tests are done on urine and blood samples if a kidney problem is suspected; they may also be done after a WT has been found. A urine sample may be tested (urinalysis) to see if there are problems with the kidneys. Urine may also be tested for substances called catecholamines, to make sure the child does not have another kind of tumor called neuroblastoma. The battery of tests usually employed thus includes complete blood counts, a coagulation profile, and urine routine, microscopy and culture. Certain laboratory tests are important for the proper management of WT cases; recently, urinary basic fibroblast growth factor (bFGF) has been reported to be elevated preoperatively in these patients [16].
The serum level of neuron-specific enolase (NSE) and urinary catecholamine levels should be routinely measured to exclude neuroblastoma, which is a very close differential of WT. Tissue polypeptide specific antigen (TPS) might be of clinical value in monitoring the therapy of WT [21].

Radiological investigations

Advances in radiological techniques make it possible to detect non-palpable WT and its spread much earlier than in the past. Before the ultrasonography (USG) and tomography era, plain radiographs and intravenous urography were used widely. Ultrasound is commonly used for the initial evaluation of renal tumors; an imaging feature associated with a renal origin is a mass that moves with respiration. On ultrasound, WT commonly presents as an echogenic mass with discrete hypoechoic areas corresponding to necrosis. Calyceal distortion with renal displacement is the characteristic finding, the "claw sign" (Fig. 3A). A few cases of WT have been diagnosed antenatally with the help of USG; antenatal WT is usually associated with polyhydramnios, and increased mortality has been reported when it is associated with fetal hydrops. Doppler USG shows vena cava invasion, which is important for determining the preoperative treatment strategy.

USG and contrast-enhanced computed tomography (CECT) of the abdomen are effective diagnostic techniques for the staging and follow-up of patients, as they can detect tumor size, invasion and tumoral involvement of the lymph nodes. While US is a useful starting point for imaging, all paediatric patients with renal masses should undergo cross-sectional imaging with computed tomography (CT) or magnetic resonance imaging (MRI), as in over half of patients these studies provide additional important information beyond what can be obtained with US. CT shows metastasis to other parenchymal organs, such as the liver, and the extent of renal involvement, including the contralateral kidney (Fig. 3B), the renal vein and the inferior vena cava (IVC). These features are critical for accurate staging, as preoperative identification of any of the above findings can affect staging and treatment assignment. A chest skiagram and CT scans of the chest and abdomen should also be done as baseline diagnostic procedures for complete evaluation of the extent of the mass and any distant spread.

MRI studies have a predominant role in demonstrating the relation of the tumor to other organs, and MRI is more sensitive than CT. Nephrogenic rests appear as small homogeneous lesions after gadolinium enhancement, in contrast to the heterogeneous appearance of WT. Tumor calcifications, when present in WT, suggest slow tumor growth and are possibly a good prognostic sign [22]. Contrast-enhanced CT or MRI can provide even more definitive information about the resectability of tumors and the presence of intravascular tumor, which occurs in 6% of Wilms tumor patients. Because vascular extension of tumor greatly increases the surgical complication rate, upfront nephrectomy is usually deferred while such patients are treated with neoadjuvant chemotherapy in an attempt to shrink the thrombus and facilitate a safer surgery later. MRI is also recommended by the Children's Oncology Group for the evaluation of selected patients.

The lungs are the most common site of metastasis in Wilms tumor, and historically patients were considered to have lung involvement if nodules were identified on chest X-ray (CXR).
However, CT of the chest is a more sensitive modality for identifying metastatic lung nodules, especially when done preoperatively in the awake patient, in order to reduce the atelectasis associated with postoperative or sedated imaging studies in young children. Five percent of Wilms tumor patients will have nodules identified only on CT and not on conventional CXR. The use of diffusion-weighted MRI has been reported to show correlations between apparent diffusion coefficient measurements and the blastemal component of residual tumors after neoadjuvant chemotherapy.

Finally, surveillance imaging is also utilized in predisposed patients who remain at high risk for the development of WT. Given the known association with a variety of cancer predisposition syndromes, guidelines have been developed for following these at-risk patients in a way that balances the benefits of early identification with safe and reasonable utilization of imaging. Specifically, renal ultrasounds are recommended every 3 months from the time the predisposition syndrome is diagnosed, at least until the 7th birthday [23].

WT can be radiologically differentiated from neuroblastoma, which is a close mimicker (Table 1). Invasion of the inferior vena cava, which can occasionally extend to the right atrium, is strongly predictive of WT, whereas a paravertebral mass with spinal canal invasion favours neuroblastoma. Spread around the celiac and superior mesenteric arteries also differentiates neuroblastoma from WT, as it is most common in neuroblastoma. If there is no clear discrimination from neuroblastoma, a metaiodobenzylguanidine (MIBG) scan may be performed. In renal tumors, monitoring ultrasound must be performed periodically every three months, both before and after treatment; even after nephrectomy of the affected kidney, monitoring of the other kidney should continue [23].

Gross

The usual gross appearance of WT is a large, solitary, well-circumscribed mass (10% are bilateral or multicentric) that is soft, homogeneous and tan-grey in color. Haemorrhage, necrosis, cysts and a lobular pattern are common, but gross appearances may vary (Fig. 4).

Histopathology

WT mimics nephrogenesis, as the tumor comprises three elements: undifferentiated blastemal cells, differentiated epithelial cells and stromal cells. Ectopic components such as skeletal muscle may be observed in 5-10% of tumors. The stromal components are believed to be neoplastic, raising the possibility that undifferentiated blastemal cells are precursors of the stromal and heterologous elements. There are two main histological types of WT [24]:

(a) Classical nephroblastoma: This entity includes blastemal, epithelial and stromal components. Sometimes one or two components predominate, and sometimes they are equally present; the latter type of tumor is classified as a mixed-type or triphasic Wilms tumor (Fig. 5).

(b) Anaplastic Wilms tumor: There are three main cytopathologic features of anaplasia: a) a threefold or greater nuclear enlargement compared with nearby nuclei of the same cell type (e.g. stromal or epithelial); b) hyperchromatism (indicating that the nuclear enlargement is attributable to gross polyploidy and not to hydrophilic swelling or poor fixation); and c) enlarged, abnormal (usually multipolar) mitotic figures, which are regarded as the quintessential criterion. It constitutes 4-8% of all cases.
This type may have a diffuse or focal form, and this classification has prognostic importance, as patients with focal anaplasia should be treated with less intensive protocols than those with diffuse anaplasia. Previously, a tumor was classified as focal anaplastic if anaplastic cells were encountered in fewer than 10% of microscopic fields. This description was revised by Faria et al. [25] in 1996, as follows: in focal anaplasia, anaplastic changes are confined to circumscribed regions within the primary tumor and are surrounded by non-anaplastic tissue. Diffuse anaplasia has the following characteristics: it is found in an extrarenal site, a random biopsy specimen reveals unequivocal anaplasia, the anaplasia is coupled with extreme nuclear unrest, and there is nuclear atypia elsewhere in the tumor [25]. This classification of focal and diffuse anaplasia has been used by the COG; the other large-scale collaborative group, the SIOP, stratifies risk groups according to histopathologic structures. SIOP analyzes risk for two groups: patients who have been pre-treated and those receiving primary nephrectomy. Table 2 shows the SIOP risk groups according to histopathology [25]. Tumor classification should determine the choice of treatment protocol; anaplastic tumors, except those in stage 1, should be treated with more intensive protocols than mixed-type tumors [26].

Nephrogenic rests: The nephrogenic rest (NR) is the putative precursor lesion of WT; it can sometimes be confused with malignancy and includes blastemal, stromal and embryonal elements. It can be found in the same or the opposite kidney. If located peripherally, it is classified as a perilobar nephrogenic rest; if located deep in the renal lobe, it is an intralobar nephrogenic rest. NRs can regress or stay dormant [24]. NRs may be microscopic or grossly visible, single or multiple. Patients with NRs (in particular perilobar NRs) have a significantly increased risk of metachronous bilateral Wilms tumor.

Although most patients with a histologic diagnosis of Wilms tumor fare well with current treatment, approximately 12% of patients have histopathologic features that are associated with a poorer prognosis and, in some types, with a high incidence of relapse and death. WT can be separated into two prognostic groups on the basis of histopathology (a minimal classification sketch follows this list):

(a) Favourable histology: The histology mimics development of a normal kidney, consisting of three components: blastema, epithelium (tubules) and stroma. There is no anaplasia.

(b) Unfavourable histology: Characterized by anaplasia. Focal anaplasia may not confer nearly as poor a prognosis as diffuse anaplasia. Anaplasia is associated with resistance to chemotherapy and may still be detected after pre-operative chemotherapy.
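The favourable/unfavourable split and the Faria et al. focal-versus-diffuse criteria can be compressed, purely as a hedged illustration, into a toy classifier. The boolean fields below are stand-ins for findings that in reality require expert histopathologic review; none of these names come from any published protocol.

```python
# Toy sketch of the anaplasia classification described above; field names
# are illustrative, and real classification is made on histopathology.

from dataclasses import dataclass

@dataclass
class Findings:
    anaplasia_present: bool        # 3x nuclear enlargement, hyperchromatism,
                                   # and multipolar mitotic figures
    confined_circumscribed: bool   # surrounded by non-anaplastic tissue
    extrarenal_site: bool
    nuclear_unrest_elsewhere: bool

def classify(f: Findings) -> str:
    if not f.anaplasia_present:
        return "favourable histology (no anaplasia)"
    if f.extrarenal_site or f.nuclear_unrest_elsewhere or not f.confined_circumscribed:
        return "unfavourable histology, diffuse anaplasia"
    return "unfavourable histology, focal anaplasia"

print(classify(Findings(True, True, False, False)))  # focal anaplasia
```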
Staging system for Wilms tumor

Two large groups are involved in the management of WT: the Children's Oncology Group (COG) and the Société Internationale d'Oncologie Pédiatrique (SIOP). They use similar staging systems with only minor differences. SIOP gives preoperative chemotherapy and stages the tumor after preoperative treatment and surgery, whereas the COG treats patients with surgery at the time of diagnosis and stages them then. The staging systems of the two groups are shown in Table 3 [26,27]. In bilateral disease, each side must be staged individually according to these criteria, and therapy is offered based on the higher stage of the two.

Wilms tumor in adults

WT is the most common abdominal tumor in children, but it is extremely rare in adults, representing only 0.5% of all renal neoplasms. To date, only about 240 adult cases have been reported in the literature [28]. The diagnostic criteria defining adult WT were described by Kilton et al. [29]. It is difficult to differentiate this entity from renal cell carcinoma based on imaging techniques alone, though the preoperative diagnosis may be suggestive in about 75-80% of cases. On USG, it presents as a rapidly growing abdominal mass; on CT, heterogeneous contrast uptake and a surrounding pseudocapsule are suggestive of WT. Arteriography characteristically shows a hypovascular mass with neo-formed blood vessels exhibiting a zigzag pattern. Histopathological study confirms the diagnosis. The treatment is not well established for adults; aggressive treatment, including radical surgery, chemotherapy and irradiation of the tumor bed, is considered necessary. The chemotherapeutic agents routinely used are vincristine, actinomycin-D, doxorubicin and ifosfamide. Satisfactory results have also been obtained with cisplatin and etoposide in patients with stage IV disease and in patients progressing after conventional chemotherapy. The prognosis in adults is worse than in children; as has been demonstrated, this may be because adults do not receive paediatric protocols [30].

Treatment

A multidisciplinary approach is required to determine and implement the optimum treatment for WT. The ideal team includes an experienced paediatric surgeon or paediatric urologist, a paediatric radiation oncologist and a paediatric oncologist. The COG and SIOP guidelines provide two different strategies for the treatment of WT in children. The COG recommends that patients undergo surgery before chemotherapy, and North America commonly adopts the COG guideline. In contrast, most children in European countries are treated with preoperative chemotherapy based on the SIOP guideline. The different treatment strategies rest on different staging systems: the COG staging system relies on pathological analysis from a primary nephrectomy in most cases, while SIOP staging is based on the findings after preoperative chemotherapy. The overall survival rates for patients treated by the two guidelines are nonetheless almost identical, at approximately 90% [31]. Multimodality therapy consists of surgery and chemotherapy, with radiotherapy for those who also need it. Its components are described below.

(a) Surgery

Surgery is the cornerstone of treatment. Operative principles have evolved from COG trials (Table 4). The crucial role of the surgeon is to ensure complete removal of the tumor without rupture and to perform an assessment of the extent of disease (Fig. 6). Radical nephrectomy via a transabdominal incision with lymph node sampling is the procedure of choice.
Transperitoneal approaches are used; the flank incision is not suitable for WT because there is an increased risk of tumor spillover and, moreover, access to the lymph nodes is harder. Hilar, peri-aortic and iliac lymph node sampling is a must, as lymph node sampling is important for staging; furthermore, any suspicious node should be sampled. Margins of resection, residual tumor and any suspicious node basins have traditionally been marked with titanium clips; however, per COG protocols, titanium clips are specifically not to be used unless there is gross residual disease left in the abdomen. Pre-operative chemotherapy does not make the resection easier: it may shrink a thrombus or spare organs to allow resection, but in unilateral tumors the opposite is in fact true, as it obliterates tissue planes and makes the operation much more difficult. The SIOP recommends radical tumor nephrectomy performed after preoperative chemotherapy; patients so treated had minimal complications and no increased risk of local recurrence or upstaging [26].

Ligation of both the renal artery and the renal vein is preferable before performing radical nephrectomy. WT is an encapsulated tumor (Fig. 6), and en bloc resection can be done to avoid tumor spillage. Resection of the primary renal tumor should be considered even in stage IV disease (usually pulmonary metastases). The incidence of post-operative complications in the COG experience was 11%. The most serious intraoperative complication is tumor embolus into the pulmonary artery with sudden death. Common post-operative complications are haemorrhage and intestinal obstruction; intestinal obstruction in the first post-operative week is mostly due to intussusception, and thereafter to adhesions.

ROLE OF CONTRALATERAL EXPLORATION: With the availability of modern high-quality cross-sectional imaging, contralateral renal exploration for patients undergoing surgery for unilateral WT is largely unnecessary. Historically, contralateral exploration was recommended when excretory urography was the only pre-operative imaging modality. Now, with the advancement of CT and MRI, lesions measuring millimeters can be detected pre-operatively. Several studies have demonstrated high sensitivity and specificity (close to 100%) with these modalities, with no evidence of missed disease during contralateral exploration. Ritchey et al. found that routine contralateral exploration may yield a small number of occult lesions not identified on preoperative imaging, but that omission of routine contralateral exploration is unlikely to affect the outcome of any child with newly diagnosed WT, as long as a CT or MRI scan was performed prior to surgery [32].

(b) Partial nephrectomy

The role of partial nephrectomy (nephron-sparing surgery) remains controversial. This surgery is not recommended by COG guidelines, except when children have a solitary kidney, a predisposition to bilateral tumors, or a horseshoe kidney, or in infants with Denys-Drash or Frasier syndrome (to delay the need for dialysis). Its use in non-syndromic unilateral WT is a SIOP recommendation only: it is applied to tumors of small volume (<300 mL) with the expectation of substantial remnant kidney function, in patients who never had lymph node involvement. Several studies have reported an increased incidence of hypertension, proteinuria and decreased renal function, even renal failure, in patients who underwent unilateral nephrectomy for WT.
Total tumor nephrectomy might potentially be harmful to the patient because of the substantial risk of loss of renal function in a solitary kidney, caused by the consecutive hypertrophy of the remaining contralateral kidney, as well as the possibility of a primary malformation, metachronous tumor occurrence (1.5% in COG, 2-3% in SIOP studies), accidental damage, or other superimposed renal injury. The currently reported weak evidence of a marked risk of renal failure following unilateral nephrectomy might, however, be due to the lack of long-term follow-up studies. Surgical (radiological and pathological) selection criteria for partial nephrectomy should include a functioning kidney; tumor confined to one pole and occupying less than one third of the kidney; no invasion of the renal vein or collecting system; and clear margins between tumor, kidney and surrounding structures. Most studies concur that safe partial nephrectomy is applicable in approximately 5% of tumors at diagnosis (10% of patients after preoperative chemotherapy) without violating oncological principles. The local recurrence rate for partial nephrectomy in patients with bilateral tumors was found to be 8.2% [33].

(c) Chemotherapy

Chemotherapy has proved to be beneficial in all stages of the disease, and radiotherapy is used to improve the outcome of late-stage tumors, including stage II malignancies with diffuse anaplasia. Chemotherapy can be given prior to surgery or after surgery (Table 5).

COG studies: The COG guideline recommends surgery as the initial therapy before chemotherapy. Preoperative chemotherapy is indicated only under the following conditions: inoperable WT; a solitary kidney; synchronous bilateral WT; tumor thrombus in the inferior vena cava extending above the level of the hepatic veins; tumor involving contiguous structures, whereby removing the kidney tumor would require removal of other organs, such as the spleen, pancreas, or colon; and extensive pulmonary metastases. Preoperative chemotherapy by the COG has four regimens (Table 6). The agents commonly used for chemotherapy are doxorubicin plus dactinomycin and vincristine; with anaplastic histology, chemotherapy instead follows regimen I (Table 6) [34].

The COG group has investigated five protocols. In COG 1 (1969-73), the vincristine + dactinomycin combination was more effective than either drug alone in stage II and III patients. COG 2 was conducted between 1974 and 1978; it found that treatment durations of 6 and 15 months were equally effective in stage I patients, and, after these results were published, treatment duration in subsequent protocols was shortened. Addition of adriamycin to the chemotherapy protocols improved the survival rate. In COG 3, stage I patients were treated successfully with a two-drug regimen for 10 weeks. For stage II patients, there was no significant difference in outcome between the arms with and without RT, nor between the arms with and without adriamycin. Stage IV patients received no benefit from the addition of cyclophosphamide to the three-drug regimen, and different radiotherapy doses (1000 vs. 2000 cGy) also had no effect on survival. COG 4 demonstrated that pulse-intensive actinomycin-D (a single injection of 45 µg/kg) was as potent as the long-term schedule (15 µg/kg/day for 5 days); a worked dose comparison is sketched at the end of this chemotherapy discussion. The addition of adriamycin had a strong effect on survival in patients with stage III disease in the COG 3-4 studies. COG 5 investigated whether stage I patients actually benefited from chemotherapy.
Without chemotherapy, the 2-year overall survival was 100%, but relapse-free survival was 86% [35]. The COG recommends postoperative chemotherapy routinely in all patients with WT except those at very low risk: younger than 2 years at diagnosis, with a stage I favourable histology tumor weighing <550 g, and with sampled and confirmed negative lymph nodes.

SIOP studies: The SIOP guideline recommends preoperative chemotherapy for all patients after diagnosis. For patients with a unilateral localized tumor, 4 weeks of pretreatment with vincristine (weekly) and dactinomycin (biweekly) is given; for patients with bilateral tumors, vincristine-dactinomycin for no longer than 9-12 weeks is recommended (doxorubicin is added for reinforcement in some patients); for patients with metastasis, a regimen including 6 weeks of vincristine-dactinomycin (as above) plus doxorubicin on weeks 1 and 5 is given [34].

The SIOP 1 study compared the effectiveness of pre-nephrectomy irradiation versus immediate surgery and found that the two arms had the same overall survival rates. The SIOP 2 study found that preoperative treatment resulted in a decreased tumor rupture rate. In the SIOP 5 study, preoperative chemotherapy was substituted for preoperative radiotherapy. The SIOP 6 study showed that 17 weeks of chemotherapy was as effective as 38 weeks for patients with stage I disease. Relapse risk increased in stage II lymph-node-negative patients who did not receive radiotherapy, and the addition of epirubicin was planned in this group of patients; radiotherapy doses were also decreased from 30 to 15 Gy. The aim of the SIOP 9 protocol was to determine how the duration of preoperative chemotherapy affected survival; there was no significant difference in survival between 4 and 8 weeks of preoperative treatment. The SIOP 93-01 studies aimed to reduce treatment duration: stage I patients were treated postoperatively for 4 weeks, whereas patients in other stages received 27 weeks of postoperative treatment. In this randomized study, there was no significant difference in event-free survival rates, although patients with progressive disease during preoperative chemotherapy had poorer survival than the others [36]. The SIOP recommends postoperative chemotherapy in all patients with WT except those with a stage I low-risk tumor.

UKCCSG protocols: This group treated patients with the postoperative chemotherapy regimen used by the COG group. The group used to perform biopsy prior to chemotherapy in all patients to confirm WT pathology, but because the rate of findings that would have changed the chemotherapy was <5%, this practice has been abandoned; the group now follows SIOP exclusively. Patients with unresectable tumors were given preoperative chemotherapy. In patients with stage I disease, vincristine alone was as effective as vincristine and actinomycin-D. In the first study, the duration of the vincristine regimen in stage I was 6 months; this was shortened to 10 weeks in the second study [37]. This recommendation was limited to patients younger than 4 years; the group did not recommend single-agent vincristine in older patients. Treatment results in stage IV patients were not as good as those obtained by the COG group.

Newborns and all infants less than 12 months of age require a reduction in chemotherapy doses to 50% of those given to older children (the sketch below combines this rule with the pulse-dosing figures above). This reduction diminishes the toxic effects reported in children of this age group while maintaining an excellent overall outcome.
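As a purely illustrative aid, the dactinomycin figures quoted above (a single pulse of 45 µg/kg versus 15 µg/kg/day for 5 days) and the 50% dose reduction for infants under 12 months can be combined in a small dose-arithmetic sketch. The function names and the example weight are hypothetical; actual dosing is always dictated by the treating protocol.

```python
# Hedged dose-arithmetic sketch; the dosing figures are taken from the text
# above, and everything else (names, example weight) is illustrative only.

def infant_adjusted(dose_ug: float, age_months: float) -> float:
    """Apply the 50% reduction for newborns and infants under 12 months."""
    return dose_ug * 0.5 if age_months < 12 else dose_ug

def dactinomycin_totals(weight_kg: float, age_months: float):
    pulse = infant_adjusted(45 * weight_kg, age_months)         # single 45 ug/kg pulse
    five_day = infant_adjusted(15 * 5 * weight_kg, age_months)  # 15 ug/kg/day x 5 days
    return pulse, five_day

# Hypothetical 15 kg, 3-year-old: 675 ug as a pulse vs 1125 ug cumulative
# over 5 days, yet COG 4 found the two schedules equally potent.
print(dactinomycin_totals(15.0, 36))   # (675.0, 1125.0)
```

The point of the comparison is simply that the pulse schedule delivers a lower cumulative dose while, per COG 4, losing none of the potency.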
Liver function tests in children with WT should be monitored closely during the early course of therapy, based on the hepatic toxic effects (veno-occlusive disease) reported in these patients. Dactinomycin should not be administered during radiotherapy. Children treated for WT are at increased risk of developing second malignant neoplasms; this risk depends on the intensity of their therapy, including the use of radiation and doxorubicin, and on possible genetic factors. Congestive heart failure has been shown to be a risk in children treated with doxorubicin. Efforts, therefore, have been aimed at reducing the intensity of therapy where possible [31]. Under the current COG study, children with stage III-V diffuse anaplasia are treated with a new chemotherapeutic regimen combining vincristine, doxorubicin, cyclophosphamide and etoposide (regimen I) in an attempt to further improve the survival of these high-risk groups. All these patients receive radiation therapy to the tumor bed.

(d) Radiotherapy

Wilms tumor is a highly radiosensitive tumor. Postoperative radiotherapy (RT) is started within 10 days of surgery, because delay beyond 10 days allows tumor cell repopulation and increases the chance of relapse. It has been shown that appropriate adjuvant RT reduces postoperative recurrence to 0-4% in children with favourable histology. The dose of radiotherapy has decreased to approximately 10 Gy from the doses of 25-30 Gy recommended in the past. In the early years, all stage I and II patients were treated with flank irradiation, and those with stage III and IV received whole-abdominal radiotherapy. Since 1975, patients with favourable histology stage I disease no longer receive radiotherapy; stage III and IV patients, and those with otherwise local stage I and II disease, receive flank irradiation instead of whole-abdominal radiotherapy. Dosages were reduced to 2700 cGy, and later to 1000 cGy, depending on the histology and stage rather than the age of the patient. Whole-lung irradiation of 12 Gy was generally given in patients with metastatic lung disease, with a postage-stamp boost (or boosts) of 10 Gy whenever possible [27]. Since 1990, patients with stage III and IV disease are treated with radiotherapy delivered to the tumor bed in 10-Gy dosages, and lung irradiation is used only in patients with residual or resistant disease after induction chemotherapy. The radiotherapy dose has varied from 10 Gy to 40 Gy; however, the use of radiation has been reduced because of awareness and documentation of radiation-related late effects (growth disturbances, second cancers) in growing children with WT. The COG group has redefined the role of radiotherapy (Table 7) and has provided specific recommendations so that the minimum possible RT dose is administered. COG 3 documented that there is no survival difference between doses of 10 Gy and 20 Gy in the stage III favourable histology group. The recommended dose per fraction is 1.2-1.5 Gy, and it should not exceed 1.8 Gy per fraction with concomitant chemotherapy [38].

The COG recommends postoperative radiation to the tumor bed for all patients with stage III tumors. The SIOP recommends whole-abdominal radiotherapy for patients with intermediate-risk or high-risk histology tumors with major preoperative or intraoperative tumor rupture, or macroscopic peritoneal deposits, but only flank radiotherapy for other stage III criteria [16]. Pulmonary radiotherapy is indicated for lung metastases lacking a complete response by postoperative week 10. Patients with a complete response after induction chemotherapy, with or without surgery, do not need pulmonary radiotherapy; patients with viable metastases at surgery or high-risk histology require it. Whole-lung irradiation is recommended for patients who did not receive lung irradiation during first-line treatment, irrespective of histology.

(e) Treatment of inoperable tumors

Since imaging studies alone carry the risk of overstaging, the COG recommends determining 'inoperability' at surgical exploration. Tumors with caval extension above the hepatic veins, or so massive that removal is considered risky, should be treated with preoperative chemotherapy. Additionally, if tumors do not shrink after initial chemotherapy, open biopsy is indicated. Radiation is never used prior to surgery in WT treatment. If surgery is performed in a patient with caval or atrial extension, care should be taken to ensure that appropriate resources are available for paediatric cardiopulmonary bypass. In rare cases, advanced right-sided tumors may extend into the liver, and wedge resection en bloc, or even hepatic lobectomy, may be necessary in these patients. If the diaphragm has been infiltrated by tumor, it should also be partially excised en bloc. Patients considered to have unresectable tumors based on imaging studies alone should be considered stage III and treated accordingly. In COG 5, these patients are treated, after biopsy, with initial chemotherapy consisting of vincristine and dactinomycin with or without doxorubicin. If no reduction in tumor size has occurred with the 3 drugs, surgery is performed as soon as feasible; in general, sufficient tumor shrinkage occurs within 6 weeks of diagnosis. Patients are subsequently treated as for stage III tumors, which includes postoperative radiation therapy. Because of the 5%-10% error rate in the preoperative diagnosis of renal masses after radiographic assessment, confirmation of the diagnosis by open biopsy should be obtained prior to chemotherapy [31].

(f) Treatment of anaplastic Wilms tumor

All patients except those with stage I disease should be treated with intensive chemotherapy and radiotherapy. Vincristine + actinomycin-D + adriamycin and cyclophosphamide are used in this type of tumor. In the last COG study, patients with stage I disease were treated with vincristine + actinomycin-D for 18 weeks and achieved good results. Patients with diffuse anaplastic stage II-IV disease were treated with vincristine + cyclophosphamide + actinomycin-D + etoposide for 24 weeks.
The results in this group were unsatisfactory, and new drugs, such as carboplatin, should be tried in patients with anaplastic WT [39]. The risk factors associated with relapse are unfavourable histology, lymph node involvement, age of more than 6 years, diffuse spill, capsular and vascular invasion, and aneuploidy. The 2-year survival rate for children after local recurrence is 43%. The combination of ifosfamide, etoposide and carboplatin has demonstrated efficacy in this group of patients, but significant hematologic toxic effects have been observed. While very high-dose chemotherapy followed by autologous bone marrow transplant has been utilized in the past, a recent POG/CCG intergroup study used a salvage induction regimen of cyclophosphamide and etoposide (CE) alternating with carboplatin and etoposide (PE), followed by delayed surgery. Disease-free patients were assigned to maintenance chemotherapy with 5 cycles of alternating CE and PE, and the remainder of the patients to ablative therapy and autologous marrow transplant. All patients received local radiation therapy. The 3-year survival was 52% for all eligible patients, and 64% and 42% for the chemotherapy-consolidation and autologous-marrow-transplant subgroups, respectively. Patients in whom such salvage attempts fail should be offered treatment on available phase I or phase II studies [31].

(g) Treatment of bilateral Wilms tumor

Notably, 5-10% of patients present with bilateral Wilms tumor (BWT), which may occur as synchronous or metachronous bilateral tumors [1,2]. In the COG studies, approximately 4-6% of registered children presented with synchronous bilateral tumors (Figs. 2 and 7). The male-to-female ratio was 1:2, and the patients were usually younger at diagnosis; bilateral or multifocal tumors occur at an earlier age (2 years versus 3.6 years for sporadic tumors). The frequency of genitourinary anomalies (16%) and hemihypertrophy (5.4%) was also higher than in unilateral disease.

Historically, the management of BWT was non-standardized and suffered from instances of prolonged chemotherapy and inconsistent surgical management, which resulted in suboptimal renal and oncologic outcomes. Because of the risk of end-stage renal disease associated with the management of BWT, neoadjuvant chemotherapy and nephron-sparing surgery have been adopted as the guiding management principles; this strategy balances acceptable oncologic outcomes against the risk of end-stage renal disease. The presence of synchronous bilateral disease requires alteration of management, and it is no longer recommended to perform unilateral nephrectomy with contralateral heminephrectomy, as was the earlier approach. Under Children's Oncology Group (COG) protocols, unilateral WT is typically treated by up-front radical nephroureterectomy, with acceptable rates of long-term end-stage renal disease (<1%); application of this strategy to patients with bilateral disease would render them anephric, and thus nephron-sparing approaches were developed and refined to preserve kidney function. Surgical strategy therefore attempts to preserve renal mass to minimize the risk of late renal failure. A recent multi-institutional Children's Oncology Group study (AREN0534) has confirmed the benefits of standardized 3-drug neoadjuvant chemotherapy and the utilization of nephron-sparing surgery in BWT patients; however, less than 50% of patients underwent bilateral nephron-sparing surgery.
The diagnosis of BWT is typically confirmed by the presence of bilateral renal masses, in an appropriately aged child, on ultrasound followed by contrast-enhanced CT of the abdomen/pelvis. A CT of the chest should also be performed to evaluate for pulmonary metastases at diagnosis, the lung being the most common site of metastatic disease in children with WT. Because other pediatric renal tumors are almost never bilateral, neoadjuvant chemotherapy for presumed BWT should be initiated without first performing a biopsy of any of the tumors [11]. In the recently completed first multicenter study specifically for BWT patients, the COG AREN0534 trial, intensification of neoadjuvant chemotherapy to three drugs with vincristine, actinomycin-D and doxorubicin (VAD) resulted in improved 3-year event-free and overall survival compared with historical BWT patients treated on the NWTS-5 protocol, who were often treated with only vincristine and actinomycin-D [11,30]. The rationale for up-front intensification of therapy was to achieve an improved tumor response in order to facilitate bilateral nephron-sparing surgery.

The feasibility of bilateral nephron-sparing surgery should be assessed after six weeks of VAD therapy by contrast-enhanced CT scan of the abdomen/pelvis (a minimal decision sketch is given at the end of this passage). If more than 50% volume reduction has been achieved for all tumors but nephron-sparing surgery is still not feasible, neoadjuvant VAD should be continued for six more weeks; surgical resection should then be performed at the 12-week timeframe regardless of tumor status. If less than 50% volume reduction has been achieved after the six initial weeks of therapy for any tumor, and bilateral nephron-sparing surgery is still not feasible, open surgical biopsy of all tumors should be performed to assess for the possibility of alternative histologies, such as diffuse anaplasia, blastemal predominance after neoadjuvant therapy, differentiated tumor without remaining viable/proliferative elements, or an alternate diagnosis (exceedingly uncommon). Core biopsy often misses the presence of diffuse anaplasia, which could be responsible for treatment resistance, and should thus be avoided in this treatment algorithm [40].

Approximately 10% of patients with bilateral tumors have unfavourable (anaplastic) histology and may benefit from more aggressive chemotherapy (addition of doxorubicin and cyclophosphamide), radiation therapy and an aggressive surgical approach at the second-look operation. Salvage chemotherapy regimens using cisplatin, ifosfamide and VP-16 have been found to be helpful. After chemotherapy, the patient is reassessed with abdominal CT to determine the feasibility of resection. If serial imaging studies show no further reduction in tumor, a second-look surgical procedure should be performed.
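The week-6 decision point described above can be summarized, purely as an illustrative sketch, in a few lines of Python. The function and argument names are hypothetical, and the real decision is of course made by a multidisciplinary team on cross-sectional imaging rather than on two booleans.

```python
# Hedged sketch of the AREN0534-style week-6 assessment described above;
# all names are illustrative, not part of any published protocol code.

def week6_decision(all_tumors_reduced_over_50pct: bool,
                   bilateral_nss_feasible: bool) -> str:
    if bilateral_nss_feasible:
        return "proceed to bilateral nephron-sparing surgery"
    if all_tumors_reduced_over_50pct:
        return "continue VAD for six more weeks; resect at week 12 regardless"
    # <50% reduction for any tumor and NSS still not feasible
    return "open surgical biopsy of all tumors (core biopsy avoided)"

print(week6_decision(True, False))
```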
For small synchronous bilateral lesions at the poles, bilateral partial nephrectomies or wedge resections can be performed. Excisional biopsy or partial nephrectomy is regarded as appropriate only if radical tumor resection is not compromised, negative margins are obtained and two thirds of the renal parenchyma can be preserved. The goal is to achieve survival and at the same time to preserve an adequate amount of renal parenchyma. In case of a large tumor on one side and a contralateral small one, radical nephrectomy on the extensively involved side and partial nephrectomy on the opposite side is done [40]. If conditions are not favourable for any surgical intervention, another biopsy is taken to confirm viable tumor. Chemotherapy and/or radiation therapy following the second-look operation is dependent on the response to initial therapy, with more aggressive therapy required for patients with inadequate response to initial therapy observed at the second procedure. A third look may be indicated; bilateral nephrectomy and subsequent renal transplantation remain the last option. Unfortunately, due to immunosuppression, recurrence of disease occurs frequently. Before considering bilateral nephrectomy, bench surgery with autotransplantation and intraoperative radiotherapy may be performed. The cumulative survival rate for infants with bilateral tumors is approximately 65-70% at 10 years. However, one series reported overall survival of metachronous bilateral Wilms tumor to be 49.1% and 47.2% at 5 and 10 years, respectively [16,31]. Metachronous bilateral tumors were reported in about 1.5% of COG patients. Since many of these lesions appear to be overlooked at initial laparotomy, a thorough investigation of the opposite kidney remains crucial. Children younger than 12 months diagnosed with Wilms tumors, who also have multicentric disease or NRs, in particular perilobar NRs, have a markedly increased risk of developing contralateral disease and require frequent and regular imaging of the contralateral kidney for several years. The median interval to diagnosis of metachronous WT ranges from 1.37 (COG) to 3.29 (SIOP) years. Though radiotherapy has been recommended in bilateral WT in reduced doses, the authors advocate avoiding radiotherapy in bilateral WT and prefer salvage chemotherapy schedules to prevent radiation nephritis and glomerulosclerosis. A long-term evaluation of renal function in patients with irradiated bilateral WT found that 34.6% had deranged renal function, with elevated urea and creatinine levels [41]. The principal treatment of bilateral WT is nephrectomy of the larger tumor after preoperative treatment or through immediate surgery [42]. After induction chemotherapy, the smaller tumor should be removed by partial nephrectomy. Limited radiotherapy could be applied. Whatever the treatment, salvage of the kidney should be the goal. Both the COG and SIOP recommend preoperative chemotherapy and resection for bilateral WT. Bilateral renal-sparing surgery can be done in patients with synchronous bilateral WT. Renal parenchyma sparing may help preserve the renal function in these children. Renal transplantation is recommended and is usually delayed until 1-2 years without evidence of relapse. The SIOP also suggests that preoperative chemotherapy should be limited to not longer than 12 weeks, with time intervals for evaluation fixed to 6 weeks [34,40,42].
A recently completed multi-institutional COG study (AREN0534) has standardized the approach to neoadjuvant therapy and timing of surgical resection, resulting in great benefit to patients with BWT. Renal transplantation for Bilateral Wilms Tumor Bilateral nephrectomy may be required in BWT patients because of disease relapse or complications of surgical or medical therapy, necessitating hemodialysis and eventual renal transplantation. Also, bilateral nephrectomy will likely eventually be required for WT patients with Denys-Drash syndrome, who eventually develop nephropathy and end-stage renal disease. Transplantation has historically been delayed until 1-2 years after cancer therapy because most WT relapses occur within 2 years of diagnosis. However, more recent data show WT patients, even those who underwent early transplant, have outcomes similar to other renal transplant patients. Given the morbidity and mortality associated with chronic pediatric dialysis, consideration of earlier transplantation should be made in WT patients with end-stage renal disease and no evidence of cancer. The current treatment strategy for BWT balances acceptable oncologic outcomes with the goal of maintaining the maximum amount of functioning renal parenchyma. It is imperative that the pediatric surgeon be intimately aware of the treatment algorithm for BWT when managing such patients. This algorithm is characterized by avoidance of initial tumor biopsy, three-drug neoadjuvant chemotherapy (VAD), assessment of tumor volumetric response at 6 weeks, a decision whether to continue VAD, perform an open biopsy, or change chemotherapy for a subsequent six weeks, and then to perform surgical resection in all cases at 12 weeks of therapy. Close collaboration between pediatric surgeons and the multidisciplinary oncology team is necessary to negotiate this complex algorithm and to maximize the chances that BWT patients have optimal long-term oncologic and renal outcomes [40]. (h) Treatment of Recurrent WT The recurrence rate in patients with favourable histology WT is about 15%, and in patients with anaplastic histology it is about 50%. The leading locations of relapse are the lung, abdomen/flank and liver. The prognosis and selection of further treatment for patients with recurrent WT depend on many factors, including the site of recurrence, tumor histology, length of initial remission, and initial chemotherapy regimen (2 versus 3 drugs). Historically, the mortality rate of patients with recurrent favourable histology WT ranges from 25% to 40%. Outcomes have recently improved, to 60% survival in patients with relapse [43]. The COG guideline has categorized the patients with recurrent WT into three risk groups: standard risk, high risk and very high risk. For standard-risk relapsed WTs, surgery is performed when feasible, and radiation therapy and chemotherapy (alternating courses of vincristine/doxorubicin/cyclophosphamide and etoposide/cyclophosphamide) are given.
For patients with high-risk and very-high-risk relapsed WTs, chemotherapy (alternating courses of cyclophosphamide/etoposide and carboplatin/etoposide), surgery, and/or radiation therapy and hematopoietic stem cell transplantation are recommended. The SIOP classifies the patients with recurrent WT into group AA, group BB and group CC. For patients in group AA (those initially treated with only vincristine and/or dactinomycin and no radiotherapy), a four-drug regimen is adopted (combinations of doxorubicin and/or cyclophosphamide and carboplatin and/or etoposide); for group BB, an intensive reinduction regimen is given (including the combination of etoposide and carboplatin with either ifosfamide or cyclophosphamide), followed by either high-dose melphalan and autologous stem cell rescue or two further reinduction courses; for group CC, camptothecins (irinotecan or topotecan) or novel compounds are recommended [13,26]. (i) Lung metastases Pulmonary nodules seen on chest CT and not on chest radiograph ('CT only' metastases) do not mandate treatment with whole-lung irradiation in COG-5. COG-4 data raise the possibility that children with CT-only pulmonary nodules who receive whole-lung irradiation have fewer pulmonary relapses than those who were treated less aggressively (based on the extent of locoregional disease, with 2 or 3 drugs), but a greater number of deaths due to treatment toxicity (4-year event-free survival 89 vs. 80%, overall survival 91 vs. 85%). Lung nodules should be treated with lung irradiation. There are some data on rapid early responders who clear their lungs with chemotherapy alone, in whom lung irradiation may be omitted, but in general, persistent lung lesions require irradiation. The nodules should be removed to confirm diagnosis [31]. (j) Infants with WT The most common solid renal mass in infants <6 months old is Congenital Mesoblastic Nephroma (CMN). Additionally, WT in this group does not necessarily need adjuvant chemotherapy, should the very-low-risk requirements above be met. The SIOP recommends primary nephrectomy for infants younger than 6 months (182 days) unless tumors are judged not suitable for immediate nephrectomy. Postoperative chemotherapy for infants is similar to that in older patients undergoing direct nephrectomy, with drug doses adjusted according to age and body weight [44]. Differential Diagnosis of Wilms Tumor. (a) Clear Cell Sarcoma of the Kidney (CCSK) CCSK accounts for approximately 3% of renal tumors. Its location, clinical presentation, gross appearance and age at diagnosis are the same as those of WT. It has also been regarded as an unfavourable histologic variant of WT with poor prognosis and was previously called the "bone metastasizing renal tumor". The incidence peaks during the second year of life (COG: mean age at presentation 36 months; range 2 months-14 years). The male-to-female ratio is 2:1. It has distinctive histopathologic features and a much higher rate of relapse and death than favourable histology WT. The histopathologic characteristics include a wide diversity of features, ranging from spindle cell to epithelioid patterns. Most tumors show the classic histological picture, i.e. multiple blended patterns. The following histopathologic variants have been described: myxoid, sclerosing, cellular, epithelioid, palisading, spindle cell, storiform, and anaplastic pattern. Bone metastasis is the most common mode of relapse, followed by lung metastases, local (abdominal/retroperitoneal) recurrence, and brain metastases.
CCSK metastases are frequently encountered in unusual soft-tissue sites (e.g. scalp, epidural space, nasopharynx) and other sites (orbital). The time interval to relapse ranged from <16 months to 4 years. Although the overall relapse rate is significantly lower for patients treated with doxorubicin, the risk of recurrence is prolonged. Currently (COG-5), patients with CCSK are treated with initial nephrectomy regardless of stage, abdominal radiation (10.8 Gy) and combined chemotherapy with actinomycin D, vincristine and doxorubicin. The main prognostic factors for favourable outcome in CCSK are revised stage I, age at diagnosis (2-4 years), therapy with doxorubicin and absence of tumor necrosis. Moreover, stage I-III patients do very well [45]. (b) Rhabdoid tumor of kidney (RTK) It was initially regarded as a solid monophasic, or rhabdomyosarcomatoid, variant of unfavourable histology WT. It is now recognized as a separate highly malignant entity. RTK represents only 1.8% of cases entered into COG since 1969, with a median age at presentation of 17 months and a slight male preponderance (male-to-female, 1.5:1). In about 15% of RTK, patients develop other primary embryonal tumors in the midline posterior fossa, particularly medulloblastoma. These intracranial tumors are histologically distinct from the primary renal lesion. In contrast to WT, about 80% of RTKs have stage III or IV disease at presentation. Grossly, they are bulky, solid and relatively well-circumscribed lesions. The histogenesis remains controversial. Deletion of the hSNF5/INI1 gene on chromosome 22 has been found in all of these tumors. The tumor behavior is extremely aggressive and clinical management (triple chemotherapy) has not proven successful. So far, male sex and high tumor stage are the only identified unfavourable prognostic indicators. Metastases occur most frequently in the lung (70%) and brain, and most patients with relapse die from tumor progression (COG: 96%). Brain imaging is therefore required at the time of diagnosis. The reported survival rate at 3 years is less than 20% [46]. (c) Congenital Mesoblastic Nephroma (CMN) About 2.8% of all renal neoplasms in children are CMNs. It is the most common benign renal tumor in neonates and a low-grade spindle cell tumor which arises from the renal medulla. It is also known by other names like fetal renal hamartoma, leiomyomatous hamartoma and mesenchymal hamartoma of infancy. This tumor is most common in infants <6 months of age; lymph node sampling is mandatory for staging and in case of non-CMN pathology. With the increasing use of antenatal USG, many cases of CMN have been detected in utero. There is an increased association with prematurity and polyhydramnios. Nearly all solid renal tumors presenting in the first week of life are mesoblastic nephromas. However, a few cases have been reported in older children. The mean age at diagnosis is 3.4 months, with a male preponderance (male-to-female ratio 1.8:1). Hypertension, increased renin concentration and skeletal fibromatosis have been reported. On USG, it presents as an evenly echogenic mass with concentric echogenic and hypoechoic rings resembling uterine fibroids. Haemorrhage and cyst formation secondary to central regions of necrosis may occur with time. Calcification is rare. Grossly, it is a light tan, fleshy mass with a whorled configuration and ill-defined peripheral borders, blending into the adjacent renal parenchyma and even the perirenal fat. Most are centred near the hilum of the kidney.
Microscopically, it consists of monomorphic spindle-shaped cells, resembling fibroblasts with scant interstitial collagen. Two morphological subtypes are distinguished: the classical or leiomyomatous type and the atypical or cellular type. Mixed forms have also been described. Despite excellent prognosis, local recurrence and even tumor-related deaths have been described and were always related to the cellular (atypical) form or to the mixed form, particularly in patients aged more than 3 months and in those cases where surgical removal was not complete. Cytogenetic studies have reported common trisomies in cellular CMN, particularly of chromosome 11, and t(12;15)(p13;q25)-associated ETV6-NTRK3 gene fusions. Total surgical excision, independent of histological type and without further therapy, is recommended for most patients as the treatment of choice. Tumor rupture and difficulties in achieving clear surgical margins have been frequently reported but did not affect the excellent prognosis [47]. (d) Intrarenal neuroblastoma and intrarenal teratoma Neuroblastoma mainly affects children aged between 2 months and 2 years and is slightly more common in Caucasian boys. In most cases this tumor resolves spontaneously, leaving just a focus of fibrosis or calcification in adults. Intrarenal neuroblastomas are rare tumors and pose diagnostic challenges. Clinically and radiologically, they are indistinguishable from WT. Elevated urinary vanillylmandelic acid (VMA) levels and serum NSE should allow a differential diagnosis before surgery. The prognosis is grave. Sacrococcygeal teratomas may contain elements of WT, and WTs have been found to produce alpha-fetoprotein. A few cases of intrarenal teratomas have also been described. The diagnosis depends on histological examination. Teratoid WT is an unusual variant of nephroblastoma, in which there are different types of cells and tissues along with areas of WT [46]. After complete resection, the prognosis should be excellent provided the tumor does not contain yolk sac elements. Prognosis. The prognosis of WT is the most favourable among all solid tumors. The survival rate is 95% in patients in stages I and II, 75-80% in stage III patients and 65-75% in patients with stage IV. WT can be classified into favourable and anaplastic histology groups for prognostic purposes. Only 15% of patients with favourable histology have recurrent disease, compared to 50% in those with anaplastic histology. The most common sites of recurrence are the lungs, pleura, tumor bed and the liver. Among all patients with WT, those with liver involvement have a poor prognosis as compared to those with lung metastasis. Diffuse anaplasia confers poor prognosis, is associated with chemotherapy resistance and may still be present after preoperative chemotherapy; however, children with stage I anaplastic tumors have an excellent prognosis. Stage V patients have a 4-year survival rate of 94% for those with the most advanced lesion of stage I or stage II, and 76% for those with the most advanced lesion of stage III [48,49]. Thus, the important prognostic factors for WT are: 1. Stage of the disease; 2. Favourable or unfavourable histology; 3. Metastases at presentation; 4. Regional lymph node involvement; 5. Hyperdiploidy, which correlates well with the anaplastic variety. Long-term complications of WT Fortunately, Wilms tumor is a curable malignancy, but iatrogenic sequelae are possible. Paulino et al. reported late effects of therapy in more than two thirds of children treated for WT [50].
Besides morbidity from chemotherapeutic agents, potential side effects of radiotherapy like intestinal strictures, ulceration, perforation, haematochezia, growth arrest and osteonecrosis have to be considered [50]. (a) Renal function COG and SIOP studies showed that the risk of renal failure for patients with unilateral WT and a normal opposite kidney is very low (0.25%). Most of these children had unrecognized renal disease (Denys-Drash syndrome), followed by radiation nephritis. In patients with nephrectomy and abdominal irradiation, renal dysfunction is more common. However, the development of compensatory post-nephrectomy hypertrophy of the contralateral kidney is obvious, and proteinuria and hypertension may occur long after nephrectomy. 'Renal failure' in these patients is most often caused by bilateral nephrectomy, followed by radiation nephritis and surgical complications. The DTPA clearance after unilateral nephrectomy for WT was found to be normal. However, microalbuminuria in 24-h urinary collections has been detected in 84% of the patients, indicating evidence of hyperfiltration injury [51]. This highlights the need for close monitoring of the renal function of long-term follow-up patients after WT, in addition to the routine monitoring for tumor recurrence. (b) Lung damage Both chemotherapeutic agents and total lung irradiation can cause severe changes in pulmonary function. Prophylaxis against Pneumocystis carinii is recommended for patients receiving pulmonary irradiation. (c) Congestive heart failure Congestive heart failure is typically seen after administration of anthracyclines. Reported cardiotoxicity includes electrocardiographic changes, changes in myocyte morphology (necrosis and fibrosis), decreased cardiac function and congestive heart failure. Dose-related cardiomyopathy caused by doxorubicin is a well-known complication, reported for approximately 5% of patients receiving a cumulative dose of 400-500 mg/m². MUGA scans can be used to assess left ventricular ejection fraction (LVEF) and myocardial movements, and thus timely discontinuation of doxorubicin can prevent congestive heart failure [51]. (d) Liver damage COG-4 studies reported a dose-related incidence of hepatotoxicity in patients receiving chemotherapy (especially vincristine and actinomycin D). Irradiation also increases the risk for hepatotoxicity and veno-occlusive disease, as characterized by hepatomegaly, elevated liver enzymes, hyperbilirubinemia and ascites [51]. (e) Infertility Damage to the reproductive systems may occur as a late sequela of both gonadal radiation and chemotherapeutic agents. Radiation effects even on prepubertal germ cells may lead to hormonal dysfunction (hypogonadism) or infertility [51]. Cyclophosphamide is a major risk factor for azoospermia. (f) Second malignant neoplasms The risk of developing a second malignant neoplasm in patients with successfully treated WT is 1.6-5.6% [51]. Tumors mainly seen in the irradiated field are hepatocellular, bone, breast and thyroid malignancies. (g) Musculoskeletal function Scoliosis and other musculoskeletal abnormalities, including lower rib hypoplasia and limb-length inequality, have been found more frequently in irradiated patients than in those who did not receive radiotherapy. Abdominal radiation can also produce a significant reduction in sitting height and a more modest decrease in standing height. These effects are more pronounced the younger the patient is at the time of radiotherapy.
Flank and abdominal radiotherapy doses of 20-30 Gy produce a height loss that depends on age at treatment: approximately 9 cm for a child aged 1 year, 7 cm at 5 years, and 5.5 cm at 10 years [51]. Ionizing radiation has been well documented to interfere with epiphyseal growth. Follow up. After completion of therapy, the frequency of imaging is dependent on the stage and histology of the tumor. Moreover, physical and laboratory tests coincide with the schedule for imaging. In general, all patients are reviewed every 3 months for the first year, and then every 6 months for another 2 years. During each of the follow-ups in the first three years, a radiological evaluation is recommended. This may be an ultrasound or CECT scan in addition to a chest x-ray [25]. The likelihood of recurrence after the first three years is lower; however, these patients should be followed up every year for various long-term complications. Major challenges in Developing Countries. Challenges faced by developing countries include a huge population with a large number of cases, poverty, malnutrition and presentation of the disease in advanced stages (huge bulky tumors), coupled with non-compliance with treatment schedules and limited facilities for advanced surgery and the supportive services that are necessary for proper management of these cases. Future perspectives Recent advances in understanding the molecular biology of the tumorigenesis of WT have provided significant implications for clinical management. Thus, both large study groups (COG and SIOP) currently aim to intensify treatment for patients with poor prognosticators while reducing therapy, and its subsequent long-term complications, for those with favourable prognostic features. Parenchymal-sparing renal surgery for patients with small unilateral WT remains controversial. Treatment of children with Wilms tumor should certainly involve a team of specialized paediatric surgeons, oncologists, radiologists, pathologists and radiotherapists. Partial nephrectomy or nephron-sparing surgery should be done in selected patients. Low-risk patients should receive fewer chemotherapeutic agents and at lower cumulative doses. In COG these patients receive no adjuvant therapy. Trials to further reduce radiotherapy doses or omit radiotherapy in selected cases may be undertaken. Conclusion Most patients with WT have a good prognosis owing to multimodality treatment and multidisciplinary care, but further studies should be done on the use of chemotherapy and radiotherapy under more accurate risk-stratified strategies and on decreasing the late effects of surgery. Consent The patients have given their consent for the study to be published. Figure permission Permission has been obtained from the parents for the photographs of the child to be potentially published. Funding There was no funding from any source. Declaration of competing interest We declare that we do not have any conflict of interest.
2021-03-22T17:33:17.652Z
2021-03-07T00:00:00.000
{ "year": 2021, "sha1": "ba44cbfbbf34bf996e6dd2aeca6dc0b9fec577a0", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.amsu.2021.102202", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ba44cbfbbf34bf996e6dd2aeca6dc0b9fec577a0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
236320712
pes2o/s2orc
v3-fos-license
Intra- and Inter-Rater Reliability of Strength Measurements Using a Pull Hand-Held Dynamometer Fixed to the Examiner's Body and Comparison with Push Dynamometry Hand-held dynamometers (HHDs) are the most widely used method to measure strength in clinical settings. There are two ways to perform the assessment: pull and push. The purpose of the present study was to evaluate the intra- and inter-rater reliability of a new measurement modality for pull HHD and to compare the inter-rater reliability and agreement of the measurements. Forty healthy subjects were evaluated by two assessors with different body composition and manual strength. Fifteen isometric tests were performed in two sessions with a one-week interval between them. Reliability was examined using the intra-class correlation (ICC) and the standard error of measurement (SEM). Agreement between raters was examined using paired t-tests. Intra- and inter-rater reliability for the tests performed with the pull HHD showed excellent values, with ICCs ranging from 0.991 to 0.998. For tests with values higher than 200 N, push HHD showed greater differences between raters than pull HHD. Pull HHD attached to the examiner's body is a method with excellent reliability to measure isometric strength and showed better agreement between examiners, especially for those tests that showed high levels of strength. Pull HHD is a new alternative to perform isometric tests with less rater dependence. Introduction Quantifying the magnitude of strength is useful for rehabilitation programs, providing helpful information for setting target values and appropriate exercise loads, and for judging the effectiveness and progress of treatment [1]. The evaluation of strength is one of the usual practices of health professionals in assessing healthy individuals [2-4] and in the management of patients with different lower limb or upper limb pathologies [5], such as knee osteoarthritis [6], rotator cuff injuries [7], and neurogenic thoracic outlet syndrome [8]. Among the tools to measure strength in clinical settings, the most widely used are hand-held dynamometers (HHDs) [9], since they have advantages such as portability, cost, and ease of use compared to other more expensive and less versatile methods (i.e., isokinetic dynamometers) [10]. In general, HHDs can be classified into two types, push or pull [1,11-13]. Push HHD consists of the patient having to push against the HHD, which is usually stabilized by the examiner's hand, and has been shown to be a reliable method [10,14]. This push mode has the disadvantage that the examiner's sex and strength influence the strength values (the reliability increases when the rater is stronger than the subject) [14]. Pull HHD consists of the patient pulling the HHD, which is generally attached to a rigid structure such as an espalier, stretcher, or glass suction cup, and has also been shown to be a reliable method [12,15-18]. Materials and Methods This cross-sectional study enrolled 40 healthy volunteer subjects who were recruited through advertising in Blasco Ibañez Campus of the University of Valencia (Table 1 shows the participants' characteristics). The specific inclusion criteria were: (1) participants' age between 18 and 40 years; (2) not having undergone a surgical operation on the lower or upper limb in the last two years; and (3) not having suffered pain episodes in the lower or upper limb two months before data collection. After a detailed explanation of the study procedures, the participants signed informed consent.
The experimental protocol was approved by the Ethics Committee of the University of Valencia (Spain) (H1533739889520). Data collection was carried out in the clinical research laboratory of the Department of Physiotherapy (University of Valencia). Table 1. Characteristics of the participants (n = 40). Procedures Two sports and health professionals (a female and a male) were chosen to carry out the isometric tests, both with 1 year of clinical experience and with a master's degree. The two raters, with different body composition (body mass: 55.4 kg and 91.3 kg; stature: 166 cm and 180 cm, respectively), were chosen to reflect different profiles of clinicians working in both clinical and research settings. As in previous studies [13], the raters completed a test of one maximum repetition of seated bench press as an indicator of general upper-extremity strength (47 kg for the female tester and 81 kg for the male tester). Raters received a 1-h training session on how to perform the measurements with both the pull HHD and the push HHD. Following the training, they performed the testing procedures with 3 volunteers, supervised in turn by a health professional with extensive HHD experience. Both examiners were blinded to the strength values, with a third researcher responsible for viewing the strength values and recording them. The pull HHD selected for the study was DiCI (Ionclinics S.L, L'Alcudia, Spain), which registers the traction strength through two hooks in series [19]. For the DiCI measurement, one end was attached with a strap to the subject's ankle or wrist and the other end, with a belt, to the examiner's body (Appendix A). On the other hand, the push HHD used was MicroFET2 (Hoggan Health Technologies Inc., Salt Lake City, UT, USA), widely used in the literature [20,21]. Isometric tests were performed on the dominant leg or arm in two sessions with a one-week interval between them. Both sessions began with strength evaluation using the pull HHD by tester 1 (male), thus allowing evaluation of intra-tester reliability (both intra-session and inter-session). Subsequently, the isometric strength of the participants was again measured either by rater 1 or rater 2, randomly, with the pull HHD in the first session and with the push HHD in the second session, in order to examine the inter-rater reliability of each HHD (Figure 1).
A warm-up was performed on a bicycle with low resistance and at a comfortable speed (80 revolutions per minute) for 10 min, followed by three submaximal isometric contractions for each position. In addition, these submaximal contractions were also used to familiarize the participants with correct execution of the tests. All tests were performed on a stretcher. The lower limb tests were performed in the supine position for the hip abduction (Hip-ABD), hip adduction (Hip-ADD), ankle flexion (Ank-F), and ankle extension (Ank-E) tests; in the prone position for the hip extension (Hip-E), external hip rotation (Hip-ER), and internal hip rotation (Hip-IR) tests; and in the sitting position for the hip flexion test (Hip-F). The upper limb tests were performed in supine for elbow flexion (Elb-F) and extension (Elb-E), for shoulder flexion (Sho-F), extension (Sho-E), and abduction (Sho-A), and for shoulder internal (Sho-IR) and external (Sho-ER) rotation. These isometric tests (Figures A1 and A2), both for the lower and upper limb, were selected because they showed small measurement variation in previous studies [13,22,23]. Test order was randomized for each participant to avoid systematic bias related to this. Two 5 s maximal voluntary isometric contractions (MVICs) were performed per movement, with 60 s of rest between measurements. A rest of 10 min was applied between rater measurements. The participants were instructed to make the maximum effort and received oral encouragement to maintain the strength performed. Statistical Analysis Participant characteristics and strength values (Newtons) are presented as mean ± standard deviation (SD) or percentages, as appropriate. The mean between repetitions was used for analyses.
Custom-written scripts in MATLAB (version R2019b; The MathWorks, Natick, MA, USA) were used to perform all statistical analyses, by a researcher blinded to the measurements. First, intra- and inter-rater reliability were examined using the intra-class correlation coefficient (ICC) and the standard error of measurement (SEM). Second, for the analysis of the agreement between the strength measurements (rater 1 and rater 2) and to assess systematic between-rater bias, that is, whether values obtained by one rater systematically differed from those of the other rater, paired t-tests were used [26]. Furthermore, the differences between raters were calculated for each method and compared using paired t-tests, with a level of significance of p < 0.05. Additionally, to illustrate the differences between HHDs as a function of the strength obtained, Bland-Altman plots were produced for those tests with higher strength values. Sample size was calculated using the formula for reliability studies based on confidence intervals (CIs) described in [27]. With the number of instruments (k) equal to 2, the CI around r (the reliability coefficient) of 0.05, and an estimated r of 0.95, the sample size (n) was calculated to be 25 participants. However, ultimately, we included 15 more participants in the final sample in order to increase the study power. Results Table 2 shows the intra-rater reliability for the tests performed with the pull HHD, both intra-session and inter-session. The intra-session reliability showed excellent values, with ICCs ranging from 0.996 to 0.998. Furthermore, the SEM values were less than 1%. The inter-session reliability obtained similar values, with ICCs higher than 0.995 and SEMs lower than 1%. Inter-Rater Reliability Table 3 shows the inter-rater reliability and agreement for the tests performed with the pull HHD. All tests showed excellent reliability (ICCs > 0.991), with SEMs lower than 1%. The agreement between raters showed differences between the measurements of rater 1 and rater 2 ranging from −0.69% to −3.78%, always in favor of rater 1. Figure 2 illustrates differences between raters for the measurements of each participant, in the lower limb (Figure 2A) and the upper limb (Figure 2B). As can be seen, for some movements (e.g., hip abduction/adduction or hip rotations) both the pull and the push HHD methods showed differences lower than 20 N (rater differences ranged between 0.20% and 0.89% for the pull HHD (Table 3) and between 0.26% and 1.59% for the push HHD (Table 4)). On the other hand, for movements such as Hip-F, Ank-F, or Sho-E, both methods show greater differences between raters, but these are greater for the push HHD than for the pull HHD method; the differences between raters are −3.61%, −3.78%, and −2.84% for the pull HHD (Table 3) and −9.68%, −12.91%, and −9.71% for the push HHD (Table 4).
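For readers who want to reproduce the core agreement statistics used in this kind of study, the Python sketch below computes the SEM from an ICC and the sample SD, and the Bland-Altman bias and limits of agreement from paired rater measurements. The SEM formula shown (SD × √(1 − ICC)) is the conventional one; the paper does not state exactly how its SEM was computed, so treat that as an assumption, and the example data are invented, not values from this study.

```python
# Sketch of the agreement statistics described above.
# Assumptions: SEM = SD * sqrt(1 - ICC) (conventional formula, not stated
# explicitly in the paper) and standard Bland-Altman bias +/- 1.96 SD limits.
import numpy as np

def sem_from_icc(scores: np.ndarray, icc: float) -> float:
    """Standard error of measurement from the sample SD and an ICC."""
    return float(np.std(scores, ddof=1) * np.sqrt(1.0 - icc))

def bland_altman(rater1: np.ndarray, rater2: np.ndarray):
    """Mean difference (bias) and 95% limits of agreement between raters."""
    diffs = rater1 - rater2
    bias = float(diffs.mean())
    sd = float(diffs.std(ddof=1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Invented example: hip-flexion strength (N) for 5 subjects, two raters.
r1 = np.array([310.0, 285.0, 342.0, 298.0, 305.0])
r2 = np.array([318.0, 290.0, 351.0, 301.0, 312.0])
bias, loa = bland_altman(r1, r2)
print(f"bias = {bias:.1f} N, 95% LoA = ({loa[0]:.1f}, {loa[1]:.1f}) N")
print(f"SEM at ICC = 0.99: {sem_from_icc(r1, 0.99):.2f} N")
```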
Bland Altman plots show how from strength values greater than 200 N, the differences between raters for the push HHD increase progressively, while the differences in the pull HHD remain stable. Discussion Our results support our initial hypothesis that stabilizing a pull HHD to the examiner's body has excellent reliability achieved for isometric strength measurements performed by examiners with different manual strength and in tests with different strength values. In addition, this new method presents a better agreement between examiners than push HHD against the hand, especially for tests with strength values greater than 200 N. To our knowledge, this study is the first to examine the reliability of a pull HHD attached to the examiner's body. Intra-reliability for this method proved to be excellent (ICCs > 0.998). Other studies with stabilized pull HHDs (these to a fixed external element) have also obtained high ICCs for intra-rater reliability, both for hip and ankle tests (ICCs Discussion Our results support our initial hypothesis that stabilizing a pull HHD to the examiner's body has excellent reliability achieved for isometric strength measurements performed by examiners with different manual strength and in tests with different strength values. In addition, this new method presents a better agreement between examiners than push HHD against the hand, especially for tests with strength values greater than 200 N. To our knowledge, this study is the first to examine the reliability of a pull HHD attached to the examiner's body. Intra-reliability for this method proved to be excellent (ICCs > 0.998). Other studies with stabilized pull HHDs (these to a fixed external element) have also obtained high ICCs for intra-rater reliability, both for hip and ankle tests (ICCs ranged from 0.88 to 0.98) [12] and shoulder tests (ICCs ranged from 0.94 to 0.98) [16,17]. Otherwise, compared with other studies where they have used pull HHD attached to structures, our method showed ICCs for inter-rater reliability (ICCs > 0.991) similar or slightly superior to those studies (ICCs ranging from 0.69 to 0.99 for hip tests, from 0.76 to 0.99 for ankle tests, and from 0.86 to 98 for shoulder tests) [12,15,17,28]. Thus, the reliability of attaching a pull HHD to the examiner's body would not be inferior to attaching it to a fixed external element. The agreement between the examiners' measurements proved to be different between methods, especially in those tests with strength values greater than 200 N. As previous authors have described, in those tests with values greater than 200 N, the measurements of HHD without fixation compared to with fixation tend to underestimate the strength values [13,29]. Our results provide the novelty that fixing the HHD to the examiner's body is sufficient to reduce such underestimation. For example, in tests such as Hip-F or Ank-F (with values close to or greater than 300 N), the differences between raters for the push HHD were 9.68% and 12.91%, respectively, compared to 3.61% and 3.78% for the pull HHD. In the upper limb this is similar, where the Sho-E, with values close to 300 N, showed differences between testers of 9.71% for push HHD versus 2.84% for pull HDD. The differences for push HHD between raters are similar to studies that have also used examiners with different strengths. For example, Kelln et al., 2008 found 8.87% differences for Ank-F [30]. This study proposes a new method to perform isometric tests, stabilizing an HHD pull in the examiner's body. 
This method has shown excellent inter-and intra-rater reliability, and compared to other methods, its use provides clinical advantages for sports and health professionals. First, compared to other methods that have tried to solve the problem of examiner interaction (e.g., fixing the HHD to espalier, metal bar, or glass suction cup) [12,15,17,28,31,32], pull HHD method of this study presents a similar reliability to such methods but without subtracting clinical application as it does not need external fixation or is limited to specific movements. Second, pull HHD reduces the interaction of the examiner's strength compared to the use of push HHD against the hand. Since push HHD is a common method of strength measurement among sports and health professionals due to its easy use, but it presents the bias of the examiner's interaction, pull HHD fixed to the examiner's body can be an alternative of easy use and less bias. Likewise, other types of clinical test has been used to assess the muscle performance, but with weak positive correlation against HHD [33]. This study had several strengths. First, we performed multiple tests of both lower limb and upper limb movements, eight and seven, respectively. To the best of our knowledge, this is the first study to examine the reliability of fifteen isometric tests, so we proposed a broad measurement protocol with HHD in the same study. Second, we avoided an information bias, since the raters were blinded from the strength values as there was a third researcher who was in charge of reading and recording them. In turn, a fourth researcher in charge of the statistical analysis was blinded as to which HHD corresponded to the different strength records. The main limitation was that the measurements were made on a healthy population, limiting their generalization to other populations. Although it has been shown that the reliability of HHDs is lower in healthy population than in patients (due to greater strength and less variability), future studies should examine our protocol in clinical populations. Likewise, the inter-rater reliability was carried out by two raters, a procedure that according to the literature is sufficient for validation, but that the involvement of three or more raters might have provided even more reliable information. Future studies should address this limitation by considering, at least, three raters. Even so, we consider that this first study is essential to provide normative values in healthy people with which to compare. Conclusions This study examines the intra-and inter-rater reliability of a new proposal to measure isometric strength, a pull HHD attached to the examiner's body. This method showed excellent reliability and acceptable agreement between the examiners' measurements, who had a different body and strength profile. Furthermore, compared to the traditional method of strength measurement with HHD, pushing against the examiner's hand, pull HHD showed better agreement between examiners, especially for those tests that showed high levels of strength. Thus, this new use of pull HHD may represent a new alternative for professionals who want to perform isometric tests with less influence of their strength on the values. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The datasets generated during and/or analyzed during the current study are available from the corresponding authors on reasonable request. 
Conflicts of Interest: The authors declare no conflict of interest. Appendix A (Figures A1 and A2: illustrations of the isometric test positions for the lower and upper limb).
2021-07-26T05:34:08.710Z
2021-07-01T00:00:00.000
{ "year": 2021, "sha1": "457d2ba9f85ffad509dd60d6ac4fda2bf02f4c5f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4418/11/7/1230/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "457d2ba9f85ffad509dd60d6ac4fda2bf02f4c5f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
23895717
pes2o/s2orc
v3-fos-license
Epidermal Growth Factor Receptor (EGFR)-mediated Positive Feedback of Protein-tyrosine Phosphatase ϵ (PTPϵ) on ERK1/2 and AKT Protein Pathways Is Required for Survival of Human Breast Cancer Cells* Background: To investigate the functional role of PTPϵ in human breast cancer cell lines. Results: PTPϵ was up-regulated in human breast cancer cells in an EGFR- and ERK1/2-dependent manner. PTPϵ displayed a positive role in survival of human breast cancer cells. Conclusion: PTPϵ generates a positive feedback regulatory loop required for survival of human breast cancer cells. Significance: PTPϵ could be a putative target in breast cancer treatment. Increased tyrosine phosphorylation has been correlated with human cancer, including breast cancer. In general, the activation of tyrosine kinases (TKs) can be antagonized by the action of protein-tyrosine phosphatases (PTPs). However, in some cases PTPs can potentiate the activation of TKs. In this study, we have investigated the functional role of PTPϵ in human breast cancer cell lines. We found the up-regulation and activation of receptor PTPϵ (RPTPϵ) in MCF-7 cells and MDA-MB-231 upon PMA, FGF, and serum stimulation, which depended on EGFR and ERK1/2 activity. Diminishing the expression of PTPϵ in human breast cancer cells abolished ERK1/2 and AKT activation, and decreased the viability and anchorage-independent growth of the cells. Conversely, stable MCF-7 cell lines expressing inducible high levels of ectopic PTPϵ displayed higher activation of ERK1/2 and anchorage-independent growth. Our results demonstrate that expression of PTPϵ is up-regulated and activated in breast cancer cell lines, through EGFR, by sustained activation of the ERK1/2 pathway, generating a positive feedback regulatory loop required for survival of human breast cancer cells. In human breast cancer cells, the tyrosine kinases (TKs) from the human epidermal growth factor receptor (HER) family are major drivers of oncogenesis, as a consequence of their increased Tyr phosphorylation and activity (1). Conversely, a group of protein-tyrosine phosphatases (PTPs) counteract the action of HER kinases and are candidate breast cancer suppressor proteins (2). Src-family kinases (SFKs) are non-receptor TKs whose activity is also relevant for breast cancer tumorigenesis (3). However, most of the PTPs that dephosphorylate SFKs, including the related receptor PTPs PTPRA/PTPα and PTPRE/PTPϵ, have a positive role on SFK activity, suggesting an oncogenic role for these PTPs (4-7). PTPϵ is present in cells as a receptor type (RPTPϵ) or as a cytosolic (cytPTPϵ) protein, encoded by different mRNAs (8-10). Alternative translation initiation from both RPTPϵ and cytPTPϵ mRNAs renders a third PTPϵ cytosolic protein product, p67-PTPϵ; and calpain cleavage generates in cells another cytosolic PTPϵ protein species, named p65-PTPϵ (11). In leukocytes, an additional cytosolic isoform lacking the C-terminal portion has been reported (12). RPTPϵ is located at the plasma membrane and is highly glycosylated in the extracellular region, while cytPTPϵ, p67 and p65 are mainly localized in the cytosol (8,11,13,14). Both RPTPϵ and cytPTPϵ can form dimers, making the phosphatase inactive, and removal of the 22 N-terminal residues causes enzyme inactivation by constitutive dimerization of the protein (15).
In mammary tumor cells, as well as in HEK293 cells, RPTPϵ can be phosphorylated at its C-terminal Tyr-695 residue by HER2, which reduces dimerization and augments RPTPϵ activity to dephosphorylate and activate Src (16). On the other hand, phosphorylation by epidermal growth factor receptor (EGFR) of cytPTPϵ on the equivalent Tyr (Tyr-638) residue increases cytPTPϵ association to tubulin from microtubules, reducing its phosphatase activity and localization near the plasma membrane (17). Finally, in osteoclasts, integrin activation also triggers Src-dependent cytPTPϵ phosphorylation on Tyr-638, which results in increased Src dephosphorylation and activation (18). PTPϵ-deficient mice develop normally, breed well and are fertile, although they show functional abnormalities in bone marrow-derived macrophages (19) and defects in osteoclast subcellular organization and function (20). Abnormalities in cellular morphology, together with reduced intracellular K+ content and increased Ca2+-activated K+ channel activity, were also documented in red cells from PTPϵ-deficient mice (21). In addition, at an early post-natal age these mice displayed hypomyelination in the sciatic nerve axons, most likely as a result of hyperphosphorylation of the voltage-gated potassium channel Kv2.1 in Schwann cells (22,23). RPTPϵ is overexpressed in mouse mammary tumors initiated by activated Ras or activated HER2 (9). Other tumor types from the Ras-transgenic mice did not overexpress RPTPϵ, suggesting a specific linkage with mammary epithelial tumors. In mammary tumor cells where HER2 is expressed, the up-regulation of PTPϵ, together with its activation upon phosphorylation by HER2, facilitates the dephosphorylation of Src (at Tyr-530), Yes and Fyn, and activation of these SFKs (24,25). Transgenic mice overexpressing RPTPϵ under the mouse mammary tumor virus (MMTV) promoter developed mammary gland hyperplasia, in which foci of transformed cells were observed, accompanied by residual milk production following several cycles of pregnancy. Additionally, these mice developed sporadic mammary tumors more frequently than wild type control mice (26). Cells from mammary epithelial tumors initiated by HER2 in PTPϵ knock-out mice confirmed these findings by appearing less transformed morphologically and with reduced cell proliferation (25). Together, these findings suggest that PTPϵ is necessary for the fully transformed phenotype of HER2-induced mouse mammary tumor cells (4). The alteration of PTPϵ expression in human cancer has been poorly documented. Global genomic analysis identified PTPRE as a gene deleted in glioblastoma and whose expression is down-regulated in malignant pheochromocytoma (36,37). In breast cancer, DNA microarray analysis revealed up-regulation of PTPϵ in invasive breast carcinomas versus normal breast tissue (38). On the other hand, human HER2-positive versus HER2-negative breast carcinomas, as well as normal fibroadenomas versus invasive ductal carcinomas, displayed down-regulation of PTPϵ expression (39). These findings support a role for PTPϵ both as a tumor suppressor and as an oncogene in human cancer. However, the putative role of PTPϵ in human cancer, including breast cancer, has not been experimentally addressed. In this study we have analyzed the expression and function of the different PTPϵ isoforms in human breast cancer cell lines.
Our results reveal the existence of an EGFR-induced pro-survival positive feedback loop on the ERK1/2 and AKT pathways exerted by PTPϵ, and suggest an anti-apoptotic and oncogenic role for PTPϵ in human breast cancer. Cell Culture, RNA Interference, Cell Lysis, and Immunoblot-All parental human breast cancer cell lines were obtained from ATCC, and were grown as indicated (40). The LNCaP prostate adenocarcinoma cell line was grown in Roswell Park Memorial Institute medium (RPMI) 1640 (Invitrogen) supplemented with 10% FBS, 2 mM L-glutamine, 100 units/ml penicillin and 100 μg/ml streptomycin. HT-29 colon carcinoma cells were grown in McCoy 5A Medium (Invitrogen) supplemented with 10% FBS, 2 mM L-glutamine, 100 units/ml penicillin, and 100 μg/ml streptomycin. The SH-SY5Y neuroblastoma cell line was grown in Dulbecco's Modified Eagle Medium (DMEM; Invitrogen) supplemented with 10% FBS, 1% nonessential amino acids, and 1% sodium pyruvate. To generate double-stable cell lines, the MCF-7/Tet-On cell line was transfected with the pTRE2hyg plasmids (Clontech) using Fugene HD (Roche Diagnostics). To induce PTPϵ expression in MCF-7/Tet-On cell lines, cells were pre-treated with 100-500 ng/ml doxycycline (Dox; Sigma) 24 h before processing for further treatment or analysis. Silencing of gene expression by RNA interference in MDA-MB-231 cells and in parental MCF-7 cell lines was performed by transfection of validated siRNAs (Ambion and Qiagen) with Lipofectamine 2000 (Invitrogen). For flow cytometry analysis, soft agar colony-growth and cell proliferation/viability MTT analysis, cells were processed 24-72 h post-transfection. Whole cell protein extracts were prepared by cell lysis and immunoblot as indicated (40). Gene Expression Profiling by DNA Microarray and Semiquantitative and Quantitative Real-time PCR-DNA microarray analysis of gene expression was performed using total RNA from MCF-7 cells treated or not with PMA for 4 days, as previously described (40). Semi-quantitative RT-PCR or quantitative real-time PCR (qPCR) were performed using total RNA from cell lines treated or not with PMA, FGF, or serum (FBS) for different times. RNA was purified using the illustra RNAspin mini purification kit (GE Healthcare). The breast tissue RNA was purchased from Clontech (Human Total RNA Master Panel II). 1 μg of total RNA was subjected to reverse transcription (RT) using RevertAid reverse transcriptase, oligo(dT)18 primers, and RiboLock RNase inhibitor (all from Fermentas). PCR reactions were performed using Taq DNA polymerase (Roche), as described (40). To assess the expression of the different PTPϵ isoforms by semi-quantitative RT-PCR, PCRs were performed on the synthesized cDNA samples (100 ng/reaction) using sets of isoform-specific primers (cytoplasmic forward: 5′-AGCAACAGGAGTAGCTTTTCC-3′; receptor forward: 5′-CGGGCGCCTCCCAGCCGC-3′; cytoplasmic and receptor reverse: 5′-CCAGTTGGCTCAGAATCACCC-3′) and β-actin primers (as a control). qPCR reactions were performed using a LightCycler 480 (Roche), with the corresponding SYBR Green I Master (Roche), and validated primer sets (Qiagen) specific for the PTPs, EGFR, or the reference genes HPRT, ACTB, and HMBS. All quantifications were normalized to the reference gene data. Relative quantification was performed using the comparative 2^(−ΔΔCt) method, or as log2(2^(−ΔΔCt)), according to the manufacturer's instructions.
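As a worked illustration of the comparative 2^(−ΔΔCt) quantification mentioned above, the short Python sketch below applies the standard Livak-style calculation. The Ct values, gene choices, and fold change are invented placeholders for demonstration only, not data from this study.

```python
# Worked example of the comparative 2^(-ddCt) method used for qPCR relative
# quantification. All Ct values below are invented placeholders.
import math

def relative_expression(ct_target_treated: float, ct_ref_treated: float,
                        ct_target_control: float, ct_ref_control: float) -> float:
    """Fold change of a target gene, normalized to a reference gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated     # dCt, treated sample
    d_ct_control = ct_target_control - ct_ref_control     # dCt, control sample
    dd_ct = d_ct_treated - d_ct_control                   # ddCt
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: PTPRE vs. the HPRT reference, PMA-treated vs. control.
fold = relative_expression(ct_target_treated=24.1, ct_ref_treated=22.0,
                           ct_target_control=26.8, ct_ref_control=22.2)
print(f"fold change = {fold:.2f}")                 # 2^2.5, about 5.7-fold up
print(f"log2 fold change = {math.log2(fold):.2f}") # 2.50
```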
Cell Functional Assays-Cell cycle phase distribution and cell death were determined by flow cytometry analysis after propidium iodide (PI) labeling. Cells were plated at a density of 4 × 10^5 cells per well (6-well plates), and grown in complete medium for 24 h. Cells were trypsinized and permeabilized, and labeled with PI (red fluorescence; Sigma-Aldrich). For soft agar colony formation assays, cells were plated at a density of 2500 cells per well (12-well plates) in 0.5 ml of complete media containing 0.35% cell-culture-tested agar (Sigma-Aldrich), onto a solidified bottom layer of 0.5 ml of complete media with 0.4% agar. Colonies were stained after 2-3 weeks with 0.05% crystal violet, and were photographed at ×4 and ×40 magnification. Quantification of the number of colonies was made using ImageJ 1.40g, from triplicate plates. Cell proliferation/viability was determined using the MTT assay, according to the manufacturer's protocol (Roche). Cells were plated at a density of 3000 cells per well (96-well plates) with complete medium for 24 h. Then cells were incubated for 1-4 days and collected for processing. The absorbance was measured at 570 nm. Data are presented as the average absorbance ± S.D., corrected for background, from at least three different experiments. Differential Expression of Receptor and Cytosolic PTPϵ in Human Breast Cancer Cells-To search for PTPs involved in the control of cell growth in breast cancer, a comprehensive analysis of the expression of the complete panel of human classical PTPs (36 genes) was performed. mRNA was obtained from MCF-7 human breast carcinoma cells, grown under control conditions or in the presence of the phorbol ester PMA, a PKC activator that causes prolonged activation of the MAP kinases ERK1/2 (41,42), and PTP expression was measured by DNA microarray gene-expression analysis (Fig. 1A). As shown, significant mRNA up-regulation was observed for PTPRE/PTPϵ, PTPRH/SAP1, PTPRN2/PTP-IA-2β, PTPRT/RPTPρ, and PTPN3/PTPH1. Conversely, PTPN22/LYP mRNA was significantly down-regulated. These results were validated by quantitative real-time PCR (qPCR), using oligonucleotides specific for the above-mentioned PTPs and mRNA from MCF-7 cells treated with PMA for short-term (6 h) or long-term (72 h) periods (Fig. 1B). PTPRE/PTPϵ has been related to mammary carcinogenesis in mice, although scarce information is available on its role in human cancer. Thus, we focused our study on PTPRE/PTPϵ. Quantitative expression of PTPϵ mRNA was tested in several breast cancer cell lines (BCCLs) upon short- and long-term PMA treatment (Fig. 1C, left panel). Up-regulation of PTPϵ was observed in both MCF-7 and BT474 cells after short- and long-term PMA treatment. A more moderate up-regulation of PTPϵ was observed in MDA-MB-231 and MDA-MB-468 cells upon short-term PMA stimulation. In other human adenocarcinoma cells, such as colon HT-29 and prostate LNCaP cells, no significant up-regulation was observed upon PMA treatment, whereas in SH-SY5Y human neuroblastoma cells, PTPϵ was moderately up-regulated (Fig. 1C, right panel). This indicates that PTPϵ up-regulation upon PMA treatment is not exclusive to MCF-7 cells, and is selectively achieved in different breast cancer and other cancer cell lines. We also analyzed PTPϵ expression in MCF-7 and MDA-MB-231 cells upon other ERK1/2-activating stimuli, such as fibroblast growth factor (FGF) or serum (FBS). Up-regulation of PTPϵ was observed in both MCF-7 and MDA-MB-231 cells after short- and long-term treatment with FGF or serum, demonstrating that PTPϵ up-regulation in these cells is achieved by various stimuli (Fig. 1D).
Differential Expression of Receptor and Cytosolic PTPε in Human Breast Cancer Cells-To search for PTPs involved in the control of cell growth in breast cancer, a comprehensive analysis of the expression of the complete panel of human classical PTPs (36 genes) was performed. mRNA was obtained from MCF-7 human breast carcinoma cells, grown under control conditions or in the presence of the phorbol ester PMA, a PKC activator that causes prolonged activation of the MAP kinases ERK1/2 (41,42), and PTP expression was measured by DNA microarray gene-expression analysis (Fig. 1A). As shown, significant mRNA up-regulation was observed for PTPRE/PTPε, PTPRH/SAP1, PTPRN2/PTP-IA-2β, PTPRT/RPTPρ, and PTPN3/PTPH1. Conversely, PTPN22/LYP mRNA was significantly down-regulated. These results were validated by quantitative real-time PCR (qPCR), using oligonucleotides specific for the above-mentioned PTPs and mRNA from MCF-7 cells treated with PMA for the short term (6 h) or long term (72 h) (Fig. 1B). PTPRE/PTPε has been related with mammary carcinogenesis in mice, although scarce information is available on its role in human cancer. Thus, we focused our study on PTPRE/PTPε. Quantitative expression of PTPε mRNA was tested in several breast cancer cell lines (BCCLs) upon short- and long-term PMA treatment (Fig. 1C, left panel). Up-regulation of PTPε was observed in both MCF-7 and BT474 cells after short- and long-term PMA treatment. A more moderate up-regulation of PTPε was observed in MDA-MB-231 and MDA-MB-468 cells upon short-term PMA stimulation. In other human adenocarcinoma cells, such as colon HT-29 and prostate LNCaP cells, no significant up-regulation was observed upon PMA treatment, whereas in the SH-SY5Y human neuroblastoma cells, PTPε was moderately up-regulated (Fig. 1C, right panel). This indicates that PTPε up-regulation upon PMA treatment is not exclusive to MCF-7 cells, and it is selectively achieved in different BCCLs and CCLs. We also analyzed PTPε expression in MCF-7 and MDA-MB-231 cells upon other ERK1/2-activating stimuli, such as fibroblast growth factor (FGF) or serum (FBS). Up-regulation of PTPε was observed in both MCF-7 and MDA-MB-231 cells after short- and long-term treatment with FGF or serum, demonstrating that PTPε up-regulation in these cells is achieved by various stimuli (Fig. 1D).

Next, we investigated the identity of the PTPε mRNAs and proteins in the distinct human breast cancer cell lines grown in the absence and in the presence of PMA. Semi-quantitative RT-PCR was performed, using oligonucleotides specific for cytPTPε and RPTPε mRNAs (Fig. 2A). MCF-7 cells treated with PMA expressed the RPTPε mRNA, whereas MDA-MB-231 expressed RPTPε constitutively and displayed cytPTPε up-regulation upon PMA treatment. In other breast cancer cell lines, the expression and induction of PTPε isoforms was variable. We also monitored the relative abundance of cytPTPε and RPTPε mRNAs in human breast tissue (Fig. 2A). As shown, human breast tissue expressed both cytPTPε and RPTPε mRNAs, in line with what was observed in human breast cancer cell lines. Next, the expression of PTPε protein in MCF-7 and MDA-MB-231 cells was investigated, using a specific anti-PTPε antibody that recognizes all four isoforms of PTPε (11) (Fig. 2, B and C). MCF-7 cells displayed increased expression of the RPTPε, p67, and p65 isoforms upon PMA treatment, which was sustained up to 96 h. These kinetics were delayed with respect to the kinetics of activation of ERK1/2, as monitored by pERK1/2 content (Fig. 2B). High levels of RPTPε, p67, and p65 proteins were observed in untreated MDA-MB-231 cells, which were modestly increased upon PMA treatment. Despite our mRNA results, we did not detect expression of the cytPTPε protein isoform, consistent with previous reports showing non-overlapping protein expression of the cytPTPε and RPTPε isoforms (8,10). Remarkably, the high basal content of RPTPε in MDA-MB-231 cells correlated with high basal ERK1/2 activation (Fig. 2C). We also tested PTPε protein expression upon cell stimulation with FGF or serum. RPTPε was up-regulated in MCF-7 and MDA-MB-231 cells upon these stimuli (Fig. 2, D, E, and F). In general, we observed a correlation between RPTPε up-regulation and ERK1/2 activation, suggesting a functional relation between PTPε and ERK1/2 activity.

PTPε Up-regulation in Breast Cancer Cells Depends on the EGFR and ERK1/2 Pathways-Using specific inhibitors, we tested the involvement of distinct signaling pathways, including the ERK1/2, p38, JNK, PI3K/AKT, and PKC pathways, in the up-regulation of RPTPε in MCF-7 and MDA-MB-231 cells treated with PMA (Fig. 3A, and data not shown). As shown, the PKC inhibitor GF109203X strongly diminished the PMA-triggered up-regulation of RPTPε mRNA and protein (Fig. 3A, and data not shown), in agreement with previous reports that identified PKC as the major upstream target of PMA in MCF-7 and MDA-MB-231 cells (43-47). The ERK1/2 pathway-specific inhibitor PD98059 partially inhibited the up-regulation of RPTPε, both at the mRNA and protein level (Fig. 3A). In contrast, the p38-specific SB203580, the JNK-specific SP600125, and the PI3K-specific wortmannin inhibitors slightly increased RPTPε up-regulation (data not shown). Up-regulation of EGFR, but not of other HER family members, in MCF-7 cells treated with PMA was observed by DNA microarray analysis, which was confirmed by qPCR (Fig. 3B, left panels). Also, FGF and serum stimulation triggered the up-regulation of EGFR in MCF-7 and MDA-MB-231 cells (data not shown). An increase in EGFR protein expression in PMA-treated MCF-7 cells was also observed by immunoblot (Fig. 3B, right panel). This prompted us to investigate the putative role of EGFR in PTPε up-regulation.
Treatment of cells with the AG1478 EGFR inhibitor prevented the up-regulation of RPTPε by PMA, as well as the activation of ERK1/2 and AKT (Fig. 3C). Experiments with stable MCF-7 cell lines overexpressing RPTPε (see below) demonstrated that RPTPε is phosphorylated on tyrosine 695. Remarkably, this phosphorylation was prevented in the presence of the AG1478 EGFR inhibitor (Fig. 3D). Together, these results indicate that EGFR activity is required for RPTPε up-regulation and activation.

Ectopic Expression of PTPε in MCF-7 Cells Enhances Induced ERK1/2 Phosphorylation, and Affects Colony Formation in Soft Agar-Stable MCF-7 clones were generated overexpressing, in a doxycycline (Dox)-dependent, inducible manner, cytPTPε or RPTPε wild type or catalytically inactive/substrate-trapping mutants (cytPTPε wt, cytPTPε D245A, RPTPε wt, and RPTPε D302A). Overexpression of ectopic PTPε was efficiently induced upon Dox treatment, although some leakage expression was observed in the absence of Dox, especially in the case of cytPTPε (Fig. 4A). Ectopic overexpression of wild type RPTPε or cytPTPε enhanced the pERK1/2 levels upon EGF stimulation, when compared with the EGF-treated, control empty vector cell line. On the other hand, in the catalytically inactive substrate-trapping mutant cell lines, a decrease in pERK1/2 levels was observed, suggesting a dominant negative effect of these mutations on ERK1/2 activation (Fig. 4A). Similar results were obtained upon stimulation with PMA or serum (data not shown). The functional consequences of PTPε overexpression in MCF-7 cells were also investigated at the cellular level. Because of PTPε leakage expression in our MCF-7 cells, in the functional studies with the PTPε clones we have compared empty vector (EV) clones with PTPε clones, always in the presence of Dox. The capacity of colony formation in soft agar of the cell lines was tested. MCF-7 cells overexpressing wild type RPTPε or cytPTPε displayed an enhanced capacity of colony formation in soft agar, while those overexpressing cytPTPε D245A or RPTPε D302A displayed a decrease in colony formation in soft agar (Fig. 4B). Because PTP substrate-trapping mutants bind their substrates also in the presence of wild-type PTPs (48), our results suggest that endogenous PTPε may play a positive role in cell growth.

Silencing of PTPε Decreased Breast Cancer Cell Viability and Colony Formation in Soft Agar, and Abolished ERK1/2 and AKT Activation upon Stimulation-To investigate further the functional role of PTPε in breast cancer cells, PTPε expression was down-regulated in MCF-7 cells using specific siRNA. Down-regulation of PTPε (siPTPε; ~80% efficiency as measured by qPCR and immunoblot) induced robust cell death when compared with nonspecific silenced cells (siNS), as indicated by the morphology of the cells (Fig. 5A). This was confirmed by flow cytometry, measuring cell death and sub-G0 cell cycle distribution by PI staining, and by immunoblot analysis of apoptotic cell markers. Increased cell death and retention in G0-G1 phase were observed in the siPTPε-silenced cells after 72 h of silencing, when compared with control cells (Fig. 5B). We also detected, upon silencing of PTPε, the presence of markers for cells undergoing apoptosis, such as cleaved caspase-8 and cleaved PARP (Fig. 5B), indicating that silencing of PTPε in MCF-7 cells triggers caspase-mediated apoptotic pathways.
Formation of colonies in soft agar was also decreased upon silencing of PTPε in MCF-7 cells, when compared with nonspecific silenced cells (siNS) (Fig. 5C). Cell proliferation and viability of the cells were measured by MTT assay, using two different siRNAs (siPTPε #1 and siPTPε #2). After silencing for 48 and 72 h, diminished cell viability was observed in siPTPε cells, when compared with control-silenced cells (Fig. 5D). Finally, silencing of PTPε in MDA-MB-231 also caused lower proliferation and viability, when compared with siNS cells (Fig. 5E). Together, these data suggest that PTPε is required for the survival of BCCLs. The effect of PTPε silencing on MCF-7 cell signaling was tested by immunoblotting. To this end, 48 h after transfection of siRNAs, cells were treated for 24 h with PMA (Fig. 6A). Silenced PTPε cells showed diminished pERK1/2 and pAKT levels when compared with PMA-treated, control-silenced cells. Interestingly, upon silencing of PTPε, lower basal levels of total AKT were observed. However, the basal ERK1/2 levels were not affected by the silencing. Together, these results demonstrate that endogenous PTPε exerts in MCF-7 cells a positive regulation of ERK1/2- and AKT-mediated responses to PMA.

DISCUSSION

The pTyr content in cells is tightly regulated by the actions of TKs and PTPs, which play important roles in breast cancer (1,2). In this work, we have analyzed the expression of classical PTPs in the MCF-7 human breast cancer cell line grown in the presence of PMA, a pleiotropic agent that activates PKC and ERK1/2. Using DNA microarray technology and semi-quantitative and quantitative PCR methods, we show that five classical PTP mRNAs were up-regulated in MCF-7 cells treated with PMA, i.e., PTPRE/PTPε, PTPRH/SAP1, PTPRN2/PTP-IA-2β, PTPRT/RPTPρ, and PTPN3/PTPH1. PTPRT and PTPN3 transcripts have also been found to be up-regulated in breast cancer samples (49,50). Only one PTP mRNA, PTPN22/LYP, was down-regulated in this system. Interestingly, alterations in PTPN22 function are associated with the unbalanced regulation of SFKs during autoimmune diseases (51), but the links between PTPN22 and breast cancer remain to be explored. From our screening, we found PTPε of special interest since this PTP had previously been related with the transformation of mouse mammary tumors (4). However, the association of PTPε with human breast cancer has not been studied in detail.

[Figure 3 legend, panels C and D: C, inhibition of EGFR activity decreases the up-regulation of RPTPε in MCF-7 cells upon PMA treatment. Cells were pre-treated with DMSO or with the AG1478 EGFR inhibitor (AG) prior to PMA treatment, and harvested after 48 h of PMA treatment; PTPε, pERK1/2, ERK1/2, pAKT, and AKT levels were analyzed by immunoblot, with quantification of PTPε relative to GAPDH, pERK1/2 relative to total ERK1/2, and pAKT relative to total AKT shown as arbitrary units (AU) in the right panels. In A, B, and C, data represent the mean values ± S.D.; statistically significant results are marked with *, p < 0.005. D, RPTPε tyrosine phosphorylation in MCF-7 cells depends on EGFR activity. Empty vector (EV) MCF-7-Tet-On cells, or MCF-7-Tet-On cells expressing RPTPε wt, were generated, and ectopic expression of PTPε was induced with Dox; cells were pre-treated with DMSO or with AG prior to PMA treatment and harvested after 30 min of PMA treatment, and RPTPε phosphorylated on the Tyr-695 residue and total RPTPε levels were analyzed by immunoblot using specific antibodies. In B, C, and D, GAPDH content is included as a protein loading control.]
We found up-regulation of PTPε mRNA by PMA in several breast cancer cell lines, but PTPε expression was not induced in the LNCaP prostate or HT-29 colon cancer cell lines. Immunoblot analysis showed that the PTPε isoform mainly induced in MCF-7 and MDA-MB-231 cells was the receptor form, RPTPε, together with the two smaller isoforms, p65 and p67. We also found up-regulation of RPTPε by serum and FGF in MCF-7 and MDA-MB-231 cells. However, the patterns of basal and induced expression of PTPε isoforms in other breast cancer cells were variable, suggesting that expression of PTPε is tightly regulated in mammary cells. In mouse mammary tumors initiated by MAPK upstream activators, such as constitutively active forms of HER2 and Ras, high levels of RPTPε were observed (9). In line with these findings, we have found that up-regulation of both RPTPε mRNA and protein, in MCF-7 and MDA-MB-231 cells, required the activation of the EGFR and ERK1/2 pathways. Interestingly, EGFR is up-regulated and activated upon PMA treatment ((52,53); and our results), suggesting a link between EGFR activity and PTPε up-regulation. In this regard, our results using the AG1478 EGFR inhibitor indicate that EGFR activity is required for up-regulation and phosphorylation of RPTPε. Because EGFR is a current target for triple-negative breast cancer therapy, it would be of interest to analyze the status of RPTPε in EGFR-altered breast cancer samples. In mouse mammary tumor cells from transgenic mice overexpressing RPTPε in the mammary tissue, RPTPε dephosphorylates and activates Src, which favors a transformed phenotype. No consistent changes in Src phosphorylation and activation were detected in our studies when RPTPε-overexpressing or -silenced MCF-7 cells were compared with empty vector-expressing or control-silenced MCF-7 cells (data not shown). The possibility that other SFKs could be targets of PTPε in this cell system needs to be explored. In this regard, we have identified Lyn and Fyn as SFKs whose mRNAs are up-regulated in MCF-7 cells upon PMA stimulation (data not shown).

[Figure 4 legend: empty vector (EV) MCF-7-Tet-On cells, or MCF-7-Tet-On cells expressing cytPTPε wild type (wt), cytPTPε D245A, RPTPε wt, or RPTPε D302A, were generated, and ectopic expression of PTPε was induced with Dox. Cells were incubated in the absence or in the presence of EGF for 5 min, and levels of PTPε, pERK1/2, and ERK1/2 were determined by immunoblot; GAPDH expression is included as a loading control. A representative immunoblot is shown out of at least three different experiments from at least two different clones; quantification of pERK1/2 relative to total ERK1/2 levels is shown in arbitrary units (AU) in the right panels from doxycycline-induced cells. B, MCF-7-PTPε cells form more colonies in soft agar: stable cell lines expressing cytPTPε or RPTPε, wt or catalytically inactive substrate-trapping mutants (D245A and D302A), were pre-treated for 24 h with Dox before plating in soft agar for the formation of anchorage-independent colonies; cells were grown for 2 weeks in soft agar, photographs of representative plates were taken, and the number of colonies was quantified using ImageJ 1.40g from triplicate plates. In A and B, data represent the mean values ± S.D.; statistically significant results are marked with *, p < 0.005.]

[Figure 5 legend, continued: in the bottom panel, cells were silenced as above, and the levels of cleaved caspase-8 and cleaved PARP were determined by immunoblot.]
Alternatively, it is possible that the molecular mechanism of PTPε action in the MCF-7 human cell line is distinct from its mechanism in mouse mammary tumor cells. We observed an increase in ERK1/2 activation when catalytically active PTPε was overexpressed, and a decrease in ERK1/2 and AKT activation upon PTPε silencing. This suggests that the expression and phosphatase activity of PTPε are required to maintain the activation of the ERK1/2 and AKT pathways in MCF-7 cells. This also suggests that PTPε could dephosphorylate and activate upstream components in the ERK1/2 and AKT pathways, and that PTPε exerts its function in a positive feedback loop for the activation of both pathways in MCF-7 cells (Fig. 6B). In contrast, in other cell types, such as NIH3T3, HEK293, lymphocytes, or primary hepatocytes, PTPε inhibited the activation of ERK1/2 (31, 32, 34, 53). Interestingly, analogous findings have been reported for PTP1B. PTP1B is a positive regulator of HER2 signaling, favoring the activation of the ERK1/2 and PI3K/AKT pathways in breast cancer cells, and PTP1B knock-out mice display attenuated mammary tumorigenesis and malignancy (54-56). However, in other cell types, PTP1B inhibits the ERK1/2 and PI3K/AKT pathways (57). In support of a positive role for PTPε in survival and transformation of breast cancer cells, we have found that MCF-7 cells overexpressing PTPε displayed enhanced formation of colonies in soft agar, whereas the PTPε substrate-trapping mutant-expressing cells showed decreased colony formation. Silencing of PTPε in MCF-7 cells also decreased the formation of colonies, likely as a result of the decreased survival and viability detected in these cells.

[Figure 5 legend, panels C-E: C, silencing of PTPε in MCF-7 cells decreases colony formation in soft agar and viability. In the left panel, MCF-7 cells were transfected with siNS or with siPTPε, and cells were plated in soft agar for the formation of anchorage-independent colonies; cells were grown for 2 weeks in soft agar, and photographs of representative plates were taken. In the right panel, quantification of the number of colonies, using ImageJ 1.40g, from triplicate plates, is shown; results represent the mean values ± S.D.; *, p < 0.005. D, silencing of PTPε decreases viability in MCF-7 cells: MCF-7 cells were transfected with siNS or with two different PTPε-specific siRNAs (siPTPε#1 and siPTPε#2), and cell viability was measured over 4 days by the MTT assay. E, silencing of PTPε decreases viability in MDA-MB-231 cells: MDA-MB-231 cells were transfected with siNS or siPTPε-specific siRNAs, and cell viability was measured by MTT as above. In B and C, data represent the mean values ± S.D.; statistically significant results are marked with *, p < 0.005.]

[Figure 6 legend: A, silencing of PTPε decreased the activation of ERK1/2 and AKT in PMA-treated MCF-7 cells. Cells were transfected with siNS or with siPTPε; 48 h after transfection, cells were kept untreated or treated with PMA for 24 h before harvesting. PTPε, pERK1/2, ERK1/2, pAKT, and AKT levels were analyzed by immunoblot; a representative immunoblot is shown out of at least three different experiments; GAPDH content is included as a protein loading control. Quantification of pERK1/2 relative to total ERK1/2 levels, and pAKT relative to total AKT levels, is shown as arbitrary units (AU) in the right panels. Data represent the mean values ± S.D.; statistically significant results are marked with *, p < 0.005.]
[Figure 6 legend, continued: B, scheme of the feedback regulatory mechanism of PTPε, and its role in the control of survival and transformation of MCF-7 cells. EGFR up-regulation and activation is required for PTPε induction and activation. Low levels of PTPε diminish ERK1/2 and AKT activation, which reduces cell survival. EGFR and ERK1/2 activation induces PTPε expression, which results in further activation of ERK1/2 and enhancement of survival and transformation. The possibility that PTPε affects cell survival and/or transformation through other pathways is indicated with the lower dashed arrow.]

In this regard, silencing of the closely related PTPα induced apoptosis in estrogen receptor (ER)-negative MDA-MB-231 breast cancer cells, but not in MCF-7 cells (58). This suggests a differential involvement of PTPα and PTPε in the growth control of distinct human breast cancer cell types. We have found up-regulation of RPTPε expression in different human breast cancer cell lines upon different cell-growth conditions. Interestingly, we did not detect expression of cytPTPε in the absence of RPTPε, suggesting the existence of internal regulatory loops of PTPε expression. In MCF-7 and MDA-MB-231 cells, we have identified a positive feedback regulatory loop of RPTPε expression under the control of the EGFR and ERK1/2 signaling pathways. Our functional results in MCF-7 cells and other breast cancer cell lines suggest a positive role for PTPε in cell growth and survival, which makes PTPε a suitable candidate target for breast cancer therapy. The development of specific PTPε inhibitors and/or specific antibodies against the RPTPε extracellular region will be necessary to test the consequences of inhibiting PTPε in breast cancer.
SARS-CoV-2 Survival on Surfaces. Measurements Optimisation for an Enthalpy-Based Assessment of the Risk

The present work, based on the results found in the literature, yields a consistent model of SARS-CoV-2 survival on surfaces as environmental conditions, such as temperature and relative humidity, change simultaneously. The Enthalpy method, which has recently been successfully proposed to investigate the viability of airborne viruses using a holistic approach, is found to allow us to take a reasoned reading of the data available on surfaces in the literature. This leads us to identify the domain of conditions of lowest SARS-CoV-2 viability, in a specific enthalpy range between 50 and 60 kJ/kg dry-air. This range appears well superimposed with the results we previously obtained from analyses of coronaviruses' behaviour in aerosols, and may be helpful in dealing with the spread of infections. To steer future investigations, shortcomings and weaknesses emerging from the assessment of viral measurements usually carried out on surfaces are also discussed in detail. Once it is demonstrated that current laboratory procedures suffer from both high variability and poor standardisation, targeted implementations of standards and improvements of protocols for future investigations are then proposed.

Introduction

Studies on SARS-CoV-2 during the pandemic confirmed that possible transmission routes are direct contact, aerosols, and fomites. Using a holistic approach to investigate how temperature and humidity simultaneously affect the vitality of airborne viruses, since the beginning of the pandemic we have proposed a method [1] based on the thermodynamic property Enthalpy, formerly introduced by the Dutch physicist H. Kamerlingh Onnes as H = U + pV, i.e., the sum of the internal energy U of a system and the product of its pressure p and volume V. As a matter of fact, during a process occurring at constant pressure, the Enthalpy variation represents the overall heat (sensible + latent) exchanged by the system, thus allowing us to define its state with just one parameter, which brings together information about both temperature and water content (humidity). The method can be used to analyse the results of literature or research experiments aimed at investigating the relationship between pathogens and environmental conditions; and, more importantly, to better design the ambient air parameters used to assess the survival patterns. Indeed, when dealing firstly with the coronaviruses' viability in aerosols, with the aim of understanding how to mitigate the virulence of SARS-CoV-2 by maintaining adverse conditions in indoor environments, specific enthalpy h (i.e., Enthalpy per unit mass) was found [1] to be correlated with virus survival. The method was then also successfully used to attempt a relationship between SARS-CoV-2 infectivity and outdoor climatic conditions, leading to an enthalpy-related seasonal risk scale [2] to predict the potential danger of the spread. It is worth noting that, until then, the scientific literature investigating the survival of viruses in air had not been able to provide unambiguous indications when temperature or humidity varied separately [3], sometimes finding trends that contradicted the experimental evidence collected over time.
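The constant-pressure claim follows from a standard first-law argument, reproduced here for clarity rather than taken from the paper:

```latex
% First law for a closed system with expansion work only: dU = \delta Q - p\,dV
\[
\begin{aligned}
H &= U + pV \\
dH &= dU + p\,dV + V\,dp
    = (\delta Q - p\,dV) + p\,dV + V\,dp
    = \delta Q + V\,dp \\
dp &= 0 \;\Rightarrow\; dH = \delta Q
   \quad \text{(sensible + latent heat exchanged at constant pressure)}
\end{aligned}
\]
```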
Assessing whether the domain of higher survival and infectivity identified in aerosols, which falls in an enthalpy range of 50 to 60 kJ/kg dry-air, is superimposable with that on surfaces could allow for a general model of the survival of viruses in the environment. This result could steer future investigations and provide valuable indications for facing the spread of infections. The method can also guide the correct design and setup of HVAC facilities to reduce the risk of indoor infection. Moreover, we can also use the method as an index to predict the risk linked to outdoor climatic variations, thus supporting decision-makers in selecting the most appropriate social actions. However, a number of physical and environmental parameters influencing the survival of viruses within their envelope could interfere with sampling during virus viability measurements in the laboratory environment. In addition to temperature and relative humidity, the pH value, the presence of pollutants, and UV radiation [4] can be decisive. Therefore, testing on surfaces, from which the virus must be removed with its envelope for measurement purposes, entails reducing the disturbance of confounding factors arising during the handling of samples. In this frame, the present work has made it possible not only to prove the general validity of the enthalpy range previously established for aerosols but also, at the same time, to identify relevant criticalities of experiments on surfaces and to suggest improvements in their measurement procedures.

Materials and Methods

For the present work, an extensive search of peer-reviewed publications was performed, covering those which dealt with the survival of SARS-CoV-2 and reported on its half-life by modelling the phenomenon with an exponential decay equation. To do this, the works based on the two-phase decay model, which results from the sum of a fast and a slow exponential decay [5,6], were excluded (a numerical sketch of both conventions is given below). As a matter of fact, directly correlating the published half-lives relating to the two stages of decay to the half-life relating to a single-phase model would theoretically be possible, but reprocessing the original data could add noise to the results produced by the authors. Moreover, not all the types of data needed for our purpose were always available in the publications. It was therefore preferred, reassured by the small number of works based on the two-phase model, to exclude the latter from the analysis. The articles consulted that did not report the half-life as a summary parameter were also not included in the present discussion [7-12]. The survival studies we analysed can be grouped as follows: (i) those that investigated the dependence of virus survival on the type of surface; and (ii) those that investigated it as the temperature or relative humidity varied. The former set generally provides survival data for a fixed setup of environmental parameters as the surface type changes, while the latter set provides information on the interaction between the virus and the environment.
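The following sketch makes the two half-life conventions concrete; the titre values and time points are synthetic placeholders, not data from any of the reviewed studies:

```python
# Single-phase decay and its half-life, on synthetic titre data.
import numpy as np

t = np.array([0.0, 2.0, 4.0, 8.0, 24.0])            # hours
log10_titre = np.array([6.0, 5.7, 5.4, 4.8, 2.9])   # log10 TCID50/mL (synthetic)

# Single-phase model: N(t) = N0 * exp(-k t), i.e. log10 N is linear in t.
slope, intercept = np.polyfit(t, log10_titre, 1)
k = -slope * np.log(10)                  # first-order decay constant, 1/h
half_life = np.log(2) / k                # t_1/2 = ln 2 / k
print(f"k = {k:.3f} 1/h, half-life = {half_life:.2f} h")

# Two-phase model (excluded from the analysis above):
#   N(t) = A * exp(-k_fast * t) + B * exp(-k_slow * t)
# It yields two half-lives, ln2/k_fast and ln2/k_slow, which cannot be
# reduced to a single t_1/2 without reprocessing the original data.
```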
More specifically, considering that differences in the amino acid composition and sequence (and therefore in the three-dimensional structure) influence the behaviour of the protein in response to changes in the surrounding physical and chemical environment, it is worth noting that:
• Relative humidity, RH: is believed to be responsible for greater or lesser stability of viruses with a lipidic envelope [13,14], albeit the presence of specific proteins jointly influences the envelope stability [15,16].
• Temperature: is generally most investigated by holding relative humidity constant and is believed to be responsible for stabilising the lipidic layer at low temperatures and high humidity [17]. Furthermore, low temperatures and low relative humidity favour the survival and transmission of certain influenza viruses [18,19], and are associated with an increased occurrence of respiratory tract infections.
• pH: is believed to be responsible for changes in the survival of enveloped viruses, as it causes alterations in viral glycoproteins that result in a limited ability to infect [20,21]. Furthermore, while viability decreases in saline solutions, it increases significantly in mucus [22].
• UV light: action on viruses is well known in the literature, and sterilisation by UV light is a commonly used process. Although several studies have already demonstrated the effect of UV on SARS-CoV-2 [23-26], controlling the exposure of samples to light during experiments is not always carried out, which becomes an additional confounding factor and a source of data dispersion. This circumstance affects the validity of comparing the results of different experiments.
• Medium: different media, or the variation in their composition, is another confounding factor [26-28]. The protein composition of the medium, for example, alters the ability of the virus to proliferate and survive, as demonstrated by Pastorino et al. [29] (Figure 1). This constitutes a further obstacle when comparing data from different experiments. We can perceive this dependence appropriately by visualising the data from the work of Szpiro et al. [30] (Figure 2) and Matson et al. [31] (Figure 3), the latter expressly given as a function of specific enthalpy.
• Pollutants: the opinion of the scientific literature is now converging on the established role of pollutants in the survival and transmission capacity of viruses [32,33]. However, this role is especially significant when studying the survival of viruses in aerosols; it does not appear to be relevant, as was also the case in the present study, for the survival of SARS-CoV-2 on surfaces.
A critical point that emerges when analysing the work on the survival and viability of SARS-CoV-2 on surfaces is the great variability among the parameters of the laboratory setup. As also highlighted by Bueckert et al. [28], we can see differences in: (i) the type and composition of the culture medium; (ii) its volume; (iii) the strain of the virus under investigation; (iv) the substrate and the method of titre quantification (PFU, TCID50). All these varying factors contribute to generating the noise observed when comparing data from different authors. In this regard, a comparison with the procedure used to measure the survival of the virus in aerosol showed that the latter has greater standardisation and generates less noise, allowing for a better comparison of results. Indeed, the variability of decay data, especially in surface tests, is a criticality that has already been reported in the literature [34].
The present work uses the Enthalpy method to identify infectious risk domains. As in our previous studies [1,2], for each thermodynamic equilibrium state identified by its temperature and relative humidity, the specific enthalpy of moist air h has been calculated as follows [35]:

h = c_a·t + AH·(r + c_v·t) (1)

where c_a and c_v are, respectively, the specific heat at constant pressure of dry air and of water vapour, which, around ambient temperature, can be assumed to be equal to 1.006 kJ/(kg °C) and 1.86 kJ/(kg °C), respectively; t is the temperature in centigrade degrees; AH is the absolute humidity of moist air, in kg_v/kg_dry-air, also called humidity ratio and defined as the ratio of the mass of water vapour to the mass of dry air in the moist air sample; and r is the latent heat of vaporisation of water at its triple point, equal to 2501 kJ/kg. With RH denoting the relative humidity expressed as a fraction, the humidity ratio follows from

AH = 0.622·RH·p_s(t)/(p − RH·p_s(t)) (2)

where p_s(t) is the saturated vapour pressure of water at temperature t in Pascal, and p is the total pressure of moist air, typically the atmospheric pressure, in Pascal. The saturated vapour pressure of water in Pascal can be calculated from the empirical formula derived by Hyland and Wexler for the temperature range of 0 to 200 °C [35,36]:

ln(p_s(T)) = C1/T + C2 + C3·T + C4·T² + C5·T³ + C6·ln(T) (3)

in which C1 = −5.8002206 × 10³, C2 = 1.3914493 × 10⁰, C3 = −4.8640239 × 10⁻², C4 = 4.1764768 × 10⁻⁵, C5 = −1.4452093 × 10⁻⁸, C6 = 6.5459673 × 10⁰, whereas T is the absolute temperature in Kelvin degrees, namely T = t + 273.15. In Figure 4, a map (in terms of psychrometric chart) of the most significant environmental conditions occurring at the ground in terms of indoor or outdoor thermodynamic states of equilibrium is reported.
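A minimal sketch of Eqs. (1)-(3), assuming RH expressed as a fraction and standard atmospheric pressure (the function names are illustrative, not from the paper):

```python
# Specific enthalpy of moist air from temperature and relative humidity.
import math

def saturation_pressure(t_celsius: float) -> float:
    """Hyland-Wexler saturated vapour pressure of water, in Pa (Eq. 3)."""
    T = t_celsius + 273.15
    return math.exp(-5.8002206e3 / T + 1.3914493
                    - 4.8640239e-2 * T + 4.1764768e-5 * T**2
                    - 1.4452093e-8 * T**3 + 6.5459673 * math.log(T))

def specific_enthalpy(t_celsius: float, rh: float, p: float = 101325.0) -> float:
    """Specific enthalpy of moist air, in kJ/kg dry-air (Eqs. 1 and 2)."""
    ps = saturation_pressure(t_celsius)
    ah = 0.622 * rh * ps / (p - rh * ps)   # humidity ratio, kg_v/kg_dry-air
    return 1.006 * t_celsius + ah * (2501.0 + 1.86 * t_celsius)

# Usage: locate a state with respect to the 50-60 kJ/kg dry-air band of
# lowest viability discussed in this paper.
h = specific_enthalpy(20.0, 0.30)          # 20 degC, 30% RH (winter indoor)
print(f"h = {h:.1f} kJ/kg dry-air; in 50-60 band: {50 <= h <= 60}")
```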
In the present work, we performed both linear and polynomial linear regressions; the level of significance was established at p < 0.05; the analysis was carried out in R version 4.2.1; figures were produced using the package ggplot2.

Results

The synthetic parameter here assumed to summarise the response of the virus to experimental conditions is its half-life. For it, reference was made to the values published by the various authors in the reviewed articles. Data on virus survival over time in the examined works were evaluated in terms of Plaque Forming Units (PFU) or the Median Tissue Culture Infectious Dose (TCID50). The two parameters can be related to each other as long as assumptions about cell lines and titration protocols are verified [37]. The use of the half-life parameter allowed us to overcome the problem, making the phenomenon directly comparable without the risk of introducing an additional source of uncertainty. Table 1 summarises all the data collected from the reviewed literature and the calculated specific enthalpy values.
¹ We used only the data from the experiment without BSA.

The available data were analysed, given the related dependence of virus survival measures, according to the medium chosen for experimentation. In addition: (i) from the papers that explored fewer than three different environmental conditions, only the points are reported; (ii) from the papers that explored three (the minimum number of points required) or more environmental conditions, fitting was restricted here to a second-degree polynomial (Figure 5); (iii) eventually, a third-order curve will be attempted later (Figure 7) once all the points from the different authors have been aggregated by medium type.

[Figure 5: comparison of SARS-CoV-2 survival data from different available works [27,29,31,34,38-43], grouped by medium. The grey dashed lines are second-degree polynomial regressions of all available data points for each medium.]

As with other coronaviruses, experimental evidence confirmed the improved surface survival of the virus at low temperatures and, consequently, low specific enthalpies [3,27,28,30,31,42-44]. Indeed, when analysing the data displayed in Figure 5, we can see that almost all the works are in good accordance with the expected behaviour of the virus.
However, some exceptions arise. First, Cappi et al. [45] dispute the possibility of defining a recurrent seasonal pattern for SARS-CoV-2. Yet, the finding about a missing seasonal pattern could be related to the spread of the Omicron variant, and the lack of data regarding this strain does not permit us to explore its survival pattern in depth. Secondly, the data reported by Kratzel et al. [38] seem to indicate a pattern of increasing survival on surfaces moving from winter to summer conditions, which overturns the evidence of the seasonal behaviour of the virus, as noticed by Bueckert et al. [28]: "Anomalously, Kratzel et al. reported that SARS-CoV-2 was more stable on stainless steel at 30 °C than at 4 °C". Furthermore, the results of Matson et al. [31], when placed in the general context, indicate a low sensitivity of survival on surfaces to changing environmental conditions. Although a state of low specific enthalpy was examined, the half-lives associated with this setup remain very low, as if the peak survival was not appreciable. The same authors, comparing the results with a previous study [39], state: "The t1/2 we report here for SARS-CoV-2 in surface nasal mucus and sputum at 21 °C/40% (Table) is considerably shorter than what we found in culture media under similar conditions". They do not mention the low-temperature state because the previous work [39] had analysed only one environmental condition, close to intermediate. Lastly, a different observation can be made about the data published by Biryukov et al. [41]. Although not showing a marked peak, this profile rises toward low enthalpies.
However, a significant peak would still be compatible with the experimental data in this case. Its absence appears to be related simply to the lack of investigation at very low enthalpies. This clearly shows the practical utility of using the specific enthalpy as a physical quantity to steer experimental investigations. The fit of the polynomial models shown in Figure 5 is worth examining. For the blood, we have an R² of approximately 0.96, but the F-test has a p-value greater than 0.05. The fitting to all culture medium test data (dashed grey line) is statistically significant but has an R² of less than 0.40. In contrast, when it is possible to regress the polynomial to the single-author data, we found a significant F-test and an R² greater than 0.84. The regression to nasal mucus data presents a non-significant F-test and an R² of approximately 0.28. The fitting to saliva and simulated saliva data shows an F-test with a p-value of 0.051 but an R² of about 0.37. Regressions for semen and sputum show F-tests with a p-value of less than 0.05, or slightly higher, respectively, and an R² of approximately 1. The fitting to tears test data gives similar results to that for blood. The most significant analysis appears to be the one regarding the culture medium data, which confirms that there is a lot of noise when considering data from different sources. In contrast, the problem disappears as soon as only the tests performed by the same author are considered. A deeper insight into the SARS-CoV-2 data in culture medium may then be meaningful. In Figure 6, we can see that the data from experiments on banknotes show an atypical pattern. The polynomial regression model is non-significant on the F-test and has an R² of approximately 0.21. Focussing on these data reveals that the measurements of Harbourt et al. [34], which report low half-lives at low temperatures when placed in the general context, force the fitting by inverting the concavity of the parabola. Here again, we can detect anomalous data compared to the typical behaviour of SARS-CoV-2 when varying environmental parameters. After removing the anomalous data points that did not capture the behaviour of the virus at low specific enthalpies, as well as those produced by Pastorino et al. [29] with a high-protein medium (included earlier to highlight the boosting effect such media have on virus survival), we can analyse all the available data grouped by medium. We can fit a third-order polynomial, which can capture local minima and maxima, as shown in Figure 7. The third-order regressions for blood, semen, sputum, and tears cannot be statistically evaluated due to the limited number of points available. However, it is possible to appreciate the visualisation to understand whether the data agree with the general model. On the other hand, the regression to culture medium data is now statistically significant in general (F-test p-value << 0.001), with an R² of approximately 0.57. The nasal mucus regression can still be considered significant, given the p-value of 0.053 and an R² of approximately 0.96. The regression to saliva data is also statistically significant (F-test p-value 0.001), with an R² of approximately 0.92. These analyses confirm the results of the culture medium tests as those with the highest variability. This variability can be explained by the interference of the various substances of which the culture media are composed, but also by the higher number of available data sources.
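For illustration, the third-order fit and its overall F-test can be sketched as follows; the half-life and enthalpy values below are synthetic placeholders, not the paper's dataset, and the original analysis was done in R 4.2.1 (Python is used here for consistency with the other sketches):

```python
# Cubic polynomial regression of half-life vs. specific enthalpy,
# with coefficient of determination and overall F-test.
import numpy as np
from scipy import stats

h = np.array([15., 25., 35., 45., 55., 65., 80., 95.])    # kJ/kg dry-air
t_half = np.array([60., 40., 22., 12., 6., 8., 5., 2.])   # hours (synthetic)

coeffs = np.polyfit(h, t_half, 3)            # cubic: captures local min/max
pred = np.polyval(coeffs, h)

ss_res = np.sum((t_half - pred) ** 2)
ss_tot = np.sum((t_half - t_half.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

k, n = 3, len(h)                             # regressors (excl. intercept), points
f_stat = (r2 / k) / ((1.0 - r2) / (n - k - 1))
p_value = stats.f.sf(f_stat, k, n - k - 1)   # overall F-test of the regression
print(f"R^2 = {r2:.2f}, F = {f_stat:.1f}, p = {p_value:.3f}")
```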
The borderline significance of the nasal mucus model may be explained by the smaller set of available data. The results of the regression analyses performed are shown in Table 2.
p-Value: F-statistic p-value; R²: coefficient of determination; nd: not done (not enough data to perform a regression); *: value greater than the level of significance (0.05); **: value slightly higher than the established significance level but acceptable in the context of the analysis.

Discussion

At first, when analysing the survival data of viruses on surfaces, it must be emphasised that the laboratory procedures for obtaining the measurements suffer from high variability and poor standardisation. The problem is even more evident when compared with those used to measure aerosol survival. Taking note of the above evidence on the influence of the media used to carry out surface virus survival tests, the seasonal behaviour of SARS-CoV-2 also appears rather pronounced (Figure 5). This happens not only from a qualitative point of view but also from a quantitative one. As we can observe from the reprocessing of the data published by Kwon et al. [27,42] in Figures 8 and 9, the dispersion of the measuring points does indeed increase as the specific enthalpy decreases. The dependence of the survival on the surface and fluid in which inoculation takes place is most pronounced at low enthalpies, that is, the region of best virus survival (Figure 8). In that range of specific enthalpies, the effects of different surfaces and media on SARS-CoV-2 survival should be investigated the most.

However, anomalous points are not lacking. The results of Matson et al. [31], although they confirm a general tendency toward a longer half-life under winter conditions when compared to other works, do not capture the intensity of increased virus survival, for reasons probably linked to the laboratory setup. In the context of all the measurements analysed, these values are essentially anomalous with respect to the expected behaviour. This could be explained by the action of light, which can significantly reduce the survival of the virus in the local environment.
However, since no laboratory control of light was mentioned in this work, this cannot be ruled out as a cause of the more limited virus survival. Regarding the data published by Harbourt et al. [34], it should be noted that this is the only work that subjected the samples to storage at −80 °C before quantifying them. This different procedure, and possibly again the action of light, the control of which in their laboratory environment is not specified, may explain the anomaly found. Lastly, the data published by Kratzel et al. [38], in which the Bayesian calculation method adopted constitutes a further element of heterogeneity with respect to the other works, when analysed in the general context, are in contrast with all the different experiments and do not reveal the expected survival peak at low enthalpies. Once again, no explicit reference was made to the control of illumination during the experiments, so the effect of light could help to explain the absence of the peak.

In the following, a discussion of the results grouped by medium is made.

Culture Medium

As expected, we can see in Figure 7 that the highest data dispersion affects the culture medium results.
This dispersion appears to be due only in part to the greater number of data and, thus, to different laboratory measurement modalities, but mainly to the different compositions of the media used. Indeed, the composition of the culture media is not indifferent, as demonstrated by Pastorino et al. [29], and it constitutes an extensive source of noise. The same figure shows how the Enthalpy method allows us to highlight the exact behaviour of the virus due to variations in environmental conditions. In fact, at low specific enthalpies, the absolute maximum of survival always occurs. At enthalpies of around 50-55 kJ/kg dry-air, we find a very pronounced local minimum; then the survival ability of SARS-CoV-2 tends to a slight local maximum or plateau, eventually reaching its absolute minimum at high enthalpies. However, it should be noted that negative enthalpies (values below zero according to the calculation convention), which are representative outdoors of winters in very cold climates and indoors of typical cold-chain situations [46,47], have not been investigated. There are no specific studies on the effects, irreversible or temporary, of such extreme conditions on the virus, although they probably even occurred during storage procedures.

Blood

In the case of blood used as a medium (see Figure 7), it can be seen that the aforementioned local minimum is shifted slightly to the left (namely toward lower enthalpies). The possibility that this could be a peculiar behaviour of blood cannot be ruled out, but the data available are very few and provided by only one author. Indeed, the small dataset may have influenced our result. Four is, in fact, the minimum number of points required to regress a third-order polynomial. Such a limited number does not allow the unavoidable errors to be compensated for by repeated measurements and could explain the slight deviation of the minimum from the general model. In any case, it should be noted that the regression trend remains consistent with the general model. The high value of SARS-CoV-2 survival in this medium is worth noting.

Tears

The data collected using tears as a medium show (Figure 7) a local minimum shifted to the left. In this case, the saline pH of the tears could have interfered with the viability of the virus. The effect of pH on the survival of viruses, as well as of the set of antimicrobial molecules present in these secretions, is actually known from the literature [48]. Again, the small number of points available to carry out the regression could explain this slight deviation. However, the general trend of the model is highlighted again, as in the case of blood.

Sputum

The same considerations regarding the number of data can be made when analysing survival in sputum (Figure 7). Once again, the deviation is slight and not in contrast with the general pattern. These results agree with those previously obtained by Spena et al. [1,2] and seem to trace a consistent pattern of SARS-CoV-2 behaviour, both in aerosols and on surfaces.

Saliva

The saliva facet in the plot (Figure 7) gathers data collected both in saliva and in simulated saliva. The regression performed showed a solid significance and a very high R². The pattern observed is in accordance with the model hypothesised, where it is possible to note the peak at low enthalpy, the minimum in the aforementioned range, and the tendency toward virus death at higher values.
It is worth noting that the maximum value of survival is similar to the others observed in the different media examined, with the exception already stated for blood.

Nasal Mucus

The same considerations made with regard to saliva are valid for the results of the model applied to the survival data in nasal mucus. The survival pattern is confirmed, and the statistical parameters reflect the smaller availability of data points, with a borderline significance level but a very high coefficient of determination. It should be noted that the value of the peak (at low enthalpies) is less prominent than for other media, suggesting an interfering role of the mucus [49,50].

Semen

The survival pattern of the virus in semen is in good accordance with the model (Figure 7), and it is possible to observe the peak at low enthalpy and the minimum in the confirmed range between 50 and 60 kJ/kg dry-air. Again, the data availability is critical, so the coefficient of determination should be interpreted with caution. Nonetheless, the statistical model showed a good level of significance.

Very Low Enthalpies

The dependence of survival on low-enthalpy environmental conditions (cold winters or cold chains) remains to be investigated [46,47]. Data analysis on the evolution of the pandemic confirmed that particularly severe climatic conditions generally hindered the spread of the virus. However, when moving from the analysis of contagions to the analysis of virus survival in a laboratory, consideration must be given to the possibility that very low temperatures could implement a kind of "conservation" of the virus. When the sample is moved from the climatic chamber to the station where the virus titration is performed, a recovery of virus activity may occur due to the temperature rise. Consideration must be given to the possibility that the absence of virus spread in very cold winters may be related more to a passive virus stasis, or to other confounding factors, than to an actual reduction in virus survival.

Criticism of Measurements

The major limitation of this study consists, as already mentioned, not so much in the small availability of data as in the nonuniform laboratory procedures, in which many confounding variables were involved. Indeed, the freezing and thawing of microbiological samples is involved in many applications in the life and medical sciences. Numerous studies and protocols are dedicated to these procedures, aiming to (i) make the process increasingly efficient, (ii) preserve the sample as well as possible, and (iii) minimise the loss of microorganism viability. Furthermore, numerous in vitro infection studies are conducted under optimal and controlled conditions for host cell growth (for human pathogenic microorganisms, around 37 °C). In contrast, a literature analysis shows that it is more complex to find homogeneous and comparable data, particularly with regard to the effect of low temperatures on the viability and infectivity of pathogenic microorganisms. The lack of standards mentioned above also precludes the development of calculation models that are more reliable and consistent with the virus survival assessment.
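Before turning to possible remedies, it may help to make concrete the kind of calculation model at stake. The following minimal Python sketch fits the single-phase (first-order) decay model used by most of the works discussed and derives a half-life from it; the titre values are invented purely for illustration and do not come from any of the cited experiments.

```python
import numpy as np

# Hypothetical titre measurements (infectious units) at increasing times;
# these numbers are invented for illustration only.
t_hours = np.array([0.0, 4.0, 8.0, 24.0, 48.0])
titre = np.array([5.0e4, 2.6e4, 1.4e4, 9.0e2, 3.0e1])

# Single-phase (first-order) decay: N(t) = N0 * exp(-k * t),
# so ln N(t) is linear in t and k is minus the slope of the fit.
slope, intercept = np.polyfit(t_hours, np.log(titre), 1)
k = -slope                       # decay constant [1/h]
half_life = np.log(2.0) / k      # half-life [h]

print(f"k = {k:.3f} 1/h, half-life = {half_life:.1f} h")
```

A biphasic model would instead fit the sum of two such exponentials, which is why the two descriptions can yield half-lives that are not directly comparable.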
To enrich the availability of reliable experimental data for studying the effect of environmental conditions on virus infection ability, it then appears necessary to adopt suitable protocols in the future, aimed at ensuring the control of the temperature, relative humidity, and pressure parameters for human pathogenic microorganisms transmitted directly or indirectly, not only through bioaerosols but also on surfaces. In particular, it would be desirable to attain and share a universal standardised procedure that (i) regulates the composition of the culture medium; (ii) solves the problem of unintended light interference; and (iii) defines a single unit for the quantification of the virus on samples of different natures. It is also essential to agree on the model best suited to describe the phenomena under consideration. Although models of biphasic decay are sometimes more appropriate, most works are based on the single-phase model. Therefore, to enhance opportunities for comparison of literature results, it seems convenient to report half-lives for both models even when a biphasic model is considered more suitable. This issue was also recently raised by Gracely [51] in a letter commenting on a paper by Hirose et al. [52], highlighting the difficulty the authors may have in describing the decay of the observed phenomenon when the monophasic model is not appropriate.

Conclusions

First of all, when dealing with the survival of coronaviruses on surfaces, it must be highlighted that the laboratory procedures for obtaining the measurements suffer from (i) high variability and (ii) poor standardisation. The problem is even more evident when compared with the procedures used to measure survival in aerosols. Indeed, the major limitation of the present study lies in the source of the data: since we did not have the opportunity to collect data through experiments specifically designed around SARS-CoV-2, we had to refer to experimental evidence found in previously peer-reviewed publications. This fact exposed our calculations to the aforementioned uncertainties, which intrinsically occur when comparing data from different sources, but it also, fortunately, allowed us to disclose this criticality. Despite the difficulties mentioned above, the work carried out demonstrates that the detection of a minimum viability of the SARS-CoV-2 virus on surfaces, and thus of its probability of infecting susceptible exposed hosts, is possible, and that it occurs in an enthalpy range between 50 and 60 kJ/kg dry-air. This range overlaps well with the results we previously obtained from analyses in aerosols. The present results also confirm that, while it remains impossible to clearly correlate the behaviour of SARS-CoV-2 with the variation in temperature or relative humidity independently, this becomes possible if the thermodynamic potential specific enthalpy is taken into account as an explanatory variable summarising the state of the environment in which virus survival is investigated. Additionally, the evidence shows that the role of different surfaces becomes discriminating only in the case of low enthalpies, i.e., in the range between 10 and 40 kJ/kg dry-air, when the survival of SARS-CoV-2 generally increases; this means that it is mainly under these conditions that the claimed difficulties with experimental procedures have to be addressed. More specifically, this refers to: (i) the medium used; (ii) the possible exposure of the samples to light; (iii) the pH of the experimental environment; and (iv) the pressure at which the measurements are taken.
These factors must then be kept under strict control, and their values must be reported in close association with the results; otherwise, the possibility of processing data from different sources will be lost. Moreover, in order to improve comparisons of results from the literature, there is a general need for survival data standardisation. This applies first of all quantitatively, through a conventional unit for virus quantification in samples, but also qualitatively: most works are based on the single-phase decay model, even though models of biphasic decay are sometimes more appropriate; the half-lives of both models should therefore be reported. Lastly, as a further criticality in the case of surface studies of virus survival, the risk that a single investigation explores a small number of experimental points over a narrow range of specific enthalpy must be avoided. Measurements should investigate the survival of the virus by simultaneously varying both temperature and relative humidity over ranges of respective values that are sufficiently wide and consistent with the expected pattern of behaviour. All of the above appears highly suitable and decisive, especially because, based on the evidence obtained, the Enthalpy approach is confirmed to be a simple, powerful, and robust method, as it can be extended to a broad context of different mechanisms of viral infection propagation.
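As a practical complement, the sketch below shows how the specific enthalpy coordinate used throughout this work can be computed from temperature and relative humidity. It assumes standard psychrometric relations (a Magnus-type saturation-pressure formula and sea-level pressure) and is our illustrative reconstruction, not the computation code used in this or the cited studies.

```python
import math

def specific_enthalpy(t_c: float, rh: float, p_pa: float = 101325.0) -> float:
    """Specific enthalpy of moist air [kJ/kg dry air].

    t_c: dry-bulb temperature [deg C]; rh: relative humidity in [0, 1].
    """
    # Magnus-type saturation vapour pressure over water [Pa]
    p_ws = 611.21 * math.exp(17.502 * t_c / (240.97 + t_c))
    p_w = rh * p_ws                          # partial vapour pressure [Pa]
    w = 0.622 * p_w / (p_pa - p_w)           # humidity ratio [kg/kg dry air]
    # h = cp_air * t + w * (latent heat of vaporisation + cp_vapour * t)
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)

# Example: 20 deg C at 50% relative humidity gives about 38.5 kJ/kg dry air,
# near the upper edge of the 10-40 kJ/kg range where survival increases.
print(round(specific_enthalpy(20.0, 0.50), 1))
```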
Experimental demonstration of dynamic thermal regulation using vanadium dioxide thin films

We present an experimental demonstration of passive, dynamic thermal regulation in a solid-state system with temperature-dependent thermal emissivity switching. We achieve this effect using a multilayered device, comprised of a vanadium dioxide (VO2) thin film on a silicon substrate with a gold back reflector. We experimentally characterize the optical properties of the VO2 film and use the results to optimize the device design. Using a calibrated, transient calorimetry experiment, we directly measure the temperature fluctuations arising from a time-varying heat load. Under laboratory conditions, we find that the device regulates temperature better than a constant-emissivity sample. We use the experimental results to validate our thermal model, which can be used to predict device performance under the conditions of outer space. In this limit, thermal fluctuations are halved with reference to a constant-emissivity sample.

Previous work measured the infrared optical constants of VO2 thin films grown using pulsed laser deposition (PLD) [19,26], sputtering, and sol-gel [27]. It was found that the growth technique influences the optical properties due to the quality of the thin crystalline films [27]. We used atomic layer deposition (ALD) to deposit a VO2 thin film on a Si substrate. Compared to traditional growth methods, ALD allows deposition of highly conformal VO2 films over large areas [28]. The optical constants at temperatures above and below the VO2 phase transition were measured using spectroscopic ellipsometry. The deposition process and measurement method are described in detail in the Methods section, which also lists the ellipsometric fitting parameters. Figure 1 shows the real (n, solid lines) and imaginary (k, dashed lines) parts of the complex refractive index of the insulating (blue line) and metallic (red line) VO2 states. We observe that both n and k change significantly between the two states. The higher value of k in the metallic state indicates an increase in loss over the entire 2 to 30 μm range.

Design of optimized devices for homeostasis

Using the measured optical constants of VO2 shown in Fig. 1, we use numerical electromagnetic simulations to optimize our homeostasis device. The figure of merit P_rad is defined as the difference in normalized thermal radiation power between the metallic and insulating states of the device. This quantity is calculated using a full-wave electromagnetic solver, as described in the Methods. Figure 2 shows that for an isolated VO2 thin film, P_rad is positive for thicknesses less than ~6 μm. For experimental convenience, we add a silicon handle layer with a thickness of 200 μm (red line in Fig. 2). P_rad is again positive for thicknesses below ~3 μm, with a peak value of 0.22 at a thickness of 800 nm. Adding a gold back reflector to the VO2/Si stack enhances the peak value to 0.3 at a smaller VO2 thickness of 75 nm. Smaller thicknesses are highly desirable for ALD fabrication. The use of a gold back reflector also prevents any background thermal radiation from being transmitted through the device, a useful property for thermal homeostasis. Figure 3 shows a comparison between the VO2/Si (Fig. 3a) and VO2/Si/Au (Fig. 3b) systems. Smoothed lines are superimposed as a guide to the eye.
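To make the figure of merit concrete, the following minimal sketch computes the normalized thermal radiation power of each state as the blackbody-weighted average of its emissivity spectrum and takes the difference. The flat spectra are placeholders standing in for the simulated spectra, and the function names are ours, not those of the electromagnetic solver.

```python
import numpy as np

H = 6.626e-34   # Planck constant [J s]
C = 2.998e8     # speed of light [m/s]
KB = 1.381e-23  # Boltzmann constant [J/K]

def planck_radiance(lam_m, temp_k):
    """Spectral blackbody radiance I_BB(lambda, T) [W / (m^2 sr m)]."""
    return (2 * H * C**2 / lam_m**5) / np.expm1(H * C / (lam_m * KB * temp_k))

def normalized_power(lam_um, emissivity, temp_k=300.0):
    """Blackbody-weighted average of an emissivity spectrum."""
    lam_m = lam_um * 1e-6
    i_bb = planck_radiance(lam_m, temp_k)
    return np.trapz(emissivity * i_bb, lam_m) / np.trapz(i_bb, lam_m)

lam_um = np.linspace(2.0, 30.0, 500)      # the 2-30 um band used in the paper
eps_ins = np.full_like(lam_um, 0.22)      # placeholder insulating-state spectrum
eps_met = np.full_like(lam_um, 0.46)      # placeholder metallic-state spectrum

p_rad = normalized_power(lam_um, eps_met) - normalized_power(lam_um, eps_ins)
print(f"P_rad = {p_rad:.2f}")             # 0.24 exactly for these flat spectra
```

With wavelength-dependent spectra from a real simulation, the blackbody weighting makes emissivity differences near the thermal emission peak count most, which is why the optimization favors broadband switching in the 8-13 μm region.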
Both structures have higher broadband emissivity in the metallic state of VO2 than in the insulating state (Fig. 3c,d), owing to the increase in optical losses in this state (Fig. 1). A key difference between the two structures, however, is their transmissivity. The VO2/Si/Au structure has zero transmissivity in both the metallic and insulating states. When it is used to cover an external body (e.g. an experimental sample holder, or an object whose temperature we wish to regulate), the total thermal emission depends only on the emissivity of the VO2/Si/Au stack, not that of the external body. We thus use a gold back reflector in experiments.

Measurement of infrared device properties

We fabricated a VO2/Si/Au device with a VO2 thickness of 62 nm, close to the optimal value calculated in Fig. 2. We measured the infrared absorptivity as a function of wavelength for both the insulating and metallic states using FTIR. Figure 4 compares the experimental measurement to smoothed simulation results (see "Methods" section for details). Simulations and FTIR measurements in Fig. 4a,b show similar broadband emissivity switching: the emissivity is higher in the metallic state than in the insulating state. The integrated difference in radiation power P_rad calculated from the spectra is equal to 0.29 (simulated spectra) and 0.22 (experimental spectra) (Fig. 4c). The results suggest that the fabricated sample should emit significantly more heat in the hot (metallic) state, a necessary feature for thermal homeostasis.

Simulations capture most of the measured FTIR spectral features. An offset between the simulated and measured spectra in the insulating state of Fig. 4a is observed at wavelengths below 15 μm. This is largely due to a difference between the optical constants of silicon in experiment and simulation. We verified this directly by taking FTIR measurements of a witness sample (Si/Au), which showed a prominent peak at 9 µm due to oxygen impurities [29] and an increased emissivity over the entire wavelength range, relative to simulations based on literature constants (taken from Ref. [30]).

Experimental setup and calibration

We designed an experiment to directly test the temperature-regulation capabilities of our device. A photograph and a schematic of the experiment are shown in Fig. 5a,b, respectively. Device samples are mounted on either side of a ceramic heater containing an embedded thermocouple. The entire structure is suspended in a vacuum chamber, which has an interior black surface to minimize infrared reflection. The chamber is submerged in an ice bath at an ambient temperature of T_0 = 0.5 °C. The heat load on the sample is varied by changing the input power to the heater, and the resulting temperature is recorded using the thermocouple embedded in the heater.

As shown in the bottom portion of Fig. 5b, the system loses heat through two mechanisms: (1) radiation from the sample, which is the quantity of interest, and (2) parasitic losses that include both radiation from the perimeter of the sample and conduction to the wire leads. To calibrate the parasitic loss, we use gold mirrors with low, constant emissivity (ε ≈ 0.05) and measure the temperature rise as a function of applied heat load (yellow circles in Fig. 5c). The experiment is conducted over a complete heating and cooling cycle while recording the temperature at each steady state. The temperature is first increased in discrete steps, and then decreased again.
At each temperature increment, we allow 45 min for the system to reach steady state. The temperature-dependent parasitic heat loss function Q_loss(T) is determined from a linear fit to this characteristic (see Fig. 5d). For each value of applied heat load Q, we subtracted the calculated net radiative loss of gold to obtain Q_loss(T), plotted in Fig. 5d (see "Methods" section).

To probe the dynamic range of our measurement system, we also measure a diffuse-black sample with a high total emissivity. Results are shown in Fig. 5c. The data curve for the diffuse-black sample is well separated from the curve for the mirror-gold sample. These two measurements, at the extremes of high and low emissivity, define an operational window for our subsequent, variable-emissivity measurements.

Measurement of device emissivity

Next, we measure the temperature rise as a function of applied heat load for our VO2 devices. The results are shown in Fig. 6a. The heating and cooling curves trace a hysteresis window around the VO2 phase transition. Inside this window, for constant applied heat load, there is a temperature difference as large as ~5 °C between the heating and cooling curves. The convergence of the heating and cooling curves above and below the hysteresis window suggests that there is negligible temperature drift in the experimental setup. The location of the phase transition can be more readily observed by plotting the derivative of the heat load-temperature curve (Fig. 6a, inset). During heating, the response dQ/dT peaks in the red, shaded region, indicating the transition from insulator to metal at ~80 °C. Upon cooling, dQ/dT peaks at a lower temperature, ~60 °C, indicating the transition back to the insulating state. We ran this measurement over multiple complete heating/cooling cycles (circles and diamonds) to ensure that there was minimal run-to-run variation in thermal response.

We can use the data from Fig. 6a along with the calibration curve in Fig. 5d to determine the radiative heat flux emitted by the VO2 sample, Q″_rad(T). In steady state, the net heat input to the system is equal to the output:

Q = A · Q″_rad(T) + Q_loss(T).    (1)

The radiative heat flux is shown in Fig. 6b. The graph shows that along the heating curve, the radiative heat flux increases sharply near the upper edge of the hysteresis loop. This corresponds to an increase in emissivity. Along the cooling curve, the radiative flux drops at the lower edge of the loop, corresponding to a decrease in emissivity. The inset of Fig. 6b replots the data to show emissivity as a function of temperature. It can be seen that ε_ins = 0.22 in the insulating phase and ε_met = 0.46 in the metallic phase. These values are consistent with those measured using FTIR microscopy (ε_ins = 0.22 and ε_met = 0.44).

Dynamic thermal regulation

To demonstrate dynamic thermal regulation, we apply a time-varying heat load and measure the resulting temperature as a function of time. The input power is plotted in Fig. 7a and has the form of a square wave with power levels of 0.22 and 0.59 W. For reference, we first measure a structure with near temperature-independent emissivity, with an alumina top layer (Al2O3/Si/Au with corresponding thicknesses of 480 nm/200 µm/60 nm). The experimental, time-dependent temperature data is shown by the red, dotted line in Fig. 7b. In response to an increase in input power, the measured temperature rises and then plateaus. When the input power is decreased, the temperature drops again and stabilizes at a lower value.
The total range of temperature fluctuation measured is 56 °C (red arrows). The measured results can be accurately reproduced using a numerical heat transfer model given by

ρ C L_C (dT/dt) = Q(t)/A − ε(T) σ (T⁴ − T_0⁴) − Q_loss(T)/A,    (4)

where ρ is the effective material density (kg/m³), C is the effective heat capacity (J/(K·kg)), L_C is the characteristic length scale of the system (m), and T_0 = 273.6 K is the ambient temperature. The numerical solution to Eq. (4) is shown by the red, solid line in Fig. 7b. Physically, the response time of the device is determined by the effective heat capacity, material density, diffusion length, and the emissivity of the system. The simulation shows an excellent match to experiment for a fitted value of ρCL_C = 5,500 J/(m²·K).

We then measure the performance of our variable-emissivity VO2 device. The experimental data is shown by the blue, dotted line in Fig. 7b. In comparison to the constant-emissivity Al2O3 device, the total temperature fluctuations are reduced to a value of 50 °C. The data can again be well modeled by Eq. (4), as shown in Fig. 7.

Discussion

In space applications, under ideal conditions, radiative loss is the only heat dissipation mechanism; parasitic losses vanish. We can use our thermal model to predict the performance of our VO2 device under these conditions. In the absence of parasitic losses, thermal self-regulation of the device is far more effective than under laboratory conditions. We choose input powers of 0.037 W and 0.146 W to ensure that the radiative heat loss from the sample is the same. In this case, the temperature fluctuations in the VO2 device are again around 50 °C, as in the experiment of Fig. 7. However, the fluctuations for the constant-emissivity Al2O3 sample are now 108 °C. This increase is due to the absence of the parasitic loss pathway. The VO2 sample can therefore self-regulate its own temperature far better than the constant-emissivity sample.

In fact, the magnitude of fluctuations in the VO2 device can be predicted directly from Fig. 6b. For a device area of 3.3 cm², the power levels in Fig. 8 correspond to 112 W/m² and 442 W/m², respectively. In the absence of parasitic loss, the steady-state radiative heat flux is equal to the input power per unit area. From Fig. 6b, a value of 112 W/m² corresponds to a temperature of ~39 °C, while a value of 442 W/m² corresponds to a temperature of ~89 °C. These values correspond well with those obtained in the simulation of Fig. 8b. For the constant-emissivity sample, the temperature fluctuations are much higher. Approximating the Al2O3 sample with a constant emissivity of 0.35, the lower power level corresponds to a temperature of 6 °C, whereas the upper power level corresponds to a temperature of 114 °C, lying well outside the edges of Fig. 6b. This corresponds to the larger fluctuation of 108 °C seen in Fig. 8b.

If we use applied power levels that result in a full transition of the VO2 between the metallic and insulating states (i.e. P_low < 130 W/m² and P_high > 400 W/m²), the temperature fluctuations are at least as wide as the hysteresis loop (Fig. 6). For our experimental device, this is close to 20 °C. Further improvement in material quality can bring this number down substantially, as observed in the literature [31-33]. Another route to performance improvement is to incorporate microstructured designs [6,10,11] to increase the total difference in radiated power between the metallic and insulating states.
In this case, for a fixed value of temperature fluctuation, the device is expected to accommodate a larger variation in input heat load. The experimental and thermal modeling methods form a general platform for further investigation of dynamic thermal regulation in variable-emissivity systems.

Conclusion

We have directly demonstrated dynamic, passive thermal regulation via experiments on a VO2 phase-change device. Our device is designed to optimize the increase in radiated power at the phase transition. This trend allows the sample to "self-regulate" its temperature in response to a time-varying input heat load. Under laboratory conditions, the VO2 device shows a reduction in thermal fluctuations relative to a constant-emissivity device. Using a thermal model, we can extrapolate the device performance to conditions typical of outer space, where radiation is the only heat loss pathway and parasitic losses vanish. Our results demonstrate that emissivity switching can reduce the thermal fluctuations by up to a factor of 2.

Recent investigations [34,35] have shown flexibility in tuning the phase-transition temperature of VO2 from 28 to 63 °C through the addition of dopant atoms or the alloying of films. This suggests that various devices could be designed to regulate temperature around fixed values in this range. In terms of ultimate applications, the work presented here provides a key step toward understanding a larger trade space, one that incorporates not only material selection but also system-level concerns such as payload target temperature and solar heat load.

Methods

Simulations. The thermal emissivity spectrum ε(λ, T) is calculated using the ISU-TMM package [36,37], an implementation of the plane-wave-based transfer matrix method. The simulation calculates absorptivity at normal incidence, where absorptivity is equal to emissivity by Kirchhoff's law. The wavelength range shown is chosen to be 2-30 μm; outside this range, the blackbody radiance at room temperature is negligible. The normalized thermal radiation power was calculated as

P(T) = ∫ ε(λ, T) I_BB(λ, T) dλ / ∫ I_BB(λ, T) dλ,

where I_BB(λ, T) is the blackbody radiance and ε(λ, T) is the emissivity spectrum. A Matlab Savitzky-Golay filter with an order of 3 and a frame length of 41 was used to smooth the simulated spectra in Figs. 3c-e and 4a,b. The optical constants for VO2 are taken from the experimental data of Fig. 1; the constants for Si and Au are taken from the literature [30].

Fabrication. Amorphous VO2 films (60-120 nm) were deposited on 12.5 mm × 12.5 mm double-side polished, 200 μm thick Si wafers by atomic layer deposition (ALD) in a Veeco Savannah 200 reactor at 150 °C using tetrakis(ethylmethylamido)vanadium and ozone precursors with optimized pulse/purge times of 0.03 s/30 s and 0.075 s/30 s, respectively. Under these conditions, the saturated growth rate was 1 Å/cycle. All samples of a particular thickness were deposited simultaneously to avoid any run-to-run variation. The thickness was determined using spectroscopic ellipsometry and a general oscillator model previously calibrated with TEM. As-deposited amorphous ALD films underwent an ex-situ anneal at 475 °C in 6 × 10⁻⁵ Torr of oxygen for 3-4 h, depending on the thickness of the film, to facilitate the crystallinity required to achieve sharp metal-to-insulator transitions. Raman spectra were collected at room temperature to verify the presence of crystalline, monoclinic VO2 films for all samples after annealing.

Ellipsometer.
A VASE JA Woollam spectroscopic ellipsometer was used to characterize the atomic-layer-deposited VO2 thin films. Ellipsometry measures the complex reflectance ratio of the p and s polarization components, which may be parametrized by the amplitude component Ψ and the phase difference Δ. Ψ and Δ values were collected at 10 different angles between 55° and 75°. The optical constants were fitted using a series of Lorentzian oscillators in the insulating state, with the addition of a Drude oscillator in the metallic state to account for free electrons, using the IR-VASE software.

Fourier transform infrared (FTIR) spectroscopy. A Fourier transform infrared (FTIR) spectrometer was used to characterize the reflectance and transmittance of the 62 nm VO2/Si/Au multilayer device. We used a Bruker (Hyperion 3000) FTIR attached to a Vertex 70 microscope. A 0.5 cm⁻¹ resolution and an integration time of 1 s were used. Each measurement was averaged over 5 scans. A ceramic heater (THORLABS HT19R) was used to heat the sample, and the temperature was incrementally varied between 25 and 85 °C. For each temperature, the sample was allowed to thermally equilibrate and the interferogram signal was maximized before a measurement was collected.

Thermal experiment. We use a vacuum chamber with black-painted interior walls submerged in an ice-water bath to establish a cold, dark, low-pressure ambient environment (see Fig. 5a,b). A ceramic resistive heater (Watlow Ultramic, 11.5 × 11.5 × 3 mm³, resistance 12 Ω) containing an embedded K-type thermocouple is suspended in the center of the chamber. Two nominally identical samples (each with a surface area of ~1.65 cm²) are affixed with vacuum grease to either side of the heater to ensure robust thermal contact. The stiff bundle of wires connected to the heater is coiled to suspend the heater in the center of the vacuum chamber. This configuration thermally isolates the heater from the vacuum chamber to minimize parasitic heat losses and promotes isothermal conditions between the heater and the sample. Vacuum is pulled and the chamber is submerged in an ice-water bath until the interior temperature reaches a stable T_0 = 0.5 °C, which is maintained throughout the duration of the experiment.

Once the system is at low vacuum and in thermal equilibrium, we apply incremental changes in heater power and record the steady-state temperature at each heat load (see Fig. 5c). Each experiment includes a complete heating and cooling cycle that steps up from zero power to maximum power (corresponding to a temperature of 100 °C), and then back down to zero power. This generates a power-temperature characteristic as shown in Fig. 5c. There is a high sample-to-ambient thermal resistance due to the deliberate thermal isolation of the sample. The high resistance leads to a long thermal time constant, and each data point is collected after 45 min, when a steady temperature is reached.

The primary parasitic loss in the experiment is due to conduction into the wire bundle that connects the heater to the chamber feedthrough. The rate of heat loss Q_loss is independent of the sample being tested and is only a function of the heater temperature T. We measure the temperature-dependent heat loss characteristic Q_loss(T) for the experimental setup by measuring the relationship between heat dissipation and temperature rise for a set of gold mirror samples with a constant, low emissivity.
By letting Q_rad,net be defined by the Stefan-Boltzmann equation for a gray body of known emissivity ε in a vacuum at temperature T_0, Eq. (1) can be rewritten as

I × V = ε σ A (T⁴ − T_0⁴) + Q_loss(T),    (6)

where A is the total sample surface area (A = 3.3 cm² in this work), Q = I × V is the applied Joule heat load, I is the driving current, and V is the voltage drop across the resistive heating element. To calibrate Q_loss, we use a low-emissivity sample made using polished silicon with evaporated gold (ε ≈ 0.05). We generate the temperature response T as a function of Q, as shown in Fig. 5c, across a complete heating and cooling cycle. We then calculate Q_loss(T) from Eq. (6) and fit the calibration to a linear function, since the range of temperatures is relatively small (less than 100 °C). The calibration curves and extracted loss function are shown in Fig. 5d.
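To illustrate how the calibrated model extrapolates to the radiation-only limit discussed above, the following is a minimal forward-Euler integration of the lumped heat balance. The step-function emissivity, the transition temperature of ~70 °C, and the time step are our simplifying assumptions (the real device exhibits roughly 20 °C of hysteresis); only the fitted ρCL_C, the measured emissivities, and the sample area are taken from the text.

```python
import numpy as np

SIGMA = 5.670e-8      # Stefan-Boltzmann constant [W/(m^2 K^4)]
RHO_C_LC = 5500.0     # fitted effective areal heat capacity rho*C*L_C [J/(m^2 K)]
T0 = 273.6            # ambient temperature [K]
AREA = 3.3e-4         # total sample surface area [m^2]

def emissivity(temp_k):
    # Idealized step: 0.22 (insulating) below ~70 C, 0.46 (metallic) above.
    # The measured device shows ~20 C of hysteresis, neglected here.
    return 0.22 if temp_k < 343.0 else 0.46

def simulate(q_low=0.037, q_high=0.146, period_s=4 * 3600, cycles=4, dt=1.0):
    """Integrate the lumped heat balance for a square-wave heat load [W]."""
    temp = T0
    history = []
    for step in range(int(cycles * period_s / dt)):
        q_in = q_high if (step * dt) % period_s < period_s / 2 else q_low
        # Radiation-only (space) limit: the parasitic Q_loss term is dropped.
        net_flux = q_in / AREA - emissivity(temp) * SIGMA * (temp**4 - T0**4)
        temp += (net_flux / RHO_C_LC) * dt
        history.append(temp)
    return np.array(history)

temps = simulate()
last_cycle = temps[-int(4 * 3600):]   # samples of the final period
# Prints roughly 45 K for these parameters, comparable in magnitude to the
# ~50 degC fluctuation reported for the VO2 device.
print(f"temperature fluctuation ~ {np.ptp(last_cycle):.0f} K")
```

A notable feature of the sketch is that at the low power level the temperature pins itself near the transition: the insulating state heats toward the threshold and the metallic state cools back below it, which is the self-regulation mechanism in miniature.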
Patients' experiences with tele-mental health services during COVID-19 in Pakistan

Background: Although the concept of telehealth is of great interest globally, its potential has not yet been realized in Pakistan. It is therefore essential to explore the perspectives of stakeholders on the technology, particularly for mental health, to be able to increase and improve its use.

Aim: To assess the perceptions and experiences of patients receiving tele-mental health services, including telepsychiatry and tele-psychotherapy, in Pakistan.

Methods: For this qualitative exploratory study, we conducted in-depth interviews with 49 individuals at a tertiary care hospital in Karachi, Pakistan. Using the Creswell framework for content analysis, we identified 3 major themes that focused on the positive and negative aspects of tele-mental health services and made suggestions for enhancing them.

Results: Twenty-six of the participants received telepsychiatry, while the remaining 23 received tele-psychotherapy services. Technical literacy, cost of consultation, privacy, and therapeutic alliance were the major challenges identified by the patients, while convenience and the absence of stigma were highlighted as key facilitators for tele-mental health. Tele-consultations reduced travel and waiting time, thus improving access to healthcare. Participants suggested that the processes for booking appointments and making payments should be streamlined and the cost of tele-consultation reduced.

Conclusion: This study provides insightful findings on tele-mental health services from the perspectives of patients living in an Asian culture. The major benefits highlighted were the destigmatization of mental health and the elimination of commuting costs and travel time. There were concerns about privacy, the therapeutic alliance, and the availability and affordability of the technology.

Background

Originating in Wuhan, China, the SARS-CoV-2 virus rapidly engulfed the globe. COVID-19 was officially declared a pandemic by the World Health Organization on 11 March 2020 (1). As of 19 January 2022, there were 332 617 707 confirmed cases globally, with 5 551 314 deaths (2). Pakistan reported its first case of COVID-19 in February 2020, and as of 19 January 2022, a total of 1 333 521 confirmed cases and 29 029 deaths were reported (2). To minimize the spread of the infection, healthcare systems rapidly adopted alternative models for health care delivery, including telehealth services.
Pakistan, a developing South Asian country with a population of over 220 million, is the world's fifth most populous country (3). Karachi is the largest city in Pakistan and a primary commercial centre. It is situated at the southern tip of the country, along the Arabian Sea coast. The city has an official population of 20.3 million, with an annual growth rate of 4.1% (3). The population has a diverse linguistic, ethnic, religious, cultural and socioeconomic background. In the city, and indeed across the entire country, the healthcare system is overburdened and inefficient due to unequal access, poor governance, poverty and lack of accountability (4). Numerous barriers limit access to quality mental health care: these include population density, shortage of mental health professionals, considerable distances between health care centres, underfunding, and the stigma associated with mental illness (5). Concurrently, the COVID-19 pandemic posed a serious threat to mental health by elevating anxiety, depression, post-traumatic stress disorder and negative societal behaviours (6,7). The uncertainty, helplessness and fear resulting from this outbreak have traumatized much of the population.

Against the background of these challenges, tele-mental health has emerged as the most efficient and most accessible means of providing mental health care for the broader population. Aga Khan University is a private, tertiary care teaching hospital located in Karachi; it provides multi-specialty services in a single location. During the COVID-19 pandemic, psychiatry and psychotherapy outpatient services at the hospital were transitioned to virtual modes to reduce the risk of COVID-19 transmission and to meet the increasing mental health demands (5). The use of technology in the health sector in Pakistan, as in other developing countries, is in its early phase. The wide acceptance and subsequent success of any new technology depends primarily on factors like users' understanding of the new concept, the skills required for its successful implementation, and a working environment conducive to the adoption of new technology (8). Thus, for tele-mental health services to be successfully integrated into the Pakistani health care sector, it is essential to explore the experiences of stakeholders with the technology. Some studies from developed countries have explored patients' and providers' experiences of virtual mental health services (9)(10)(11)(12)(13). Technology-based health services are in the emergent phase in Pakistan; hence, our study aimed to explore the perceptions and experiences of patients receiving virtual services, including telepsychiatry and tele-psychotherapy, during the COVID-19 pandemic in Karachi.

Methods

This study was conducted in accordance with the guidance provided in the Standards for Reporting Qualitative Research (14). None of the investigators had any relationship with any of the participants prior to the study. We used a qualitative exploratory design with semi-structured interviews.
The study was conducted at the Psychiatry Department of a private tertiary care teaching hospital in Karachi, Pakistan, and the study population included all those patients who received tele-mental health services, including telepsychiatry and tele-psychotherapy, during the period April 2020 to August 2020. The inclusion criteria were: adult patients, both male and female, receiving virtual mental health services during the study period; having valid contact details in their records; willing to participate and reflect on their experiences in Urdu or English; and providing informed consent. Participants who did not receive tele-mental health services during the study period or who refused to participate were excluded.

A list of patients accessing the telepsychiatry and tele-psychotherapy mental health services was obtained from the Psychiatry Department database of the hospital. We selected every alternate patient from the list (n = 248). Contact details were extracted from the database and eligible participants were given an introductory phone call to explain the study and to assess their willingness to participate. Only 6 participants refused to be interviewed. Those who gave consent were given an appointment for interview. We interviewed patients until data saturation was achieved.

In-depth interviews were conducted using a semi-structured interview guide developed by the authors after an extensive review of the current literature (9,15). The interview guide was reviewed by a team of experts, including psychologists and psychiatrists, and was pretested on 5 participants to assess language and clarity; data from the pretest group were not included in the analysis. Following this process, questions were refined and further improved for the final version of the guide (Table 1).

Due to the lockdowns and the rapid spread of the infection, interviews were conducted via telephone by a trained co-investigator during September and November 2020. The interviews were conducted from an office in the Psychiatry Department, ensuring privacy. Before the interview, participants were asked whether they had any queries. Study participants were assured that their information would be kept confidential and that none of their identifying features would be used. Participants provided verbal consent along with permission to record the audio. Interviews lasted approximately 25-30 minutes, and were all conducted in Urdu according to the participants' preferences. The recorded interviews were saved on a password-protected computer with a code to ensure security and privacy. Data collection was stopped on reaching saturation, signifying the point at which patients' narratives no longer generated new themes or information.

The data were analysed using Creswell's content analysis framework (16). This framework comprises 5 steps, as detailed in Figure 1.
For the first step, the recorded interviews were transcribed in Urdu and then translated into English by a co-investigator who is fluent in both languages. The translated data were double-checked by reading the transcripts and listening to the recorded interviews. A code number was allocated to each transcript to maintain the anonymity of the participants. The transcripts were read by the researchers several times to thoroughly examine the data and the results. The third step involved describing, classifying and interpreting the data into codes and themes. During this phase, researchers extracted meaning units from the transcripts. The coding of data was carried out by 2 investigators independently using a colour-coding method to highlight codes having a similar meaning. All codes that seemed consistent in meaning and which had the same colour were aggregated to form categories. Each category was compared by the researchers independently and then assessed together. From the authors' consensus on categories, multiple themes emerged. The findings of the study were then compared with previously reported research to verify and identify new findings. The last step comprised representing and visualizing the data in the form of a table that illustrated the comprehensive findings.

Permission to carry out the study and ethical approval were obtained from the Aga Khan University ethics review committee. Verbal informed consent was given by the study participants before being interviewed. The authors assert that all procedures contributing to this work comply with the ethical standards of the relevant national and institutional committees.

From the data analysis, 3 major themes emerged (Table 3):

1. Challenging factors of tele-mental health consultation during the pandemic
2. Facilitating factors of tele-mental health consultation during the pandemic
3. Suggestions to improve tele-mental health services

The first theme was constructed from the categories "technology", "cost of consultation", "privacy" and "therapeutic alliance". Some of the specific difficulties for each category are as follows:

Technology

The major challenges reported by most of the participants were technological issues such as frequent disconnection, gadget availability, poor internet quality, signal issues, audio/visual lag, and no electricity.

"The internet connection issue made things difficult; the doctor did not understand what I was saying and I did not understand what the doctor was saying. It was annoying."

"Physical presence is very important for me - to talk to a person. I don't talk to my friend on the phone for more than 5 minutes, so 40 minutes consultation on the phone was not comfortable for me at all."

Access to devices was another challenging factor for a few participants.

"My laptop is for family use. Me and my brother take online classes also, so availability of laptop and consultation timings are important."

There were times when the participants could not access the online link provided by the hospital. Several participants described being disappointed because their consultant was not technology savvy and had little experience with virtual equipment, or was not interested.

"Medical files were not present for the session, so we had to use chat box. The doctor was not aware of technology; he had to call his assistant to see the shared screen."

"Teleclinic was for a short duration as compared to face-to-face clinic."

"Appointment was given for 2pm and doctor got online at 3pm."
Cost of consultation

Most of the participants reported that they saw technology as a major advantage that saved the hospital resources. They also felt that if hospital resources are not used, there should be reduced costs for virtual consultation.

"I think there should be a lot of cost-cutting in online consultation."

"Staff do not check weight, height and blood pressure and they are less involved so there should be difference in consultation charges."

Some participants said there should be some concession in virtual consultation charges for more frequent consultations.

"I live in Quetta and always do teleclinic. My appointment is every month so I think there should be reduced charges for repeated appointments."

However, a few participants believed that the cost should be the same.

"Cost should be the same because the doctor is providing the same amount of time, care and involvement in tele-therapy."

Privacy

Another important obstacle highlighted by the participants was the problem of privacy. Participants felt uncomfortable if their family members were around during the online consultation.

"There are a few things which we can share with only the doctor. My wife becomes suspicious when I tell her to go out of the room. Practitioner should ask patients' family to leave room during online consultation."

"During therapy, most of my discussion is about my home issues so I prefer it should be outside the home."

Some participants stated that they were afraid of eavesdropping.

"During in-person appointments, I feel assured and secure as no family member is with me. At home there is always fear of other people hearing my conversation."

"I was sitting in my room, but the family was trying to hear the conversation."

Due to the COVID-19 lockdown, people were homebound and working from home. Securing a private space at home for a virtual consultation was a huge challenge for some of the participants. Against the setting of a collectivistic culture in Pakistan, where the majority of families live in extended family settings and notions of personal space and privacy are considered "western" concepts, having a private conversation while at home was not easy.

"During the lockdown, I was working from home, had telepsychiatry at home, had repeated phone calls from the office, and kids sometimes would come inside the room and these caused distractions and privacy issues."

"It was difficult to get space at home for online consultation and if space was available there was a connectivity issue."

Therapeutic alliance

Participants thought that a possible disadvantage of virtual consultation could be the lack of a therapeutic alliance, which is considered an important factor in achieving a positive outcome.

The second theme was constructed from the categories "convenience" and "de-stigmatizing mental health".

Convenience

The greatest benefit participants experienced was that they did not have to travel an entire day to meet the physician. Participants described virtual consultation as a financial gain as opposed to expensive travelling.

"Telepsychiatry reduces the cost and travel time."

"Teleclinics are a very good initiative and feasible for those who are very busy and are unable to visit hospital because it saves time."

Some participants said they wanted to have the opportunity for video consultation in the future so that they would not have to wait in the clinic for many hours.

"Teleclinic is flexible; a lot of hassle is removed like it cuts transport time and waiting time in clinic."
"I was getting my psychotherapy session almost every week. I am comfortable having it online because I don't have to wait in clinic for 45 minutes. During the waiting time in clinic, I become anxious and sometimes think that the doctor has missed me. I can't even go to the rest room because I always think that I may miss my turn." Participants were aware of the fact that virtual consultation provides opportunities to gain access to relevant expertise, without the burden of having to travel long distances."I live in Quetta which is far from Karachi, therefore, telepsychiatry is more convenient for me." De-stigma tizing mental health The stigma attached to mental illness is huge in the Pakistani context.Several participants said telepsychiatry and tele-psychotherapy are effective in countering the stigma associated with mental health problems. "Mental illness attracts stigma and shame. People experience stigma for seeking psychotherapy and don't want to visit the clinic physically." "Culturally, virtual consultation is better as it removes taboos and no need to tell other people about your mental issues." This theme was constructed from the categories "streamlin ing the processes for making appointments and payment" and "subsidizing cost of virtual consultation". Streamlining the processes for making appointments and payment Participants suggested streamlining the appointment system and the payment process to improve teleconsultation.They said it would be easier if they could make payments at facilities near their residence, e.g. bank branches in their neighbourhood. "Bank payment should be streamlined to speed up the process.""The payment process was an issue.It should be streamlined, especially for regular appointments."Some of the participants had a hard time getting an appointment and they highlighted improving the system for making appointments. "Appointment should be physical, but in a crisis, it should be through email. The appointment process should be streamlined." "It was very tough to get an appointment due to difficult booking methods." "I had to wait for 10 minutes to get an appointment." In addition to streamlining the appointment process, a few participants said there should be flexibility in rescheduling the appointment. "Patients should be allowed to reschedule calls and have appointment flexibility." Subsidizing cost of virtual consultation Most of the participants suggested reducing the cost of virtual consultation. "Reduce the cost of online consultation to improve the teleclinic." Most of the participants stated that hospital resources were not used during the online consultation, suggesting that the cost of virtual consultations should be subsidized. "Since hospital resources are not used, costs should be reduced."Some participants suggested that, while the cost of the initial consultation should remain the same, there ought to be a reduced fee for follow-up consultations. "The cost should be reduced, especially for follow-ups, to improve services." 
Discussion

This study identified several challenges and facilitators experienced by patients while receiving virtual consultations for mental health during the COVID-19 pandemic. One of the major hurdles was technical difficulties relating to access to technology, acceptance and technical literacy: on the whole, digital literacy was a major challenge for patients. Our participants reported concerns consistent with previously documented evidence relating to technical issues, such as poor bandwidth, connectivity issues and audio lag (17,18). The most commonly reported issues in our study were frequent breakdowns in connectivity, power outages and voice distortions. The availability of devices such as a smartphone or laptop for individual sessions was sometimes a challenge, as it was common for one laptop to be shared by the whole family. Participants said the consultants lacked competence in using technology, and this aligned with similar findings reported by Imlach et al. in New Zealand, which indicated that physicians often lacked the knowledge to operate online tools during virtual consultations (19). Other research has shown that experienced physicians lack training in telemedicine (20), an indication of the pressing need among patients and clinicians to overcome technological barriers for telehealth to be successful.

From the patient's viewpoint, it seemed that the therapeutic alliance was compromised, primarily due to the absence of physical presence. For nearly all our patients this was the first time using a virtual approach to discuss their personal lives, leaving them with an uncomfortable feeling. Previous research had indicated a relationship between therapeutic alliance and e-therapy outcomes (21). In a typical therapy room, body language and facial expressions are important aspects of the therapeutic alliance, which are not experienced in the same way when communicating via a computer screen. In Asian culture, where interpersonal connectedness is intricately woven into the social fabric, communicating through a screen does not give the same experience.

Privacy was a concern of most participants, which is indicative of the family systems in Pakistan, as the majority live in the same households with their extended family. As a result of this lack of space, there is an increased risk of other family members overhearing confidential therapy sessions.

Along with reporting these challenges, participants highlighted the positive factors of tele-mental health services. Virtual consultation reduced waiting time, enhanced access to care, and reduced travel time, as indicated in other studies conducted in Pakistan and elsewhere (19,22). Similar to the findings of Imlach et al. (19), our participants reported that the teleclinic was cost-effective as it helped avoid travelling expenses and reduced the risk of cross-infection during the pandemic. A recent systematic review on mental health care in Pakistan reported that time and distance constraints are among the barriers to seeking mental health services (23). Indeed, most mental health facilities are only available in urban areas, making it difficult for individuals living in rural areas to travel to urban centres, manage their time, cover transportation costs, and take leave from their employers. Hence, tele-mental health provides a platform for mental health care delivery to remote and rural areas as well as metropolitan areas, thus improving access to care.
Another advantage of virtual consultation reported by our participants was that they did not feel stigmatized for seeking mental health services. Globally, it has been estimated that 792 million people have a mental disorder (24). Effective treatments are available, yet nearly two-thirds of those with a known mental disorder never seek help from a health professional because of stigma and discrimination (25). The stigma associated with mental health disorders remains a significant challenge in Pakistan. Negative societal attitudes and misconceptions contribute to social exclusion, discrimination and reluctance to seek mental health services. In fostering the de-stigmatization of mental health in Pakistan, tele-mental health services have emerged as a pivotal approach, improving access and addressing mental health needs.

Participants suggested ways of improving the tele-mental health services. These include streamlining the processes for making appointments and payments to improve teleconsultation. Previous research has shown that, although teleconsultation provides a variety of opportunities, payment systems can become a significant obstacle to the optimal use of tele-services (26). It is important to offer user-friendly virtual services to improve access and acceptability (27). Continuous audits and quality assurance measures could be implemented to improve the processes.

Our participants suggested that the cost of teleconsultation should be reduced because fewer facility resources are used during virtual consultations, e.g. no initial health assessment is made by paramedical staff and no cost is incurred on the physical facility or utilities. It is vital that cost-effective, convenient, safe and acceptable virtual services are offered to all stakeholders and that patients feel they have benefited from the consultation (27).

The major strength of our study is that this was the first qualitative study in Pakistan to explore tele-mental health services from the patients' perspectives during the COVID-19 pandemic. The study examined the combined experiences of a sufficiently large group of patients using telepsychiatry and tele-psychotherapy and provided valuable data on the issue. Our findings have important implications for strengthening tele-mental health services in low- and middle-income countries.

The study had a few limitations. Of major concern is the use of telephone interviews. These captured the participants' experiences through verbal content alone: any accompanying nonverbal or emotional expressions were left unrecorded. Telephone interviews were challenging, primarily due to technical issues like distortions, poor connectivity and busy transmission lines, frequently resulting in multiple attempts to reach the participant. A further limitation resulted from the household arrangements of some patients. Although the interviewer ensured the comfort of the participants during the interview, in many cases the presence of family members in the room hindered patients from expressing themselves openly. To ensure privacy and address technical issues, interviews were occasionally longer than usual to achieve the desired depth of information from our participants. The fact that the study was conducted in a single tertiary care teaching hospital in Karachi may also be considered a limitation. These constraints limit the generalisability of the study findings; however, the findings provide valuable insights into patients' experiences of virtual mental health consultations.
Conclusion

This qualitative study provides insights on the perceptions and experiences regarding tele-mental health consultations from the perspective of patients living in an Asian culture. The key findings of the study indicate several benefits of virtual consultations, including de-stigmatizing mental health, convenience and cost-effectiveness due to the elimination of commuting costs and travel time. However, certain challenges were identified, including concerns relating to privacy and the therapeutic alliance, as well as technology issues such as access to communications devices and proficiency in using virtual tools by both patients and physicians. It is imperative that healthcare providers acquire the skills needed to operate tele-health technology and work with their institutions to develop policies relating to confidentiality, privacy and cyber safety. The COVID-19 pandemic has provided an opportunity to revamp health curricula: medical education programmes should now include modules on professional and ethical standards for virtual consultations.

Another opportunity emerging from the pandemic is the stabilization and enhancement of tele-consultation services. Future qualitative exploration of tele-health could include face-to-face interviews to improve the generalizability and understanding of patients' perspectives vis-à-vis improvements to the service.

Our findings have important implications for strengthening tele-mental health services in Pakistan and integrating a digital health-based model for improving access among the general population, especially in remote areas.

Figure 1. Creswell framework for qualitative data analysis
2024-05-16T15:13:09.889Z
2024-05-13T00:00:00.000
{ "year": 2024, "sha1": "0ff3b22203412b650af115c742c352a2342e2fe4", "oa_license": null, "oa_url": "https://doi.org/10.26719/2024.30.4.283", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "f3344f4385f4d48917c01f42498795d90c686201", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [] }
118384680
pes2o/s2orc
v3-fos-license
Model-independent distance calibration of high-redshift gamma-ray bursts and constraint on the $\Lambda$CDM model

Gamma-ray bursts (GRBs) are luminous enough to be detectable up to redshift $z\sim 10$. They are often proposed as complementary tools to type-Ia supernovae (SNe Ia) in tracing the Hubble diagram of the Universe. The distance calibrations of GRBs usually make use of one or more of the empirical luminosity correlations, such as the $\tau_{\rm lag}-L$, $V-L$, $E_p-L$, $E_p-E_{\gamma}$, $\tau_{\rm RT}-L$ and $E_p-E_{\rm iso}$ relations. These calibrating methods are based on the underlying assumption that the empirical luminosity correlations are universal over the whole redshift range. In this paper, we test the possible redshift dependence of six luminosity correlations by dividing GRBs into low-$z$ and high-$z$ classes according to whether their redshift is smaller or larger than 1.4. It is shown that the $E_p-E_{\gamma}$ relation for low-$z$ GRBs is consistent with that for high-$z$ GRBs within $1\sigma$ uncertainty. The intrinsic scatter of the $V-L$ relation is too large to make a convincing conclusion. For the remaining four correlations, however, low-$z$ GRBs differ from high-$z$ GRBs at more than $3\sigma$ confidence level. As such, we calibrate GRBs using the $E_p-E_{\gamma}$ relation in a model-independent way. The constraint of high-$z$ GRBs on the $\Lambda$CDM model gives $\Omega_M=0.302\pm 0.142$ ($1\sigma$), well consistent with the Planck 2015 results.

Several empirical luminosity correlations can be used in the calibration. Norris, Marani & Bonnell (2000) found a correlation between spectrum lag and isotropic peak luminosity (the τ_lag − L relation). Fenimore & Ramirez-Ruiz (2000) found a correlation between time variability and isotropic peak luminosity (the V − L relation). Amati et al. (2002) found a tight correlation between the peak energy of the νF_ν spectrum and isotropic equivalent energy (the E_p − E_iso relation). Ghirlanda, Ghisellini & Lazzati (2004) found a similar correlation between peak energy and collimation-corrected energy (the E_p − E_γ relation). Yonetoku et al. (2004) found a correlation between peak energy and isotropic peak luminosity (the E_p − L relation). Schaefer (2007) found a correlation between the minimum rise time of the light curve and isotropic peak luminosity (the τ_RT − L relation).

All of the calibrating methods based on the empirical luminosity correlations have an underlying assumption, that is, that the luminosity correlations do not evolve with redshift. If the luminosity correlations are not universal over the whole redshift range, these calibrating methods will fail. In fact, the possible redshift dependence of luminosity correlations has already been tested by some authors. Basilakos & Perivolaropoulos (2008) investigated the above six empirical luminosity correlations in four redshift bins, and showed that the slopes of all six correlations differ between redshift bins, although the intercepts do not vary significantly. Since the GRB sample is not large enough in each bin, the statistical uncertainty is large. Therefore, they concluded that no statistically significant evidence for the redshift evolution of the luminosity correlations was found. With the updated data, Wang, Qi & Dai (2011) reached a similar conclusion. However, Li (2007) investigated the Amati relation in four redshift bins and showed that the slope and intercept vary with redshift systematically and significantly. Recently, Lin et al.
(2015) divided GRBs into two redshift bins, and found that the Amati relation (especially the slope parameter) of low-z GRBs differs from that of high-z GRBs at more than 3σ confidence level. Dainotti et al. (2013) investigated the slope evolution of GRB correlations and showed that a correlation slope that differs from the intrinsic one may overestimate or underestimate the cosmological parameters.

In this paper, we recheck the possible redshift dependence of six luminosity correlations. We divide GRBs into low-z and high-z classes according to whether their redshift is smaller or larger than 1.4, and test the luminosity correlations for low-z and high-z GRBs, respectively. The main difference between our work and that of Wang, Qi & Dai (2011) is that we divide GRBs into only two redshift bins, so that the number of GRBs in each bin is large enough for statistical analysis. We choose z = 1.4 as the threshold because the redshift of SNe Ia is usually smaller than 1.4, and the Universe below this redshift has already been tightly constrained. We find that, among the six luminosity correlations, only the E_p − E_γ relation is consistent between low-z and high-z GRBs within 1σ uncertainty. As such, we can calibrate GRBs through the E_p − E_γ relation using the Padé approximation proposed by Liu & Wei (2014), and the Hubble diagram of GRBs can be constructed. The rest of the paper is arranged as follows: In section 2, we test the redshift dependence of the six luminosity correlations. In section 3, we calibrate the distance of high-z GRBs using the E_p − E_γ relation, and then use them to constrain the ΛCDM model. Finally, a short summary is given in section 4.

TESTING THE REDSHIFT DEPENDENCE OF LUMINOSITY CORRELATIONS

All six luminosity correlations mentioned above have the exponential form R = AQ^b, which can be linearized by taking the logarithm, i.e.,

log R = a + b log Q,  a ≡ log A,   (1)

where "log" represents the logarithm of base 10. For the sake of clarity, we write the six luminosity correlations explicitly here:

log L = a_1 + b_1 log τ_lag,i,   (2)
log L = a_2 + b_2 log V_i,   (3)
log L = a_3 + b_3 log E_p,i,   (4)
log E_γ = a_4 + b_4 log E_p,i,   (5)
log L = a_5 + b_5 log τ_RT,i,   (6)
log E_iso = a_6 + b_6 log E_p,i,   (7)

where quantities with a subscript "i" represent the quantities in the comoving frame, which can be transformed to the observer frame by τ_lag,i = τ_lag/(1 + z), V_i = V(1 + z), E_p,i = E_p(1 + z) and τ_RT,i = τ_RT/(1 + z). The isotropic peak luminosity L can be calculated from the bolometric peak flux P_bolo as (Schaefer 2007)

L = 4π d_L^2 P_bolo,   (8)

where d_L is the luminosity distance. The bolometric peak flux P_bolo is calculated from the observed peak photon flux in the rest frame 1 − 10,000 keV energy band by assuming the Band spectrum (Band et al. 1993). The luminosity distance depends on a specific cosmological model. In the concordance ΛCDM model, it is given as

d_L = (c(1 + z)/H_0) ∫_0^z dz' / √(Ω_M(1 + z')^3 + 1 − Ω_M),   (9)

where Ω_M is the matter density, H_0 is the Hubble constant, and c is the speed of light. Here we take Ω_M = 0.280 and H_0 = 70.0 km s^−1 Mpc^−1 from fitting to the Union2.1 dataset (Lin et al. 2015). The uncertainty of L propagates from the uncertainty of P_bolo, while that from d_L is absorbed into the intrinsic scatter. The isotropic equivalent energy E_iso can be calculated from the bolometric fluence S_bolo as (Schaefer 2007)

E_iso = 4π d_L^2 S_bolo (1 + z)^−1.   (10)

Similar to the bolometric peak flux, the bolometric fluence S_bolo also corresponds to the rest frame 1 − 10,000 keV energy band. For E_iso, we also only consider the error propagation from S_bolo. The collimation-corrected energy, E_γ, is the isotropic equivalent energy multiplied by a beaming factor F_beam ≡ 1 − cos θ_jet, where θ_jet is the jet opening angle, i.e.,

E_γ = F_beam E_iso.   (11)

The uncertainty of E_γ propagates from the uncertainties of both S_bolo and F_beam. The error propagation from Q to log Q is given as

σ_log Q = σ_Q / (Q ln 10),   (12)

where "ln" represents the natural logarithm.
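As a concrete illustration of the quantities defined in Eqs. (8)-(12), the following minimal Python sketch (our illustration, not part of the original analysis) evaluates d_L in the fiducial ΛCDM model and converts a burst's P_bolo, S_bolo and θ_jet into L, E_iso and E_γ; the burst parameters used below are hypothetical.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458   # speed of light [km/s]
H0 = 70.0             # Hubble constant [km/s/Mpc]
OMEGA_M = 0.280       # matter density
MPC_CM = 3.0857e24    # 1 Mpc in cm

def d_lum_cm(z):
    """Luminosity distance in flat LambdaCDM, Eq. (9), returned in cm."""
    integrand = lambda zp: 1.0 / np.sqrt(OMEGA_M * (1 + zp)**3 + 1 - OMEGA_M)
    integral, _ = quad(integrand, 0.0, z)
    return (C_KM_S / H0) * (1 + z) * integral * MPC_CM

def sigma_log(Q, sigma_Q):
    """Error propagation from Q to log10(Q), Eq. (12)."""
    return sigma_Q / (Q * np.log(10.0))

# hypothetical burst: z, P_bolo [erg/cm^2/s], S_bolo [erg/cm^2], jet angle [rad]
z, P_bolo, S_bolo, theta_jet = 1.0, 2.0e-6, 1.0e-5, 0.1

dL = d_lum_cm(z)
L = 4 * np.pi * dL**2 * P_bolo                  # Eq. (8)
E_iso = 4 * np.pi * dL**2 * S_bolo / (1 + z)    # Eq. (10)
E_gamma = (1 - np.cos(theta_jet)) * E_iso       # Eq. (11)
print(f"L = {L:.3e} erg/s, E_iso = {E_iso:.3e} erg, E_gamma = {E_gamma:.3e} erg")
print(f"sigma_logL = {sigma_log(L, 0.1 * L):.3f}")  # assuming a 10% flux error
```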
If Q has a nonsymmetric error, we symmetrize it by taking the average, i.e., σ_Q = (σ_Q^+ + σ_Q^−)/2.

To test the possible redshift dependence of the luminosity correlations, we analyze the GRB sample taken from Wang, Qi & Dai (2011). This sample consists of 116 long GRBs in the redshift range z ∈ [0.17, 8.2]. This dataset is a collection of GRBs with well-measured spectral properties from various instruments, such as BATSE, Konus and Swift. We divide GRBs into two subsamples according to whether their redshift is smaller or larger than 1.4, and call them the low-z and high-z subsamples, respectively. We choose z = 1.4 as the threshold because the redshift of SNe Ia is usually smaller than 1.4. The Universe below this redshift has already been well studied using SNe Ia (Amanullah et al. 2010; Suzuki et al. 2012; Betoule et al. 2014). The low-z and high-z subsamples consist of 50 and 66 GRBs, respectively.

We fit each luminosity correlation to the two subsamples separately. Since the plot of each correlation in the xy plane shows large error bars on both the horizontal and vertical axes, and the intrinsic scatter dominates over the measurement error, the ordinary least-χ^2 method does not work well. We apply the fitting method presented in D'Agostini (2005). The best-fit parameters (a, b, σ_int) can be derived by maximizing the D'Agostini's likelihood,

L(a, b, σ_int) ∝ ∏_i [σ_int^2 + σ_{y_i}^2 + b^2 σ_{x_i}^2]^{−1/2} exp[ −(y_i − a − b x_i)^2 / (2(σ_int^2 + σ_{y_i}^2 + b^2 σ_{x_i}^2)) ],   (13)

where the intrinsic scatter σ_int represents any other unknown errors except for the measurement error. Equivalently, we can minimize the χ^2,

χ^2 = Σ_i [ ln(σ_int^2 + σ_{y_i}^2 + b^2 σ_{x_i}^2) + (y_i − a − b x_i)^2 / (σ_int^2 + σ_{y_i}^2 + b^2 σ_{x_i}^2) ].   (14)

We use the publicly available Matlab package FMINUIT to derive the best-fit parameters and their uncertainties. The results are listed in the fourth to sixth columns of Table 1. This table gives the mean values of the best-fit parameters and their 1σ uncertainties. Note that not all GRBs are available in the analysis of each luminosity correlation. For example, GRBs without a measurement of the jet opening angle are unavailable in the E_p − E_γ analysis, while GRBs having no spectrum lag measurement are invalid in the τ_lag − L analysis. For this reason, we also list the number of available GRBs in each fitting in the third column of Table 1.

Table 1. The intrinsic scatters (σ_int), intercepts (a) and slopes (b) of six luminosity correlations for low-z and high-z GRBs, derived from maximizing the D'Agostini's likelihood. The quoted errors are of 1σ. N is the number of GRBs available in the fitting.

All six luminosity correlations are plotted in Figure 1 in logarithmic coordinates. Low-z and high-z GRBs are denoted by black and red dots, respectively. The error bars represent 1σ uncertainties. Since the Swift/BAT instrument is only sensitive in a narrow energy band (∼ 15 − 150 keV), the uncertainties of the peak energy of some GRBs are extremely large. The lines stand for the best-fit results (black line for low-z GRBs and red line for high-z GRBs). Besides, we also plot the 1σ, 2σ and 3σ contours in the (a, b) plane for low-z (black curves) and high-z (red curves) GRBs in Figure 2. The best-fit central values are denoted by dots.

From Table 1 and Figure 1, we can see that among the six luminosity correlations, the V − L relation has the largest intrinsic scatter, while the E_p − E_γ relation has the smallest intrinsic scatter. The intrinsic scatter of the V − L relation is so large that it is unreasonable to fit it with a line. For all six luminosity correlations, high-z GRBs have a larger intercept, but a smaller absolute slope than low-z GRBs, although the difference of intercepts between low-z and high-z GRBs is not as significant
as that of slopes. The slope difference of the τ_RT − L relation is especially evident. This can be seen more clearly from the contour plots in the (a, b) plane in Figure 2. The E_p − E_γ relation of low-z GRBs is consistent with that of high-z GRBs within 1σ uncertainty. However, for the remaining five luminosity correlations, low-z GRBs differ from high-z GRBs at more than 3σ confidence level. Especially, there is no overlap between the 3σ contours of the two subsamples for the E_p − L relation. As for the Amati relation, we recover the results of Lin et al. (2015).

The results above are derived using D'Agostini's likelihood. Since the observed data points have significant errors on both the x-axis and y-axis, there is no unique method to determine the best-fit parameters. Reichart (2001) has constructed a likelihood which is slightly different from D'Agostini's one. To test whether the above results depend on the choice of a specific best-fit method, we also do a similar calculation using Reichart's likelihood. The Reichart's likelihood is written as (Reichart 2001)

L(a, b, σ_x, σ_y) ∝ ∏_i (1 + b^2)^{1/2} [σ_y^2 + σ_{y_i}^2 + b^2(σ_x^2 + σ_{x_i}^2)]^{−1/2} exp[ −(y_i − a − b x_i)^2 / (2(σ_y^2 + σ_{y_i}^2 + b^2(σ_x^2 + σ_{x_i}^2))) ],   (15)

where σ_x and σ_y are the intrinsic scatters along the x-axis and y-axis, respectively. The corresponding χ^2 is given as

χ^2 = Σ_{i=1}^{N} [ −ln(1 + b^2) + ln(σ_y^2 + σ_{y_i}^2 + b^2(σ_x^2 + σ_{x_i}^2)) + (y_i − a − b x_i)^2 / (σ_y^2 + σ_{y_i}^2 + b^2(σ_x^2 + σ_{x_i}^2)) ],   (16)

where N is the number of data points. The best-fit parameters are the ones which minimize the right-hand side of Eq. (16). The best-fit parameters and their 1σ uncertainties are listed in Table 2. The last column gives the "equivalent" total intrinsic scatter, which is calculated from σ_int ≡ (σ_y^2 + b^2 σ_x^2)^{1/2}. Comparing with Table 1, we can see from Table 2 that the E_p − E_γ relation has the smallest (while the V − L relation has the largest) intrinsic scatter, although the uncertainties of the intrinsic scatters in Table 2 are much larger. Using Reichart's likelihood, the parameters (especially the intrinsic scatter) cannot be well constrained. Reichart's likelihood leads to larger absolute slope parameters compared to D'Agostini's likelihood.

Figure 3 is the contour plot in the (a, b) plane. We can see an important common feature between the results derived from the two different likelihoods: only for the E_p − E_γ relation is the low-z subsample consistent with the high-z subsample within 1σ uncertainty. For the τ_lag − L and E_p − L relations, the low-z subsample still differs from the high-z subsample at more than 3σ confidence level. As for the τ_RT − L and E_p − E_iso relations, the low-z subsample differs from the high-z subsample at more than 2σ confidence level. The relatively lower significance is due to the larger uncertainties of the best-fit parameters. The uncertainties of the slope parameters of the V − L relation derived from Reichart's likelihood are extremely large. In a word, only the E_p − E_γ relation shows no significant evidence for redshift evolution. This conclusion does not depend on the choice of the best-fit method.

Reichart's likelihood differs from D'Agostini's one by an extra factor (1 + b^2)^{1/2}. Otherwise, these two likelihoods are identical (if we set σ_int^2 ≡ σ_y^2 + b^2 σ_x^2). D'Agostini (2005) pointed out that Reichart's likelihood has a problem: m^2 (in D'Agostini's notation m is the slope, our b) cannot be added tout court to 1, since m^2 is in general dimensional (although in our case it is dimensionless). The factor (1 + b^2)^{1/2} has the net effect of overestimating the slope. This is one reason why Reichart's likelihood leads to larger slope parameters relative to D'Agostini's likelihood. Therefore, we use the results of D'Agostini's likelihood when calibrating the distance of GRBs in the next section.

Figure 1.
The luminosity correlations for low-z (black) and high-z (red) GRBs. Error bars represent the 1σ uncertainties. The lines are the best-fit results, which are derived from maximizing the D'Agostini's likelihood.

DISTANCE CALIBRATION AND COSMOLOGICAL IMPLICATIONS

As we have shown that the E_p − E_γ relation does not significantly evolve with redshift, we can use it to calibrate GRBs. To avoid the circularity problem, the Padé method proposed by Liu & Wei (2014) is applied. The main calibrating procedures are as follows. Firstly, derive the distance-redshift relation of SNe Ia (here we use the Union2.1 (Suzuki et al. 2012) dataset) using the Padé approximation of order (3,2), i.e.,

μ(z) = (α_0 + α_1 z + α_2 z^2 + α_3 z^3) / (1 + β_1 z + β_2 z^2),   (17)

where the coefficients (α_0, α_1, α_2, α_3, β_1, β_2) and the corresponding covariance matrix are derived by fitting Eq. (17) to the Union2.1 dataset (see Liu & Wei (2014) for details). Assuming that the low-z GRBs trace the same Hubble diagram as SNe Ia, we can calculate the distance moduli of low-z GRBs directly from Eq. (17). The uncertainty of μ propagates from the uncertainties of the coefficients (α_i, β_i). Then the luminosity distance of low-z GRBs can be obtained using the relation

μ = 5 log(d_L / Mpc) + 25.   (18)

As d_L is known, the collimation-corrected energy can be further calculated from Eq. (11). Note that only 12 low-z GRBs and 12 high-z GRBs are available, since the others have no measurement of the jet opening angle. Then we fit the E_p − E_γ relation to the 12 low-z GRBs.

Table 2. The intrinsic scatters along the x-axis (σ_x) and y-axis (σ_y), intercepts (a) and slopes (b) of six luminosity correlations for low-z and high-z GRBs, derived from maximizing the Reichart's likelihood. The quoted errors are of 1σ. N is the number of GRBs available in the fitting. The last column gives the total intrinsic scatter σ_int ≡ (σ_y^2 + b^2 σ_x^2)^{1/2}.

By directly extrapolating the E_p − E_γ relation to high-z GRBs, we can inversely obtain the collimation-corrected energy for the 12 high-z GRBs from Eq. (5). Finally, we calculate the luminosity distance of high-z GRBs from Eq. (11), and then the distance moduli from Eq. (18). The uncertainty of the distance moduli propagates from the uncertainties of E_γ, S_bolo and F_beam, i.e. (Schaefer 2007),

σ_μ = (5 / (2 ln 10)) [ (σ_{E_γ}/E_γ)^2 + (σ_{S_bolo}/S_bolo)^2 + (σ_{F_beam}/F_beam)^2 ]^{1/2}.   (19)

The distance moduli of the 12 high-z GRBs and their 1σ uncertainties calibrated through the E_p − E_γ relation are listed in Table 3. We also plot the 12 high-z GRBs in the z − μ plane in Figure 4, where the black curve is the best-fit result for the ΛCDM model. The fit of the 12 high-z GRBs to the ΛCDM model gives Ω_M = 0.302 ± 0.142, well consistent with the Planck 2015 results (Ade et al. 2015). From Figure 4, we can see that the distance of GRB 060605 is much overestimated. This is because the E_p − E_γ relation overestimates the energy of GRB 060605 (see also the E_p − E_γ plot in Figure 1, where the red star represents this burst). On the contrary, the distance of GRB 060526 is underestimated because the E_p − E_γ relation underestimates its energy. The remaining 10 GRBs are consistent with the ΛCDM model within 1σ uncertainties.

For comparison, we also calibrate GRBs through the E_p − E_iso relation (the so-called Amati relation). In this case more GRBs are available, including 61 at high-z. The constraint of the 61 high-z GRBs on the ΛCDM model gives Ω_M = 0.805 ± 0.144, which is much larger than the Planck 2015 results. The reason for this can be easily understood. From the E_p − E_iso plot in Figure 1, we can see that high-z GRBs have on average a larger isotropic equivalent energy than low-z GRBs at the same E_p value.
Therefore, when extrapolating the Amati relation from low-z GRBs to high-z GRBs, the energy (and so the distance) of most high-z GRBs is underestimated. The underestimation of distance further leads to the overestimation of Ω_M. For this reason, we can predict that GRBs calibrated through the remaining four luminosity correlations (τ_lag − L, V − L, E_p − L and τ_RT − L) may also overestimate the value of Ω_M.

SUMMARY

In this paper, we checked the possible redshift dependence of six luminosity correlations in long GRBs. We divided GRBs into low-z and high-z subsamples according to whether their redshift is smaller or larger than 1.4. The slope and intercept parameters of the six luminosity correlations were derived by maximizing the D'Agostini's likelihood. For all six luminosity correlations, high-z GRBs seem to have a larger intercept, but a smaller absolute slope than low-z GRBs. It was shown that the intrinsic scatter of the V − L relation is too large to make a convincing conclusion. The E_p − E_γ relation has the smallest intrinsic scatter among the six, although the number of available GRBs is small. Most importantly, the E_p − E_γ relation shows weak redshift dependence. Strong evidence (> 3σ) for redshift evolution was found in the remaining four correlations. Similar features can be seen when we use Reichart's likelihood instead of D'Agostini's, although the statistical significance is lower.

We calibrated high-z GRBs using the E_p − E_γ relation in a model-independent way and reconstructed the Hubble diagram. The constraint of high-z GRBs on the ΛCDM model gives a matter density Ω_M = 0.302 ± 0.142, which is well consistent with the Planck 2015 results, although the error bar is large. Calibrating GRBs using the Amati relation, as was done by Liu & Wei (2014), in some cases may overestimate Ω_M. One of the disadvantages of using the E_p − E_γ relation, of course, is that only a small number of GRBs are available, since most GRBs have no measurement of the jet opening angle. We hope that future observations will enlarge the GRB sample so as to improve the statistical significance.
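For completeness, here is a sketch of the D'Agostini fit of Eq. (14) applied to mock data; scipy's Nelder-Mead minimizer stands in for the FMINUIT package used in the paper, and all numbers below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# mock correlation data: y = a + b*x with intrinsic scatter and measurement errors
a_true, b_true, s_true = 52.0, 1.5, 0.35
N = 60
x = rng.uniform(-1.0, 1.5, N)
sx = np.full(N, 0.08)                  # measurement error on x
sy = np.full(N, 0.10)                  # measurement error on y
y = (a_true + b_true * x
     + rng.normal(0, s_true, N)        # intrinsic scatter
     + rng.normal(0, sy)               # y measurement error
     + b_true * rng.normal(0, sx))     # x measurement error, propagated to y

def chi2(params):
    """Eq. (14): -2 ln(likelihood), up to an additive constant."""
    a, b, s_int = params
    var = s_int**2 + sy**2 + b**2 * sx**2
    return np.sum(np.log(var) + (y - a - b * x)**2 / var)

res = minimize(chi2, x0=[50.0, 1.0, 0.3], method="Nelder-Mead")
a_fit, b_fit, s_fit = res.x
print(f"a = {a_fit:.3f}, b = {b_fit:.3f}, sigma_int = {abs(s_fit):.3f}")
# abs() because chi2 depends only on s_int**2, so its sign is unconstrained
```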
2015-11-27T09:44:28.000Z
2015-07-19T00:00:00.000
{ "year": 2015, "sha1": "fb1f9d93a61959b1d5edc5184ee21dff42df27ab", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1507.06662", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "fb1f9d93a61959b1d5edc5184ee21dff42df27ab", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
225527549
pes2o/s2orc
v3-fos-license
Clinical Impact of Poorly Differentiated Cluster at the Invasive Front in Colorectal Cancer Invading beyond the Muscle Layer

Introduction: The clinicopathological significance of poorly differentiated cluster (PDC) at the invasive front in colorectal cancer (CRC) has been reported. We analyzed whether PDC reflects malignant findings in patients with CRC invading beyond the muscle layer. Patients and methods: Sixty-eight patients who underwent surgery between January 2015 and June 2016 for CRC invading to T3 or deeper (median observation period: 32.2 months) were enrolled. The relationship between PDC and clinicopathological factors was analyzed. PDC was graded based on the criteria described in a report by Ueno et al. Results: Tumor location was the proximal colon in 26 cases, the distal colon in 34 cases, and the rectum in eight cases. The number of cases with ly2,3 and v2,3 was 24 and 38, respectively. Thirty-eight cases were node positive and 11 cases had distant metastases, including 10 cases with hematogenous metastasis and four cases with peritoneal metastasis. The number of cases with stages II, III, and IV was 28, 28, and 12, respectively. The number of cases with PDC grades 1 (G1), 2 (G2), and 3 (G3) was 48, 15, and 5, respectively. PDC G2 or G3 was a risk factor for lymph node and distant metastases. Cases with PDC G2 or G3 had significantly poorer overall survival (OS) (p < 0.0001). In cases with curability (cur) A resection for stage II or III, disease-free survival (DFS) and OS were significantly poorer in cases with PDC G2 or G3 (p = 0.0022 and p = 0.0049, respectively). Conclusion: Analyses concerning PDC at the invasive front in cases with CRC invading beyond the muscle layer were performed. As the stage progressed, the proportion of cases with PDC G2 and G3 increased significantly. In cases with PDC G2 and G3, the DFS and OS were significantly poorer. These results suggest that PDC is a predictor of malignancy in patients with CRC invading to T3 or deeper.

Introduction

The incidence of colorectal cancer (CRC) is increasing in Japan, and it is the leading cause of cancer death among Japanese women 1). Despite curative surgery such as Japanese D3 lymph node dissection, deeper tumor invasion beyond the muscle layers leads to death in approximately 30% of cases 2). Although the TNM Classification of Malignant Tumors has mainly been used as a prognostic tool after curative surgery, it cannot determine which patients within the same TNM stage will relapse. Therefore, a grading system that is independent of TNM classification is needed.

Histological grading has been used to assess the malignant potential of CRC. The most widely accepted histological grading is based on the degree of tumor differentiation 3). Recently, Ueno et al. reported that the grading of poorly differentiated clusters (PDCs), which are defined as clusters of tumor cells without glandular formation at the invasive front, is reflected in the prognosis. PDCs were classified as grade 1 (G1), grade 2 (G2), and grade 3 (G3) based on their count 4). The prognosis of patients with CRC with PDC G3 is more unfavorable than that of patients with PDC G1 or G2 [5][6][7][8][9].

In this study, we classified patients who underwent resection of CRC tumors invading to T3 or deeper into their respective PDC grades to investigate the clinical significance. Here, we report the importance of PDCs in clinical practice for patients with CRC tumors invading to T3 or deeper.
Patients

One hundred ten patients with CRC who consecutively underwent surgery at Saiseikai Kurihashi Hospital between January 2015 and June 2016 were enrolled as the cohort for this study. From this cohort, 68 patients were chosen with a pathological diagnosis of T3 or deeper tumor invasion. Clinicopathological findings were described according to the Japanese Classification of Colorectal Carcinoma 10).

Definition of PDC

PDCs are cancerous clusters composed of five or more cells without glandular formation 4). To assess the grading of PDCs, the area with the most frequent PDCs was identified using a low-power magnification view. The number of PDCs was then counted using a ×20 objective lens. PDCs were classified according to Ueno's criteria. In brief, tumors with <5, 5-9, and ≥10 PDCs were defined as G1 (Fig. 1a), G2, and G3 (Fig. 1b), respectively. The grading of PDCs in this cohort was judged by one investigator (K. Y.) who was blinded to all clinical information.

Statistical analyses

Statistical analyses were performed using JMP Pro (version 13; SAS Institute Inc., Cary, NC, USA). The relationship between PDCs and clinicopathological findings was analyzed using the chi-squared test and Fisher's exact test. The overall survival (OS) and disease-free survival (DFS) were estimated using the Kaplan-Meier method. Significant differences were assessed using the log-rank test. P values less than 0.05 were considered statistically significant.

Ethical approval

The protocol of this retrospective study was assessed and approved by the institutional review board of Saiseikai Kurihashi Hospital (approval no. 79-6).

In this study (Table 1), the median age of the study cohort was 68.5 years (range, 40-90 years). Fifty-three patients were men and 15 were women. The primary tumor location was the proximal colon in 26 cases, the distal colon in 34 cases, and the rectum in eight cases. Lymph node metastasis was observed in 38 cases; 24 cases showed ly2,3 and 38 cases showed v2,3. Regarding stage distribution, 28 patients had stage II, 19 had stage IIIa, nine had stage IIIb, and 11 had stage IV, including 12 cases with hematological metastasis and four cases with peritoneal metastasis. The median number of PDCs was 3. Regarding PDC grading, 48 cases had G1, 15 cases had G2, and 5 cases had G3.

Relationship between PDC grade and clinicopathological factors (Table 2)

Lymph node metastasis, including N1, N2, and N3, was observed significantly more often in cases with PDC G2 or G3 (p = 0.0046). Moreover, high-grade lymphatic and venous invasion were found significantly more often in cases with PDC G2 or G3 (p = 0.0022 and p = 0.0206, respectively). A significant relationship between histology and PDC grade was also elucidated. Additionally, distant metastasis occurred significantly more often in cases with PDC G2 or G3 (p = 0.0029). Accordingly, the PDC grade increased significantly as the stage progressed (p = 0.0006).

Prognosis and PDC grade

The OS in the 68 cases was significantly separated among the cases with PDC G1, G2, and G3 (p < 0.0001) (Fig. 2a). As for the curative cases in stages II and III, the DFS and OS in cases with PDC G2 and G3 were significantly poorer than those in cases with PDC G1 (p = 0.0022 and p = 0.0049, respectively) (Fig. 2b, 2c).

Discussion

PDCs are often observed in the center of a tumor; however, their molecular characteristics are different from those of the PDCs at the invasive front 11).
Bertoni L et al. reported that the PDCs at the invasive front of CRC showed a similar expression pattern of two epithelial-mesenchymal transition (EMT)-related proteins 11). EMT is an early-stage driver of the cancer metastasis pathway, including cell migration and lymphovascular invasion 12). Therefore, we focused on the PDCs at the invasive front of CRC tumors invading to T3 or deeper to investigate their impact in such CRC, which we must treat.

Clinicopathological parameters were assessed relative to the PDC grade. Lymph node metastasis and PDC grade were significantly correlated. Additionally, the number of cases with PDC G2 or G3 increased as the grade of lymph node metastasis progressed. Among the cases with high-grade lymphatic invasion, the number of cases with PDC G2 or G3 increased. Similar results were reported using cohorts consisting of 239 patients with pT2-T3 CRC 13) and pT1 CRC 8).

Among the cases with distant metastasis, including both peritoneal and hematological metastasis, the number of cases with PDC G2 or G3 was significantly higher. The number of cases with PDC G2 or G3 also increased among cases with high-grade venous invasion. To metastasize to different organs, tumor cells must move and invade lymphatic or vascular vessels. Therefore, transformation into small clusters shaped like spheroids is needed. This phenomenon reflects the morphological change of EMT, because a key event in promoting stationary tumor cells to migrate and invade is the EMT program 12). These results suggest that PDC represents a morphological hallmark of EMT 14).

Moreover, the prognostic value of the PDC grades was assessed. In the total cohort of this study, the OS in cases with PDC G2 or G3 was significantly worse than that in those with PDC G1. In the curative cases, including stages II and III, the DFS and OS in cases with PDC G2 or G3 were also significantly worse. These results confirm that the number of PDCs is a prognostic factor.

Even though our investigation obtained reproducible results in terms of the clinical impact of PDCs, there were several limitations to this study. First, this was a retrospective study, which could result in selection bias, although consecutive cases were used. Second, the number of cases in this study was limited because it was performed in a single institution. Data from prospective studies with more cases will be required.

In conclusion, the number of cases with PDC G2 and G3 significantly increased as the stage progressed. In cases with PDC G2 and G3, the DFS and OS were significantly poor. Our reproducible results indicate that the number of PDCs at the invasive front of CRC has great clinical utility.
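The survival comparisons reported above (Kaplan-Meier estimation with a log-rank test, stratified by PDC grade) can be reproduced in outline with standard tools. The following sketch uses Python's lifelines package on entirely hypothetical data; column names and values are illustrative, not the study's actual dataset.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# hypothetical cohort: follow-up (months), event flag, PDC group (1 = G2/G3)
n = 68
df = pd.DataFrame({
    "months": rng.exponential(40, n).round(1),
    "event": rng.integers(0, 2, n),      # 1 = death/recurrence observed
    "pdc_high": rng.integers(0, 2, n),   # 1 = PDC G2/G3, 0 = PDC G1
})

kmf = KaplanMeierFitter()
for grp, label in [(0, "PDC G1"), (1, "PDC G2/G3")]:
    mask = df["pdc_high"] == grp
    kmf.fit(df.loc[mask, "months"], event_observed=df.loc[mask, "event"], label=label)
    kmf.plot_survival_function()  # requires matplotlib

res = logrank_test(
    df.loc[df.pdc_high == 0, "months"], df.loc[df.pdc_high == 1, "months"],
    event_observed_A=df.loc[df.pdc_high == 0, "event"],
    event_observed_B=df.loc[df.pdc_high == 1, "event"],
)
print(f"log-rank p = {res.p_value:.4f}")
```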
2020-10-28T19:08:22.844Z
2020-07-09T00:00:00.000
{ "year": 2020, "sha1": "b55b976a54277d58de7788243d4ec550ee5052ea", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/acrt/28/2/28_107/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "b980f442ed3564b66e38a6a070f71d0ef084e788", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Medicine" ] }
119482930
pes2o/s2orc
v3-fos-license
Discrete Physics and the Dirac Equation

We rewrite the 1+1 Dirac equation in light cone coordinates in two significant forms, and solve them exactly using the classical calculus of finite differences. The complex form yields ``Feynman's Checkerboard''---a weighted sum over lattice paths. The rational, real form can also be interpreted in terms of bit-strings.

Introduction

In this paper we give explicit solutions to the Dirac equation for 1+1 space-time. These solutions are valid for discrete physics [1] using the calculus of finite differences, and they have as limiting values solutions to the Dirac equation using infinitesimal calculus. We find that the discrete solutions can be directly interpreted in terms of sums over lattice paths in discrete space-time. We document the relationship of this lattice-path picture with the checkerboard model of Richard Feynman [2]. Here we see how his model leads directly to an exact solution to the Dirac equation in discrete physics and thence to an exact continuum solution by taking a limit. This simplifies previous approaches to the Feynman checkerboard [3,4]. We also interpret these solutions in terms of the choice sequences (bit-strings) [5,6] needed to build them.

The paper is organized as follows. Section 2 reviews the Dirac equation and expresses two versions (denoted RI, RII) in light cone coordinates. The two versions depend upon two distinct representations of the Dirac algebra. Section 3 reviews basic facts about the discrete calculus and gives the promised solutions to the Dirac equation. Section 4 interprets these solutions in terms of lattice paths, the Feynman checkerboard and bit-strings. Section 5 discusses the meaning of these results in the light of the relationship between continuum and discrete physics.

The 1+1 Dirac Equation in Light Cone Coordinates

We begin by recalling the usual form of the Dirac equation for one dimension of space and one dimension of time. This is

iℏ ∂ψ/∂t = Eψ,

where the energy operator E satisfies the dictates of special relativity and obeys the equation

E^2 = p^2 c^2 + m^2 c^4,

where m is the mass, c the speed of light and p the momentum. Dirac linearized this equation by setting E = cαp + βmc^2, where α and β are elements of an associative algebra (commuting with p, c, m). It then follows that

E^2 = c^2 α^2 p^2 + (αβ + βα) m c^3 p + β^2 m^2 c^4.

Thus whenever α^2 = β^2 = 1 and αβ + βα = 0, these conditions will be satisfied. Thus we have Dirac's equation in the form iℏ ∂ψ/∂t = (cαp + βmc^2)ψ. For our purposes it is most convenient to work in units where c = 1 and ℏ/m = 1. Then i ∂ψ/∂t = (αp/ℏ + β)ψ, and we can take p = (ℏ/i) ∂/∂x so that the equation is

i ∂ψ/∂t = (−iα ∂/∂x + β)ψ.

We shall be interested in 2 × 2 matrix representations of the Dirac algebra α^2 = β^2 = 1, αβ + βα = 0. In fact we shall study two specific representations of the algebra. We shall call these representations RI and RII. They are specified by the equations below:

RI: α = [[−1, 0], [0, 1]],  β = [[0, −i], [i, 0]];
RII: α = [[−1, 0], [0, 1]],  β = [[0, 1], [1, 0]].

We shall see that these representations lead to exact solutions to natural discretizations of the equations. We now make the translation to light cone coordinates. First consider RI. Essentially this trick for replacing the complex Dirac equation by a real equation was suggested to one of us by V. A. Karmanov [7]. Using this representation, the Dirac equation is

∂ψ/∂t = [[1, 0], [0, −1]] ∂ψ/∂x + [[0, −1], [1, 0]] ψ.

If ψ = (ψ1, ψ2)ᵀ, where ψ1 and ψ2 are real-valued functions of x and t, then we have

∂ψ1/∂t = ∂ψ1/∂x − ψ2,
∂ψ2/∂t = −∂ψ2/∂x + ψ1.

Now the light cone coordinates of a point (x, t) of space-time are given by

[r, ℓ] = [(t + x)/2, (t − x)/2],

so that ∂/∂r = ∂/∂t + ∂/∂x and ∂/∂ℓ = ∂/∂t − ∂/∂x, and hence the Dirac equation becomes

∂ψ1/∂ℓ = −ψ2,   ∂ψ2/∂r = ψ1.   (9)

Remark.
It is of interest to note that if we were to write ψ = ψ1 + iψ2, then the Dirac equation in light cone coordinates takes the form Dψ = iψ, where D(ψ1 + iψ2) = ∂ψ1/∂ℓ + i ∂ψ2/∂r. In any case, we shall refer to Eq. (9) as the RI Dirac equation.

Now, let us apply the same consideration to the second representation RII. The Dirac equation becomes

∂ψ/∂t = [[1, 0], [0, −1]] ∂ψ/∂x − i [[0, 1], [1, 0]] ψ.

Thus

∂ψ1/∂t = ∂ψ1/∂x − iψ2,
∂ψ2/∂t = −∂ψ2/∂x − iψ1.

Hence

∂ψ1/∂ℓ = −iψ2,   ∂ψ2/∂r = −iψ1.   (13)

We shall call (Eq. 13) the RII Dirac equation.

Discrete Calculus and Solutions to the Dirac Equation

Let ∆ denote a fixed positive increment. The discrete derivative of f with respect to ∆ is then defined by the equation

∇f(x) = [f(x + ∆) − f(x)] / ∆.

Consider the function

x^(n) = x(x − ∆)(x − 2∆) ⋯ (x − (n − 1)∆).

Lemma. ∇x^(n) = n x^(n−1).

Proof. ∇x^(n) = [(x + ∆)x(x − ∆)⋯(x − (n − 2)∆) − x(x − ∆)⋯(x − (n − 1)∆)]/∆ = x(x − ∆)⋯(x − (n − 2)∆)[(x + ∆) − (x − (n − 1)∆)]/∆ = n x^(n−1).

We are indebted to Eddie Grey for reminding us of this fact [8]. Note that as ∆ approaches zero, x^(n) approaches x^n, the usual nth power of x. Note also that x^(n)/(∆^n n!) is a (generalized) binomial coefficient. Thus

x^(n)/n! = ∆^n C^{x/∆}_n,   where C^a_n = a(a − 1)⋯(a − n + 1)/n!.

With this formalism in hand, we can express functions whose combination will yield solutions to discrete versions of the RI and RII Dirac equations described in the previous section. After describing these solutions, we shall interpret them as sums over lattice paths. To this end, let ∂_∆/∂r and ∂_∆/∂ℓ denote discrete partial derivatives with respect to the variables r and ℓ. Thus

∂_∆f/∂r = [f(r + ∆, ℓ) − f(r, ℓ)]/∆,   ∂_∆f/∂ℓ = [f(r, ℓ + ∆) − f(r, ℓ)]/∆.

Define the following functions of r and ℓ:

ψ^∆_0 = Σ_{k=0}^∞ (−1)^k (r^(k)/k!)(ℓ^(k)/k!),
ψ^∆_R = Σ_{k=0}^∞ (−1)^k (r^(k+1)/(k+1)!)(ℓ^(k)/k!),
ψ^∆_L = Σ_{k=0}^∞ (−1)^k (r^(k)/k!)(ℓ^(k+1)/(k+1)!).

Note that as ∆ → 0, these functions approach the limits:

ψ_0 = Σ (−1)^k (r^k/k!)(ℓ^k/k!),   ψ_R = Σ (−1)^k (r^{k+1}/(k+1)!)(ℓ^k/k!),   ψ_L = Σ (−1)^k (r^k/k!)(ℓ^{k+1}/(k+1)!).

Note also that if r/∆ and ℓ/∆ are positive integers, then ψ^∆_R, ψ^∆_L and ψ^∆_0 are finite sums, since x^(n)/n! = ∆^n C^{x/∆}_n will vanish for sufficiently large n when x/∆ is a sufficiently large integer. Now note the following identities about the derivatives of these functions:

∂_∆ψ_R/∂r = ψ_0,   ∂_∆ψ_L/∂ℓ = ψ_0,   ∂_∆ψ_0/∂r = −ψ_L,   ∂_∆ψ_0/∂ℓ = −ψ_R.

With ∆ = 0, these can be regarded as continuum derivatives. We can now produce solutions to both the RI and the RII Dirac equations. For RI, we shall require

∂ψ1/∂ℓ = −ψ2,   ∂ψ2/∂r = ψ1.

We shall omit writing the ∆'s in those equations, since all these calculations take the same form independent of the choice of ∆. Of course for finite ∆ and integral r/∆, ℓ/∆ these series produce discrete calculus solutions to the equations. Let

ψ1 = ψ_0 − ψ_L,   ψ2 = ψ_0 + ψ_R.

It follows immediately that this gives a solution to the RI Dirac equation. Similarly, if we let

ψ1 = ψ_0 − iψ_L,   ψ2 = ψ_0 − iψ_R,

then

∂ψ1/∂ℓ = −iψ2,   ∂ψ2/∂r = −iψ1.

This gives a solution to the RII Dirac equation. In the next section we consider the lattice path interpretations of these solutions.

Lattice Paths

In this section we interpret the discrete solutions of the Dirac equation given in the previous section in terms of counting lattice paths. As we have remarked in the previous section, the solutions are built from the functions ψ_0, ψ_R and ψ_L. These functions are finite sums when r/∆ and ℓ/∆ are positive integers, and we can rewrite them in the form

ψ_0 = Σ (−1)^k ∆^{2k} C^{r/∆}_k C^{ℓ/∆}_k,
ψ_R = Σ (−1)^k ∆^{2k+1} C^{r/∆}_{k+1} C^{ℓ/∆}_k,
ψ_L = Σ (−1)^k ∆^{2k+1} C^{r/∆}_k C^{ℓ/∆}_{k+1},

where C^n_k denotes the choice coefficient. We are thinking of r and ℓ as the light cone coordinates r = (1/2)(t + x), ℓ = (1/2)(t − x) (see Figure 1).

Clearly, the simplest way to think about this combinatorics is to take ∆ = 1. If we wish to think about the usual continuum limit, then we shall fix values of r and ℓ and choose ∆ small but such that r/∆ and ℓ/∆ are integers. The combinatorics of an r × ℓ rectangle with integers r and ℓ is no different in principle than the combinatorics of an (r/∆) × (ℓ/∆) rectangle with integers r/∆ and ℓ/∆. Accordingly, we shall take ∆ = 1 for the rest of this discussion, and then make occasional comments to connect this with the general case.

We can count RL corners by the point on the L axis where the path increments. We can count LR corners by the point on the R axis where the path increments. A lattice path is then determined by a choice of points from the L and R axes.
More specifically, there are paths that begin in R (go right first) and end in L, begin in L and end in R, begin in L and end in L, and begin in R and end in R. We call these paths of type RL, LR, LL and RR respectively. (Note that an RL corner is a two-step path of type RL and that an LR corner is a two-step path of type LR.) It is easy to see that an RL path involves k points from the R axis and k + 1 points from the L axis, an LR path involves k + 1 points from the R axis and k points from the L axis, while an LL or RR path involves the choice of k points from each axis. See Figure 3 for examples.

Figure 3: Showing by example that C^r_k C^ℓ_{k+1} enumerates RL paths and C^r_k C^ℓ_k enumerates RR paths.

As a consequence, we see that if XY denotes the number of paths from A to B of type XY, then

RL = Σ_k C^r_k C^ℓ_{k+1},   LR = Σ_k C^r_{k+1} C^ℓ_k,   LL = RR = Σ_k C^r_k C^ℓ_k.

We see, therefore, that our functions ψ_0, ψ_R and ψ_L can be regarded as weighted sums over these different types of lattice path. In fact, we can re-interpret (−1)^k in terms of the number of corners (choices) in the paths: a path of type RR or LL determined by k points on each axis has c = 2k corners, while a path of type RL or LR has c = 2k + 1 corners. Hence if N_c(XY) denotes the number of paths with c corners of type XY then

ψ_0 = Σ_c (−1)^{c/2} N_c(RR),   ψ_L = Σ_c (−1)^{(c−1)/2} N_c(RL),   ψ_R = Σ_c (−1)^{(c−1)/2} N_c(LR).

From the point of view of the solution to the RI Dirac equation (ψ1 = ψ_0 − ψ_L), it is an interesting puzzle in discrete physics to understand the nature of the negative case counting that is entailed in the solution. (An attempt has been made by one of us to interpret this in terms of spin or particle number conservation in the presence of random electromagnetic fluctuations producing the paths [9].) The signs do not appear to come from local considerations along the path.

The RII Dirac solution gives a different point of view. Here ψ1 = ψ_0 − iψ_L. Taking the hint given by the appearance of i, we note that i^{2k} = (−1)^k while i^{2k+1} = (−1)^k i. Thus

ψ1 = Σ_c (−i)^c N_c(R),   ψ2 = Σ_c (−i)^c N_c(L),

where N_c(R) denotes the number of paths that start to the right and have c crossings, while N_c(L) denotes the number of paths that start to the left and have c crossings. This shows that our solution in the RII case is precisely in line with the amplitudes described by Feynman and Hibbs (Ref. [2]) for their checkerboard model of the Dirac propagator. See also H. A. Gersch [10] and Ref. [3]; the relationship of the checkerboard model to bit-strings is discussed elsewhere [11,12].

A choice sequence such as RLRRLRRRRL has "corners" wherever R meets L or L meets R. We have characterized these corners into two types, RL and LR. Corners in the bit-string sequence alternate from RL to LR and from LR to RL. The moral of Feynman's (−i)^c, where c is the number of corners, is that this alternation should be regarded as an elementary rotation.

One may wonder why this simple combinatorics occurs at a level so close to the making of one distinction, and yet fully implicates the solutions to the Dirac equation in continuum 1+1 physics?! We cannot begin to answer such a question except with another question: If you believe that simple combinatorial principles underlie not only physics and physical law, but the generation of space-time herself, then these principles remain to be discovered. What are they? What are these principles? It is no surprise to the mathematician that i ends up as central to the quest. For i is a strange amphibian, not only neither 1 nor −1; i is neither discrete nor continuous, not algebra, not geometry, but a communicator of both. In this essay we have seen the beginning of a true connection of discrete and continuum physics. In the continuum limit the solutions become

ψ_L = Σ_{k=0}^∞ (−1)^k (r^k/k!)(ℓ^{k+1}/(k+1)!),   ψ_R = Σ_{k=0}^∞ (−1)^k (r^{k+1}/(k+1)!)(ℓ^k/k!).
Here we have a glimpse of the possibilities inherent in a complete story of discrete physics and its continuum limit. The continuum limit will be seen as a summary of the real physics. It is a way to view, through the glass darkly, the crystalline reality of simple quantum choice.
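As a concrete check on the discrete solutions above, the following short Python sketch (our illustration, not part of the original paper) evaluates ψ_0, ψ_R, ψ_L as finite binomial sums at ∆ = 1 and verifies, with exact integer arithmetic, both the derivative identities and the RI/RII Dirac equations on a small block of lattice points.

```python
from math import comb

def psi0(r, l):
    return sum((-1)**k * comb(r, k) * comb(l, k) for k in range(min(r, l) + 1))

def psiR(r, l):
    return sum((-1)**k * comb(r, k + 1) * comb(l, k) for k in range(l + 1))

def psiL(r, l):
    return psiR(l, r)  # psiL(r, l) swaps the roles of r and l in psiR

def d_r(f, r, l):  # discrete partial derivative in r, with Delta = 1
    return f(r + 1, l) - f(r, l)

def d_l(f, r, l):  # discrete partial derivative in l, with Delta = 1
    return f(r, l + 1) - f(r, l)

for r in range(8):
    for l in range(8):
        # building-block identities for the discrete derivatives
        assert d_r(psiR, r, l) == psi0(r, l)
        assert d_l(psiL, r, l) == psi0(r, l)
        assert d_r(psi0, r, l) == -psiL(r, l)
        assert d_l(psi0, r, l) == -psiR(r, l)

        # RI solution: psi1 = psi0 - psiL, psi2 = psi0 + psiR
        p1 = lambda a, b: psi0(a, b) - psiL(a, b)
        p2 = lambda a, b: psi0(a, b) + psiR(a, b)
        assert d_l(p1, r, l) == -p2(r, l) and d_r(p2, r, l) == p1(r, l)

        # RII solution: psi1 = psi0 - i*psiL, psi2 = psi0 - i*psiR
        q1 = lambda a, b: psi0(a, b) - 1j * psiL(a, b)
        q2 = lambda a, b: psi0(a, b) - 1j * psiR(a, b)
        assert d_l(q1, r, l) == -1j * q2(r, l) and d_r(q2, r, l) == -1j * q1(r, l)

print("all discrete Dirac identities verified")
```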
2017-08-08T02:17:34.963Z
1996-03-29T00:00:00.000
{ "year": 1996, "sha1": "801ff1bf0e340024f381805d61b9a9529d6c8c52", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-th/9603202", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "36efe156c34c2ec9c6115ab37cd6ec5c32bc7521", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
5877024
pes2o/s2orc
v3-fos-license
The rotation-magnetic field relation

Today, the generation of magnetic fields in solar-type stars and its relation to activity and rotation can coherently be explained, although it is certainly not understood in its entirety. Rotation facilitates the generation of magnetic flux that couples to the stellar wind, slowing down the star. There are still many open questions, particularly at early phases (young age) and at very low mass. It is vexing that rotational braking becomes inefficient at the threshold to fully convective interiors, although no threshold in magnetic activity is seen, and the generation of large-scale magnetic fields is still possible for fully convective stars. This article briefly outlines our current understanding of the rotation-magnetic field relation.

INTRODUCTION

The rotational evolution of stars is the result of the complex interaction of several fundamental processes. First, the molecular cloud contracts, conserving its initial angular momentum and spinning up the central object. Angular momentum can be stored in a disc, which may brake the rotation of the central object. After the disc is dissipated, the star can contract further, reaching its highest rotation rate after several tens of millions of years. Solar-type stars, i.e., stars with convective envelopes, start generating magnetic fields that couple to the stellar wind. The interaction between charged particles in the wind and the magnetic field generates a torque, braking the star's rotation. In the case of the Sun, braking has led to a rotation rate of about 1 revolution every month.

According to the rotation-activity relation, rapidly rotating stars produce strong magnetic fields, generating a strong magnetic torque that brakes the star. This leads to slower rotation, which in turn weakens the magnetic field production, so that braking weakens, too. At young ages in open clusters, we observe rapidly rotating, very active stars, while the (single) field stars generally are slowly rotating and only weakly active. This means that in principle rotation and activity can tell us about the age of a star [e.g., 2,3].

The connection between rotation, stellar wind, magnetic fields, and magnetic activity is reviewed in this splinter session summary. First, we give an overview of the current picture of rotation in both clusters and the field, i.e., in young and old stars. Next, we discuss results from direct and indirect magnetic field measurements and their connection to stellar wind theory. In the last part, we give a summary of the theoretical work on magnetic field generation through stellar dynamos. Low-mass stars, and in particular the regime where stars become completely convective, currently present a rather puzzling picture of the connection between magnetism, activity, and rotation. Thus, low-mass stars are the focus of our summary.

Young objects

The net effect of early stellar evolution and disc-coupling is that a star has approximately constant angular velocity for the first few Myr while it is still coupled to the disc. Then it spins up rapidly once the disc dissipates, reaching maximum rotational velocity close to when it arrives on the zero age main sequence, followed by a gradual decay due to the stellar wind, lasting for the remainder of the main sequence lifetime. Rotational evolution has traditionally been constrained by using measurements in open clusters to provide "snapshots" during the evolution. Large samples of these are now available covering ∼ 1 − 650 Myr (see Fig. 1).
It is becoming increasingly clear that the evolution is strongly mass and rotation rate dependent, and this has important consequences for the nature of the mechanisms governing the angular momentum losses. In particular, unlike solar-type dwarfs, low-mass dwarfs spin up much more rapidly, appear to experience no significant angular momentum losses on the pre-main sequence, and suffer much weaker losses due to winds on the main sequence.

Field stars

Early stars with no relevant convective envelopes cannot generate surface magnetic fields; they rotate rapidly during their entire lifetime. Solar-type stars with convective envelopes are strongly braked, as seen above. This is consistent with observations of coronal emission, chromospheric emission, and latitudinal differential rotation, which set in exactly where stars are believed to form convective envelopes, i.e., around spectral type A7-F0. Wind braking becomes very efficient around late-F type stars, and the Sun, for example, has slowed down to less than 2 km s^−1 during its lifetime. Field stars of spectral type K and early-M typically rotate very slowly as well, although in their youth braking was probably somewhat weaker (see above). Virtually all single field K and M dwarfs, including the early-M classes M0-M3, are rotating at velocities slower than about 3 km s^−1. Around spectral type M3.5, however, a dramatic increase in rotation rate is observed. This threshold coincides with the mass range where stars become fully convective. It appears that for some reason rotational braking becomes weaker at this boundary.

Figure 1. Rotation measurements in open clusters covering ∼ 1 − 650 Myr. Distance and reddening are from the literature, using I-band absolute magnitudes and NextGen stellar models of Baraffe et al. [1]. For the (numerous) appropriate references, please see Irwin [13].

Fig. 2 shows a compilation of (projected) rotational velocities v sin i in objects of spectral classes M-T. The sudden increase of rotation rate is evident at spectral type ∼M3.5. Another important result is that braking does not completely vanish at least until spectral class L0. In the figure, members of the (statistically) young population are shown in blue and old stars in red, and the two subdwarfs shown with green squares are probably very old. Young objects are found predominantly in the upper part of the plot while the old sample shows slower rotation. This indicates that rotational braking still works in ultra-cool dwarfs. Solid lines in Fig. 2 show evolutionary tracks according to a modified braking law of the form

dJ/dt = −K Ω^3 for Ω < ω_crit,   dJ/dt = −K Ω ω_crit^2 for Ω ≥ ω_crit.   (1)

Here, Kω_crit^2 was scaled according to the right panel in Fig. 2; braking is weaker at lower temperature [see 25]. A viable explanation for this may be the weaker coupling of magnetic field lines (which still exist) to the atmosphere, which is becoming more and more neutral. Rotational braking in fully convective field stars and brown dwarfs appears to be so weak that after a few billion years the distribution of rotational velocities can tell a lot about their angular momentum evolution and the underlying processes, magnetic fields and (sub)stellar winds.

Figure 2. Left: compilation of projected rotational velocities v sin i, with data from [24,25], Delfosse et al. [9], Mohanty & Basri [21] (blue: kinematically young, red: kinematically old); triangles from Zapatero Osorio et al. [33]. Magenta stars indicate the three members of LHS 1070 [26], filled green squares the two subdwarfs 2MASS 0532+8246 and LSR 1610−0040 [22]. Solid lines mark evolutionary tracks for objects of 0.1, 0.09, 0.08, and 0.07 M⊙, dashed lines mark ages of 1 and 10 Gyr (from upper left to lower right).
Right: scaling of the magnetic wind braking with temperature in Eq. (1).

MAGNETIC FIELDS

The discovery of X-ray emission from the brown dwarf LP944-20 [28] provided the first direct demonstration of magnetic activity in the substellar regime. Subsequent X-ray, Hα, and radio observations revealed that low-mass stars and even brown dwarfs ubiquitously generate magnetic activity. No break is observed at the boundary to full convection, but chromospheric activity weakens after spectral type about M7 [21,32,29,25], an effect that may be due to decreasing fractional ionization [20]. Quiescent activity and flaring are still observed in even cooler objects [e.g., 11,17,25,27]. About 10% of ultracool dwarfs in the range M7-L4 produce both quiescent and flaring radio emission, with inferred field strengths of 0.1-3 kG and covering fractions of order unity [5], and the emission likely correlates with rotation velocity [7]. At the same time, the tight radio/X-ray correlation that exists in a wide range of stars (including the Sun) is strongly violated beyond M7, roughly the same regime where chromospheric and coronal emission become weaker. Equally important, several ultracool dwarfs have been observed to produce periodic radio emission and Hα emission. This emission may carry information about the field topology. In general, radio observations suggest that a low-multipole, large-scale field configuration is the best explanation for the observed variability [6,12].

Activity indicators like X-ray, Hα, and radio emission provide strong constraints on the magnetic flux, depending on the mechanism that generates the observed emission. Direct measurements of magnetic fields in M dwarfs through Zeeman splitting of atomic lines were carried out by Johns-Krull & Valenti [14]; results from a re-analysis with a multi-component fit are given in Johns-Krull & Valenti [15]. In late-M dwarfs, however, atomic lines become rare and more and more blended, so that molecular Zeeman diagnostics would be useful, and Valenti & Johns-Krull [30] suggested that FeH could be a good indicator of magnetic flux. Reiners & Basri [23,24] developed a method to measure magnetic flux through FeH and did so in a sample of M3-M9 dwarfs. They found that the relation between magnetic fields and (chromospheric) activity is intact through the entire M spectral range; the most active M stars exhibit magnetic fields on the order of a few kG. Thus, the lack of rotational braking in mid- to late-M dwarfs cannot be a consequence of weaker magnetic fields. Fully convective stars obviously find a way to efficiently generate magnetic fields.

MAGNETIC FIELDS AND WIND BRAKING

How does the magnetic field connect to rotation? When a rotating star drives an outflow that is well coupled to the stellar magnetic field, the wind and magnetic field conspire to extract angular momentum from the star. This happens because, as wind material leaves the stellar surface and tries to conserve its own angular momentum, it lags behind the star in a rotational sense. Thus, the magnetic field connecting the stellar surface to the outflowing wind is bent backwards with respect to the stellar rotation. This imparts a torque, which acts to give "extra" specific angular momentum to the wind, removing it from the star. A method for calculating this stellar wind torque dates back to Weber & Davis [31] and Mestel [19], and magnetic stellar wind theory is still an active research topic.
A generic result is that the torque can be written τ = Ṁ_w Ω_* r_A^2, where Ṁ_w is the mass loss rate in the wind, Ω_* is the angular spin rate of the star, and r_A is sometimes called the "magnetic lever arm" in the flow. In a one-dimensional flow, r_A is the Alfvén radius, the radial location where the wind flow speed equals the magnetic Alfvén wave speed. We can quantify the efficiency of angular momentum extraction by dividing the stellar angular momentum by τ, which gives a characteristic spin-down time

t_spin = k^2 (R_*/r_A)^2 (M_*/Ṁ_w),

where k is the "mean radius of gyration" (in main sequence stars, typically k^2 ∼ 0.1) and R_* and M_* are the stellar radius and mass. Note that the first two terms on the right-hand side are dimensionless. The last term has the units of time and represents the mass loss time for the star. In the solar wind, for example, r_A/R_* ∼ 10 [e.g., 16]. Thus the angular momentum loss in magnetic stellar winds can be very efficient, in the sense that the spin-down time can be much shorter than the mass loss time (a rough numerical illustration is sketched at the end of this article). This is an elegant result, but the difficulty lies in calculating the effective r_A for an arbitrary star and a realistic (3-dimensional) wind. Our understanding of the observed evolution of stellar spins depends on this calculation of the torque. Recent work by Matt & Pudritz [18, and see contribution in these proceedings] emphasizes that, while there is still no adequate theory for predicting how the wind torque depends on stellar mass and age, significant progress can be made with the use of numerical simulations.

STELLAR DYNAMOS

Overview

The solar activity cycle is believed to be the result of a dynamo process either in the convection zone or in the stably stratified layer beneath it. The original model was an αΩ dynamo in the convection zone generating a predominantly toroidal and axisymmetric magnetic field. Problems with flux storage and the internal rotation pattern found by helioseismology led to a revised model where the dynamo is located at the bottom of the convection zone. That sort of dynamo, however, produces too many toroidal field belts and too short cycle periods. The advection-dominated dynamo is an extension of the αΩ dynamo in which a large-scale meridional flow advects the magnetic field towards the poles at the surface and towards the equator at the bottom of the convection zone. The butterfly diagram is now the result of the meridional flow rather than of a dynamo wave, and the cycle time depends on the flow as much as on the dynamo number.

For stars there is no clear picture yet. One would expect stars similar to the Sun to show the same type of activity, but Doppler imaging frequently finds large spots at high latitudes, and both solar-type and anti-solar cycles have been found in stellar butterfly diagrams from photometry. Large polar spots can be explained as the consequence of flux tube instability in the tachocline, while anti-solar butterfly diagrams could indicate a meridional flow pattern opposite to that of the Sun.

Main sequence stars with masses below ∼ 0.3 M⊙ are fully convective, ruling out any dynamo mechanism involving the tachocline, but some sort of dynamo must still be at work. The α^2 dynamo, where the α effect alone generates the field, is a possible mechanism. It generates completely non-axisymmetric fields that do not oscillate, so that monitoring of active low-mass stars will provide an important step towards understanding of the dynamo in these stars.
At the moment, observations support neither the αΩ nor the α^2 dynamo: AB Dor shows pronounced differential rotation but a strongly non-axisymmetric surface field, while V374 Peg has an axisymmetric dipole geometry despite nearly rigid surface rotation [10].

Fully convective stars

Particularly puzzling for dynamo theorists has been the finding that fully convective M dwarfs can host large-scale magnetic fields, even in the absence of any apparent differential rotation. Browning [8] discussed 3-D simulations of convection and dynamo action in fully convective stars, with an eye toward answering two main questions: first, how large-scale fields might be generated without a "tachocline" of shear, and second, whether differential rotation is always absent in such stars or might be maintained in certain circumstances. In this model [8], convection acted effectively as a dynamo, quickly building magnetic fields that (in stars rotating at the solar angular velocity) were approximately in equipartition with the turbulent velocity field. More rapidly rotating stars built somewhat stronger fields, whereas slower rotators hosted weaker fields. Although differential rotation was established in hydrodynamic simulations, the strong magnetic fields realized in most MHD cases acted to strongly quench those angular velocity contrasts. Despite the absence of any significant shear, the magnetic fields realized in the simulations had structure on a broad range of spatial scales, and included a substantial large-scale component. The large-scale field generation is attributed partly to the strong influence of rotation upon the slowly overturning flows realized in M stars.

SUMMARY

Our current picture of magnetic field generation, rotation, and stellar activity may be summarized as follows:

1. Rotation rates are available for a wide range of masses and ages. Measurements of projected rotation velocities extend far into the brown dwarf regime, but direct measurements of rotational periods are lacking at very low masses.

2. We observe a sharp break in rotation around the threshold where stars become fully convective. This probably indicates a breakdown of wind braking.

3. Magnetic field measurements as well as activity tracers like X-rays, Hα, and radio emission show no obvious break at the convection boundary. However, around spectral type M7 normalized activity strongly weakens and the relation between radio and X-ray emission breaks down.

4. Apparently, very low mass stars can have strong large-scale magnetic fields yet only little wind braking. This remains an unresolved problem.

5. A key to understanding spindown is a theoretical understanding of wind braking. However, it is still a challenge for magnetic stellar wind theory to reliably calculate the wind torque for a range of stellar parameters. Furthermore, the wind torque is affected by the mass loss rate, so it is very important that we obtain measurements of mass loss rates and continue to improve mass loss theory.

6. Efforts to theoretically understand magnetic field generation have evolved from the solar dynamo to the larger class of stellar dynamos, in particular to fully convective ones in the absence of a tachocline. First models successfully reproduce magnetic field generation, but it is certainly still a long way to understanding magnetic dynamos in very cool stars.
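To put rough numbers on the spin-down time t_spin = k^2 (R_*/r_A)^2 (M_*/Ṁ_w) quoted in the wind-braking section, here is a one-line estimate in Python; the solar-like input values are illustrative assumptions, not measurements from this article.

```python
k2 = 0.1                   # mean radius of gyration squared (main-sequence value)
R_over_rA = 1.0 / 10.0     # R*/r_A; the solar wind has r_A/R* ~ 10
M_over_Mdot = 1.0 / 2e-14  # mass-loss time M*/Mdot_w in years (solar: ~2e-14 Msun/yr)

t_spin = k2 * R_over_rA**2 * M_over_Mdot
print(f"t_spin ~ {t_spin:.1e} yr")
# ~5e10 yr: about 1000x shorter than the ~5e13 yr mass-loss time,
# illustrating how efficiently a magnetized wind extracts angular momentum
```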
2008-09-26T07:10:20.000Z
2008-09-26T00:00:00.000
{ "year": 2008, "sha1": "6315514acd95fafdc4749df77a696289cc8d5fce", "oa_license": null, "oa_url": "https://authors.library.caltech.edu/43669/1/1.3099099.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "6315514acd95fafdc4749df77a696289cc8d5fce", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
202719595
pes2o/s2orc
v3-fos-license
The Impossibility of Efficient Quantum Weak Coin-Flipping How can two parties with competing interests carry out a fair coin flip, using only a noiseless quantum channel? This problem (quantum weak coin-flipping) was formalized more than 15 years ago, and, despite some phenomenal theoretical progress, practical quantum coin-flipping protocols with vanishing bias have proved hard to find. In the current work we show that there is a reason that practical weak quantum coin-flipping is difficult: any quantum weak coin-flipping protocol with bias $\epsilon$ must use at least $\exp ( \Omega (1/\sqrt{\epsilon} ))$ rounds of communication. This is a large improvement over the previous best known lower bound of $\Omega ( \log \log (1/\epsilon ))$ due to Ambainis from 2004. Our proof is based on a theoretical construction (the two-variable profile function) which may find further applications. Introduction Suppose that Alice and Bob are two cooperating but mutually mistrustful parties, and they must make a unified decision between two choices (X and Y ). Alice wants choice X, and Bob wants choice Y . However, neither of them will gain if they do not agree on their decision. How can the decision be made fairly? A natural solution would be for Alice and Bob to have a trusted third party (Charlie) flip a coin and report the result to both Alice and Bob. But, can coin-flipping be done in absence of any trusted third party or common source of randomness? In this paper we will be concerned with the question of whether coin-flipping can be done if Alice and Bob share a two-way noiseless quantum channel. A standard way to model a protocol in this scenario is like so (see Figure 1). Let n be a positive integer. 1. Alice possesses a quantum system A, which she controls, and Bob possesses a quantum system B which he controls. 2. There is an additional quantum system M, initially possessed by Alice, which stores quantum messages exchanged by Alice and Bob during the protocol. 3. If i is odd, then on the ith round of the protocol, Alice performs a prescribed joint quantum operation on A and M, and then sends M across the quantum channel to Bob. 4. If i is even, then on the ith round of the protocol, Bob performs a prescribed joint quantum operation on B and M and then sends M across the channel to Alice. 5. After the nth round of communication, Alice performs a binary measurement on A and reports the result as a bit (a), and Bob performs a binary measurement on B and reports the result as a bit (b). It is presumed that an "honest" party will carry out their operations and measurements exactly as prescribed; however, a dishonest party may perform arbitrary manipulations of the quantum systems that they possess, and may perform any final measurement that they choose at the end of the protocol. (In particular, there is no constraint on the computational resources of either party. Security proofs in this setting are based on physical assumptions only. ) We say that such a protocol is a weak coin-flipping protocol with bias if the following hold: Figure 1: The first two rounds of a weak quantum coin-flipping protocol. 1. If Alice and Bob both perform honestly, then P(a = b = 0) is exactly 1 2 and P(a = b = 1) is exactly 1 2 . 2. If Alice behaves dishonestly and Bob behaves honestly, then P(b = 0) ≤ 1 2 + . 3. If Bob behaves dishonestly and Alice behaves honestly, then P(a = 1) ≤ 1 2 + . The assumption is that Alice wishes for the outcome to be 0, and that Bob wishes for the outcome to be 1. 
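The round-by-round message-passing structure just described can be sketched in a few lines of numpy. The snippet below is purely illustrative: it uses qubit-sized systems and Haar-random unitaries in place of the prescribed honest operations and final measurements, simply to show how a state on A ⊗ M ⊗ B is updated as M is passed back and forth; it is not a protocol satisfying the fairness conditions above.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d):
    """Random unitary from the Haar measure via QR of a complex Gaussian."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

dA = dM = dB = 2
psi = np.zeros((dA, dM, dB), dtype=complex)
psi[0, 0, 0] = 1.0                       # assumed initial product state

n = 4                                    # number of rounds (illustrative)
for i in range(1, n + 1):
    if i % 2 == 1:                       # odd round: Alice acts on A (x) M
        U = haar_unitary(dA * dM)
        psi = (U @ psi.reshape(dA * dM, dB)).reshape(dA, dM, dB)
    else:                                # even round: Bob acts on M (x) B
        V = haar_unitary(dM * dB)
        psi = (psi.reshape(dA, dM * dB) @ V.T).reshape(dA, dM, dB)

# Computational-basis measurements on A and B stand in for the protocol's
# prescribed binary measurements.
probs = np.abs(psi) ** 2
p_a = probs.sum(axis=(1, 2))             # P(a = 0), P(a = 1)
p_b = probs.sum(axis=(0, 1))             # P(b = 0), P(b = 1)
print("P(a):", np.round(p_a, 3), " P(b):", np.round(p_b, 3))
```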
These conditions assert that Alice cannot bias the outcome by more than in her favor, and Bob cannot bias the result by more than in his favor. (A strong coin-flipping protocol with bias is one which guarantees that a dishonest party cannot bias the result of the coin flip by more than in either direction. Strong coin-flipping with vanishing bias is impossible by an elementary argument attributed to A. Kitaev -see [11].) Quantum coin-flipping was formalized as early as 1998 ( [14]), and a series of works proved coin-flipping protocols with progressive improvements in the bias. Aharonov et al. [2] proved bias 0.42. Spekkens and Rudolph, and independently Ambainis, proved successive results [19,20,4] which brought the bias down to √ 2−1 2 ≈ 0.207. These results involved a small constant number of rounds of communication. Mochon [16,17] then introduced a family of quantum weak coin-flipping protocols which approach bias 1/6 ≈ 0.166, with the number of communication rounds tending to infinity. Finally, in a landmark work in 2007, Mochon [18] showed the existence of a family of weak coinflipping protocols with bias tending to zero. Mochon exploited the idea of point games (a concept also attributed to A. Kitaev) to achieve this result. Mochon's existence proof was later simplified, re-written and published by Aharonov et al. [1]. Then, in recent work [5], Arora et al. introduced an algorithm which effectively constructs the protocols in the family whose existence was proven by Mochon. Following this phenomenal progress, at least one major loose end remains. The number of communication rounds used in the protocols in [18,1] was only shown to be (1/ ) O(1/ ) . This asymptotic quantity is hardly efficient or practical. Meanwhile, the best known lower bound on the number of communication rounds [4] is Ω(log log(1/ )), leaving a vast range of uncertainty about the optimal resources needed to achieve vanishing bias. How many rounds of quantum communication are needed to achieve a particular bias ? Note, for example, that in a different coin-flipping setting considered in [10] (classical communication with a computational hardness assumption), there is a polynomial relationship between the bias and the number of rounds of communication. Could a similar relationship exist for quantum weak coin-flipping? Summary of result In the current paper, we prove the following lower bound on the number of communication rounds for quantum weak coin-flipping (see Theorem 8.2): Theorem 1.1. Let C be an n-round quantum weak coin-flipping protocol with bias . Then, This result shows that, at least in the standard model, practical quantum weak coin-flipping with vanishing bias is not feasible. The proof of this result builds on previous techniques, including the concept of a valid time-independent point game. A valid TIPG is a pair of real-valued functions m1, m2 on R ≥0 × R ≥0 that satisfy a certain infinite set of linear constraints (see subsections 3.2-3.3). It is known that any weak coin-flipping protocol determines a valid TIPG, and vice versa. This correspondence was used to prove the family of protocols with vanishing bias in [18]. Here I prove a negative result: any TIPG obtained from a weak coin-flipping protocol with small bias must have very large 1-norm -i.e., m1 1 + m2 1 must be very large as a function of . Since there is a relationship between the number of communication rounds of a protocol and the 1-norms of its associated time-independent point games, this implies the main result. 
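To get a feel for the gap between the new bound and Ambainis's, the following back-of-the-envelope comparison evaluates both expressions with all hidden constants set to 1 — an assumption made only for illustration, since the theorem specifies the constants only asymptotically.

```python
import math

# Compare exp(1/sqrt(eps)) with log log (1/eps), constants taken as 1.
for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    new = math.exp(1.0 / math.sqrt(eps))
    old = math.log2(math.log2(1.0 / eps))
    print(f"eps = {eps:.0e}:  exp(1/sqrt(eps)) ~ {new:.3e},  "
          f"loglog(1/eps) ~ {old:.2f}")
```

Already at bias ε = 10⁻⁴ the new lower bound exceeds 10⁴³ rounds, whereas the earlier bound demands only a handful.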
Ambainis's original bound [4] of Ω(log log(1/ )) was based on a more direct study of quantum weak coin-flipping protocols: he performed an inductive argument, using the fidelity function, on the intermediate states arising in the protocol. This approach appears to be fairly different from the one in this paper, although it may be possible to relate the two. Theorem 1.1 is a further step in mapping out the full range of cryptographic possibilities in a twoparty quantum setting (see [6,21] for surveys on this topic). A number of other negative results are known: secure two-party computation (under certain definitions) is impossible [12,7], and strong coinflipping [11,8], bit commitment [13,15,8], and oblivious transfer [12,9] are all impossible except with fixed positive bias. This paper shows that the case of quantum weak coin-flipping is different: it can be achieved with arbitrarily small bias, but it is impossible to do so in polynomial time. Certainly, this is not the end of the story. The model for quantum weak coin flipping makes a number of assumptions, including that the players exchange information in discrete stages, and that they are completely unconstrained in their ability to manipulate any quantum systems that are not under the control of the other player. This impossibility result gives us additional motivation to study coin-flipping in other settings, including relativistic models. Outline Sections 2-3 of this paper cover preliminaries and known material, and then new contributions appear in sections 4-8. In section 4, I define the profile of a time-independent point game, which is a twovariable function associated to a time-independent point game that distills some of its most relevant information. Sections 5-6 prove some mathematical lemmas, including a key result on the behavior of highly concentrated rational functions (Proposition 5.3). Section 7 proves the result about the 1norm of a time-independent point game using tools from the previous sections. Finally, section 8 proves Theorem 1.1. I conclude by noting some further directions in section 9. Acknowledgements I am grateful to Aarthi Sundaram (my co-author on a companion project) for many interesting discussions about the coin-flipping literature which helped to seed some of the ideas contained here. This paper also owes a large debt to Alexandre Eremenko, who showed me the complex analysis method that I used to prove Proposition 5.3. Thanks to Scott Wolpert, Gorjan Alagic, and Yi-Kai Liu for their help with this project, and to Michael Newman for giving me an introduction to [1] some years ago. This work is a contribution of the U. S. National Institute of Standards and Technology (NIST), and is not subject to copyright in the United States. Any mention of commercial products is for information purposes only, and does not imply endorsement by NIST. Preliminaries Let R denote the set of real numbers, and let R ≥0 denote the set of nonnegative real numbers. If a, b are real numbers with a ≤ b, then [a, b] denotes the closed interval {x | a ≤ x ≤ b}, (a, b) denotes the open interval {x | a < x < b}, and [a, b) and (a, b] are similarly defined. We let ∞ denote infinity, and define intervals such as [a, ∞] ⊆ R ∪ {∞} in the obvious way. If A is a set and B ⊆ A is a subset, then A B denotes the set of all elements of A that are not in B. Our notation follows previous work [18,1,5] in part. Since we will work extensively with functions on R ≥0 that have finite support, we make the following definitions. Definition 2.1. 
For any x ∈ R ≥0 , let x denote the function from R ≥0 to R which maps x to 1 and is zero elsewhere. For any x, y ∈ R ≥0 , let x, y denote the function from R ≥0 × R ≥0 to R which maps (x, y) to 1 and is zero elsewhere. If f is a real-valued function on a set S, define f + : S → R and f − : S → R by Note that f = f+ − f−. Let Supp f denote the support of f (i.e., the set of points in S on which f is nonzero). Let f denote the function f (x, y) = f (y, x). When f is a function with finite support, then (even if the set S is not countable) we will write s∈S f (s) to mean the sum of f (s) over all points in the support of f . The expression f 1 denotes the sum s∈S |f (s)| (that is, the 1-norm of f ). The function log: R ≥0 → R ∪ {−∞} denotes the logarithm in base 2. We use the term universal function to mean a function that is not dependent on any variables other than its input variables. Thus, even if we refer to a universal function after some variables have been quantified (e.g., "for all c, ...") it is understood that the function has no implicit dependencies on those variables. We will use boldface Roman letters (A, B, . . .) for universal functions. When we use asymptotic big-O notation, we may use O(u) as a set (e.g., "there exists F(u) ∈ O(u) such that ...") or as a placeholder for a function (e.g., "x = y + O(z)"). When a big-O expression is used as a placeholder, it is understood that it also represents a universal function with no implicit dependencies. If Q is an event, then we write P[Q] for the probability of Q, and if X is a real-valued random variable, then we write E[X] for the expectation of X. A stochastic map from a set A to a set B is an indexed set of nonnegative real values If A and B are Hilbert spaces, then we may write AB for the tensor product A ⊗ B. Complex analysis We briefly cover some complex analysis tools that will be important in section 5. The reader can consult [3] for more details. Let C denote the set of complex numbers. We will apply addition and multiplication to C ∪ {∞} using natural rules (c + ∞ = ∞, 1/∞ = 0, etc.). For any z ∈ C and r ≥ 0, let is the open disc of radius r centered at z, and the set S(z, r) is the circle of radius r centered at z.) When z = 0 and r = 1, we may write these sets simply as D and S. Also let If Y ⊆ C ∪ {∞}, then Y denotes the closure of Y . For ease of notation we will write D(z, r) for the is an open set, then a function f : S → C is analytic if for any p ∈ S, f can be expressed as a power series on some open neighborhood of p. If f is analytic and D(z, r) is a closed disc within its domain, then the following equation always holds: Review of quantum weak coin-flipping In this section we review the common formal framework for quantum weak coin-flipping, including the mathematical construction of a point game (which is attributed to A. Kitaev). Since this framework has already seen thorough treatment in [18,1,5], we will mainly provide only definitions and statements of results here. Our terminology and notation are derived most directly from [1]. Weak coin flipping protocols The definition of a weak coin flipping protocol (which we will first sketch, and then state formally) is intended to capture a general situation where two parties with competing interests are trying to fairly flip a coin. There are two possible outcomes: 0 (or "heads") which is the desired outcome for Alice, and 1 (or "tails") which is the desired outcome for Bob. 
There is no trusted third party in the protocol, and thus it consists entirely of communication between Alice and Bob. At the end of the protocol, the parties report bits a and b respectively (representing what they ostensibly believe to be the outcome of the coin flip). The protocol is accomplished by Alice and Bob passing a quantum system (represented by the finitedimensional Hilbert space M) back and forth between them, while keeping private systems (represented by A and B) to themselves. In each odd round i, Alice performs a unitary operator Ui on AM followed by a binary projective measurement {Ei, IAM − Ei} on AM. If the latter measurement fails -that is, if its postmeasurement state is not in Supp Ei -then Alice aborts the protocol and simply reports her favored outcome 0. (This event is understood to mean that Alice has stopped because she suspects cheating.) On odd rounds, Bob does analogous operators with operators Ui, Ei on MB. At the end of the protocol, if neither party has aborted, they each perform a binary projective measurement on their private system and report the result (as a and b, respectively). The definition requires that if Alice and Bob behave honestly in the protocol, then the probability that a = b = 0 is 1/2, and the probability that a = b = 1 is 1/2. Definition 3.1. A weak coin-flipping protocol C consists of the following data: • Finite-dimensional Hilbert spaces A, M, B (Alice's system, the message system, and Bob's system), • A positive integer n (the number of rounds), • An initial pure state ψ0 on AMB of the form (which is referred to as the final state of the protocol) satisfies For the definition above, the states for i ∈ {1, . . . , n − 1}, are referred to as the intermediate states of the protocol. Let us suppose that Bob (whose goal is to force Alice to report a = 1) chooses to behave dishonestly in the protocol. In that case, he can apply arbitrary unitary operations V2, V4, V6, . . . on MB in place of E2U2, E4U4, E6U6, . . .. (We do not account for any measurements performed by Bob in this case, since his own output is irrelevant.) This motivates the following definition. Suppose for the moment that n is even. Then, the cheating probability for Bob (in protocol C) is the maximum of over all unitary operators V2, V4, . . . , Vn on MB. We extend this definition in the obvious way to the case where n is odd. Let P * B denote the cheating probability for Bob, and let P * A denote the cheating probability for Alice (defined analogously). Then, the bias of the weak coin-flipping protocol C is the quantity Valid point games Valid points games are elegant mathematical constructions which, as we will see, are in a near-perfect correspondence with weak coin-flipping protocols. Because they are a lot simpler to define, valid point games provide a convenient method of reduction for questions about weak coin-flipping protocols. We will give the definition of valid point games in this subection, and then explain their relationship to weak coin-flipping protocols in subsection 3.3. We use standard terminology with a few additions. Remark 3.3. When we use the words move or configuration by themselves, we will always mean a two-dimensional move or two-dimensional configuration. Given a move q, it can be helpful to visualize q by graphing its support set Supp q and writing out the values q(x, y) associated to each point (x, y) ∈ Supp q. See Figure 2 for an example of a move whose support is of size 5. 
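A move with finite support, as in the example of Figure 2, can be represented very directly in code as a dictionary from points (x, y) to nonzero real values, in the spirit of Definition 2.1. The sketch below is an implementation choice for illustration only — the helper names and the particular toy move are not notation from the paper; the check that the move sums to zero reflects the fact (used later, in the proof of Theorem 3.11) that moves transfer weight between configurations without changing the total.

```python
# Dictionary representation of a finitely supported two-dimensional move.

def one_norm(q):
    return sum(abs(v) for v in q.values())

def positive_part(q):
    return {p: v for p, v in q.items() if v > 0}

def negative_part(q):
    return {p: -v for p, v in q.items() if v < 0}

def support(q):
    return set(q)

# A toy move with support of size 3 that sums to zero.
q = {(1.0, 0.0): -0.5, (0.0, 1.0): -0.5, (0.75, 0.75): 1.0}

assert abs(sum(q.values())) < 1e-12
print("Supp q:", sorted(support(q)))
print("||q||_1 =", one_norm(q))                      # 2.0
print("q_+ =", positive_part(q), " q_- =", negative_part(q))
```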
A time-dependent point game is, roughly speaking, a finite sequence of two-dimensional configurations z0, . . . , zn. However, for mathematical convenience, we define time-dependent point games in terms of the moves (zi − zi−1) rather than the configurations zi. Recall that for any real-valued function f , we write f + and f − for the positive and negative parts of f , respectively. In the above definition, we refer to Next we will define valid moves. Definition 3.5. A one-dimensional move is valid if the following conditions hold: (We note that condition (18) is actually redundant, since it can be proved from (17).) Equivalently, a move is valid if and only if its inner product with any operator monotone function is nonnegative (see subsection 3.2 of [1]). We note the following useful fact, which is easily proved from Definition 3.5. Proposition 3.6. If is a valid one-dimensional move, and c > 0, then the function x → (cx) is also a valid one-dimensional move. For any two-dimensional move q, the rows of q are the functions of the form x → q(x, y) (for y ∈ R ≥0 ) and the columns of q are the functions of the form y → q(x, y) (for x ∈ R ≥0 ). For later use, we define the concept of a time-independent point game, which is simply a sequence of a two-dimensional moves with no restrictions on nonnegativity (unlike Definition 3.4). In this paper, as in previous papers on weak coin-flipping, it is only useful to consider time-independent point games that involve 2 moves, and so we confine our definition accordingly. We apply the term "valid" to time-independent point games in the obvious way: a time-independent point game (m1, m2) is valid if m1 is horizontally valid and m2 is vertically valid. Note that any timedependent point game (m1, . . . , mn) yields a time-independent point game (m1 + m3 + m5 + · · · , m2 + m4 + m6 + · · ·) (19) and the latter game is valid if (m1, . . . , mn) is valid. Remark 3.9. When we use the term point game by itself, we will always mean a time-dependent point game. The relationship between point games and coin-flipping protocols Now we will state the known results which motivate our study of valid point games. There is a close correspondence between valid point games and weak coin-flipping protocols, and this correspondence allows us to deduce assertions about coin-flipping protocols (of both existence and impossibility) by studying properties of point games. Theorem 3.10. Suppose that C is an n-round weak coin-flipping protocol with cheating probabilities P * A and P * B , and that δ > 0. Then, there exists a valid point game M = (m1, . . . , mn) with initial configuration 1 2 ( 1, 0 + 0, 1 ) and final configuration P * A + δ, P * B + δ . If M is a point game from the configuration 1 2 ( 0, 1 + 1, 0 ) to a single point [α, β], then we will naturally refer to the quantity max{α, β} − 1/2 as the bias of M . The above theorem can be understood as asserting that if an n-round weak coin-flipping protocol exists with bias , then there are n-round valid point games with bias arbitrarily close to . A proof of Theorem 3.10, which is based on semidefinite programming duality, is given in [1]. 1 (We note that Theorem 3.10 also has a converse, although we will not need it here. See Theorem 3 and Theorem 4 in [1].) Next we assert a theorem about the relationship between weak coin-flipping protocols and timeindependent point games. Theorem 3.11. 
Suppose that C is an n-round weak coin-flipping protocol with cheating probabilities α := P * A and β := P * B , and suppose that δ > 0. Then, there exists a valid TIPG R = (r1, r2) such that and Proof. By Theorem 3.10, there is a valid time-dependent point game M = (m1, . . . , mn) whose sum is equal to the right-hand side of equation (20). Note that since the initial configuration 1 2 0, 1 + 1 2 1, 0 of this game has 1-norm equal to one, and each move mi sums to zero, the intermediate configurations all also have 1-norm equal to one, and therefore mi 1 ≤ 2 for all i. Therefore the TIPG (r1, r2) := (m1 + m3 + m5 + · · · , m2 + m4 + m6 + · · ·) satisfies the desired conditions. We make one final note about symmetry in valid TIPGs. The profile of a move We now begin the contributions of this paper. We start by introducing the idea of a profile function of a move. If q is a two-dimensional move, then its profile function, denoted q, is a real-valued function on [1, ∞] × [1, ∞]. Profile functions have geometric features that we can use to our advantage when trying to answer questions about point games. Definition We begin with the definition of the profile function in the one-dimensional case. The one-dimensional profile function can be thought of simply as a construction that bundles together the three conditions that define a valid move (Definition 3.5). If x = 0, then let P0: [1, ∞] → R be defined by The function Px is referred to as the profile of x. Figure 3: A graph of the function P 3 (α). (Mathematica) For any one-dimensional move : R ≥0 → R, the profile of , denoted , is the function from [1, ∞] to R given by The profile function collects some useful information about . One can easily verify from the definition that For illustration, we give a graph of the function P3(α) (that is, the profile of 3 ) in Figure 3. Proof. Given Definition 3.5, this follows from equations (25) and (27) together with the observation that This completes the proof. The reader will observe that there are multiple ways that the one-dimensional profile function could have been defined to achieve the property in Proposition 4.2 (e.g., using different ranges for α and different rational expressions). This particular choice of definition will make some of the arguments in section 7 mathematically easier. One of the reasons that this definition is convenient is that the profile of 1 is simply the constant function 1 → 1. Next we define the profile of a two-dimensional move. q(x, y)Px(α)Py(β). As with the one-dimensional profile, some of the values of a two-dimensional profile q have natural expressions -for example, and q(∞, ∞) = x,y x · y · q(x, y) The basic motivation to study the two-dimensional profile function is the following proposition. Proposition 4.4. If q is a move that is either horizontally valid or vertically valid, then q ≥ 0. Proof. Suppose that q is horizontally valid. Then, the single-variable profiles of the rows of q are all nonnegative. Thus for any α, β ∈ [1, ∞], The vertically valid case is similar. As a consequence of the above proposition, if M = (m1, . . . , mn) is a valid time-dependent point game and z0, z1, . . . , zn−1, zn are its configurations, we must have for any α, β ∈ [1, ∞]. The profile function gives us an infinite family of constraints that must be satisfied by the initial and final configurations of any valid point game. We make note of some additional elementary facts for later use. Proof. We have as desired. 
Both factors in the summand enclosed by brackets above are nonnegative, and thus the result follows. The target profiles From subsection 3.3, we know that if there exists a quantum weak coin-flipping protocol whose bias is less than (with 0 < < 1 2 ), then there must exist a valid TIPG (m1, m2) such that m1 + m2 is equal to We therefore have a crucial interest in the moves v . However, it is mathematically simpler to instead study the family of moves and so (using Proposition 3.6), valid TIPGs for v correspond exactly to valid TIPGs for tτ via the same linear transformation. If R = (r1, r2) is a valid TIPG such that r1 + r2 = tτ , then we must have r1 + r2 = tτ . The profile function tτ can be expressed as follows. A graph of an example (with τ = 1/10) is given in Figure 4. Highly concentrated rational functions This section adapts known complex analysis techniques to prove a result that will be needed in section 7. I am grateful to Alexandre Eremenko for showing me the central method used in this section. We will be concerned with rational functions on an interval [a, b] that are highly concentrated -that is, rational functions that are significantly large at some interior point c ∈ [a, b] and are more tightly bounded outside of a small neighborhood of c. Our interest (eventually) will be in studying deductions that we can make when such a function occurs in the profile of a move. Preliminaries We will first reproduce a standard result about the logarithm of the absolute value of an analytic function. We begin with the following observation: if f is an analytic function on a neighborhood of D which has no zeroes in D, then there is a well-defined analytic function log f on a neighborhood of D such that 2 log f = f . We have (see subsection 2.1): Since the real part of log f (z) is precisely log |f (z)|, this proves the following. For the result that we will prove in subsection 5.2, we will need a similar statement that addresses the case where f is permitted to have zeroes in D and may not be analytic on S. This motivates a somewhat more intricate claim. Let us say that a real-valued function on the unit circle S is a step function if it is locally constant at all but a finite number of points in S. A proof of the following proposition is given in Appendix A.1. Proposition 5.2. Let f be a continuous function on D which is analytic on D. Suppose that b: S → R is a step function such that log |f (z)| ≤ b(z) for any z ∈ S. Then, The complex values of a highly concentrated rational function We will now prove the main result of this section. Note that for any real number δ such that 0 < δ < 1, Suppose that δ, ν ∈ (0, 1) are such that for any z ∈ [−1, 1] (−δ, δ), the inequality |f (z)| ≤ ν is always satisfied. Then, Proof. Our approach is to apply an analytic transformation which maps the unit circle S to the set and to thereby reduce the proof to an application of Proposition 5.2 above. Let G: D → H be defined by Note that this function is a one-to-one mapping. Its inverse is given by Let F : H → H be the continuous function 2 (50) 2 The rational function z → on H has two continuous square roots. We let F be the square root which maps H into itself. Then, The image of the unit circle under H consists of the unit circle, the line segment from −1 to −δ, and the line segment from δ to 1. 
Additionally, if we let θ ∈ [0, π/2] denote the angle of the unit-length complex number then the following hold by direct computation: The points on the unit circle which lie clockwise between e −iθ and e iθ are mapped into the real interval [δ, 1], and the points which lie clockwise between −e −iθ and −e iθ are mapped into the real interval [−1, −δ]. All other points on the unit circle remain on the unit circle under the application of H. A diagram of H is given in Figure 5. We now compute upper bounds on the function z → |f (H(z))|. Let M = max |z|=1 |f (z)|. Then, by our construction, Let T : [0, 2π] → R be defined by Then, applying Proposition 5.2, log |f (0)| = log |f (H(0))| (61) To complete the proof, it suffices to note that the quantity (55), which was used to define θ, is within distance O(δ) from the complex number i. Therefore, θ itself is within distance O(δ) from π/2. Thus we obtain Since log |f (0)| = 0 by assumption, we therefore have which yields the desired result. We give a brief discussion to show how Proposition 5.3 can be useful for our purposes. Suppose that is a one-dimensional move such that (4) = 1 (66) and that δ, ν ∈ (0, 1) are such that the inequality (α) ≤ ν is always satisfied when Then, for any α ∈ [3,5], the rational expression for (α) is given by By Proposition 5.3, there exists a unit-length complex number ζ such that Therefore, Thus we conclude that 1 ≥ v −Ω(1/δ) . Informally, this means that a one-dimensional move can only achieve a profile that is highly concentrated around x = 4 if has exponentially large coefficients. This is similar to reasoning that we will use in section 7. A lemma on random variables with bounded expectation The following elementary lemma addresses a case where two constraints on the expectation of a random variable approximately determine the value of the random variable. This lemma will be used (in the context of some artificially constructed random variables) in order to carry out an intermediate step in the main proof of section 7. Lemma 6.1. There exists a universal function A(u) ∈ O( √ u) such that the following holds. If X is any positive real-valued random variable satisfying with δ > 0, then Proof. We have and X + 1/X − 2 ≥ 0. Therefore, or equivalently, By the quadratic formula, the event on the left side of inequality (78) is equivalent to Thus with probability at least 2/3, |X − 1| is less than The function above is in O( √ δ), and this completes the proof. The 1-norm of a time-independent point game This section will perform most of the remaining technical work necessary to achieve our main result. Throughout this section, suppose that τ ∈ (0, 1), and that g is a horizontally valid move such that g + g = tτ , where tτ denotes the following move (see subsection 4.2): In this section we will show that the inequality g 1 ≥ exp(Ω(τ −1/2 )) must always hold. This is the result that will be used in section 8 to conclude that any weak coin-flipping protocol that achieves bias must involve at least exp(Ω( −1/2 )) communication rounds. For any b ≥ 0, let g b : R ≥0 × R ≥0 → R denote the move defined by (That is, g b agrees with g on the horizontal line y = b, and is zero elsewhere.) Note that since g is horizontally valid, the profile function g b of g b is always a nonnegative function. Basic properties of g By assumption, g is a horizontally valid move and its profile function g satisfies The function tτ is written out explicitly in equations (41) and (22). The following facts are easily deduced. 
It is easily seen from Definition 4.1 that for α ∈ [2, ∞], Thus from Fact 7.5 we have Fact 7.6. If α ∈ [2, ∞], then g(α, 2) ≤ 2 α + τ . The isolating function For any a ∈ (2, ∞), we know (Fact 7.1) that b g b (a, a) = g(a, a) = 1, and that each term g b (a, a) in the above summation is nonnegative. Our next goal is to show that the majority of the contribution to g(a, a) comes from the terms g b (a, a) for which b is close to a. This is formally stated as follows. Proposition 7.7. There is a universal function I(u) ∈ O( √ u) such that for any a ∈ [3, 5], b:|b−a|<I(τ ) We note that the choice of the interval [3,5] in this statement is somewhat arbitrary -the proof method that follows can be used to prove the same statement over any interval [p, q] for which 2 < p < q < ∞ (with a different choice of function I). The reason for making this type of restriction is that it facilitates calculations involving universal big-O error terms. Proof of Proposition 7.7. Let Y be the stochastic map from [3,5] to R>0 defined by Note that by construction, for any a ∈ [3,5] and any b > 0, the function on [1, ∞] defined by is a scalar multiple of the profile function of b (that is, β → P b (β)). Therefore, which implies where the expectations are taken over the random variable Y (a). Expanding these expressions using the definition of the profile function, we have and By Fact 7.6 and Fact 7.2, g(a, ∞) ≤ 2. If we let A(u) be the function from Lemma 6.1, then Therefore, Letting I(u) := 5A(7u) therefore yields for any a ∈ [3,5]. By the definition of the stochastic map Y , this implies the desired result. We refer to I as the isolating function. Figure 6: The move g is supported within the shaded blue region. A lower bound on g 1 We are now ready to prove a lower bound on g 1 in terms of τ . We accomplish this by studying the behavior of the part of the move g that is concentrated near the horizontal line y = 4. Precisely, we will be concerned with the move where I is the isolating function from subsection 7.2. (See Figure 6.) Proposition 7.8. For all a ∈ [3,5] such that |a − 4| ≥ 2I(τ ), we must have Proof. For any such a, the move b:|b−a|<I(τ ) has disjoint support from that of g. The sum of the profile of (108) and the profile of g is therefore upper bounded by the profile of g. By Proposition 7.7 and Fact 7.1, we have g(a, a) ≤ 1 3 , as desired. It is easy to see that the second and third factors in the summand above are each no more than 4. Therefore, |D(4 + ζ)| ≤ 16 g 1 (116) Combining the above bound with inequality (114) above yields the desired result. Main result We can now tie together results of section 3 and section 7 to achieve our main result. Further directions A natural next step would be to compute an explicit function which would serve as a lower bound for n in Theorem 8.2. This is a matter of tracing through the steps of the proof, and should not be difficult. Explicit bounds on n will open the door to searching for quantum weak coin-flipping protocols that are optimized for the number of communication rounds (at a particular bias ). One can also try to lower bound the amount of quantum memory needed to achieve weak coin-flipping for a given bias. As discussed in [1], the quantum memory used by a protocol is related to the size of the support of its point games. Some of the same techniques used in this paper might be applicable to proving lower bounds on quantum memory size. Can a related impossibility result be proved for strong quantum coin-flipping? A. 
Kitaev showed that any strong coin-flipping protocol must have bias at least √2/2 − 1/2 = 0.207... Meanwhile, Chailloux and Kerenidis [8] proved, by building on Mochon's work on quantum weak coin flipping with vanishing bias [18], that strong coin-flipping is possible with bias arbitrarily close to this same value. Lastly, I will note that although we have found that the moves (39) that define quantum coin-flipping are exponentially hard to achieve by valid point games, my experience so far suggests this is a uniquely difficult family of moves. It may be worth exploring whether there are other simple classes of moves that can be more easily achieved by valid point games, and exploring whether such classes could have applications to positive results in two-party cryptography.
2019-09-22T23:09:34.000Z
2019-09-22T00:00:00.000
{ "year": 2019, "sha1": "6250df21e2a0fb4dd7aca9b078b54db7bfe15293", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1909.10103", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "6250df21e2a0fb4dd7aca9b078b54db7bfe15293", "s2fieldsofstudy": [ "Physics", "Computer Science" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
56052462
pes2o/s2orc
v3-fos-license
Multistage A-O Activated Sludge Process for Paraformaldehyde Wastewater Treatment and Microbial Community Structure Analysis In recent years, the effects of formaldehyde on microorganisms and the human body have become a global public health issue. A multistage combination of anaerobic and aerobic processes was adopted to treat paraformaldehyde wastewater. The microbial community structure at different reaction stages was analyzed through high-throughput sequencing. Results showed that the multistage A-O activated sludge process treated polyformaldehyde wastewater effectively. The removal rates of formaldehyde were basically stable at more than 99%, and those of COD were about 89%. Analysis of the microbial diversity indices indicated that the microbial diversity of the reactor was high and the treatment effect was good. Moreover, the microbial communities showed a certain similarity within the same system. Microbial communities in different units also showed typical representative characteristics shaped by working conditions and influent concentrations. Proteobacteria, Firmicutes, and Bacteroidetes were the dominant taxa at the phylum level of community composition. At the family and genus levels, Peptostreptococcaceae was distributed at various stages and was dominant in this system. This bacterium also played an important role in organic matter removal, particularly the decomposition of acidified middle metabolites. In addition, Rhodobacteraceae and Rhodocyclaceae were the formaldehyde-degrading bacteria found in the reactor. Introduction Formaldehyde is a basic chemical raw material widely used in plastics, chemicals, leather, resins, and other production processes. Formaldehyde is soluble in water; discharges of its aqueous solution can seriously pollute water bodies and even kill aquatic organisms. Meanwhile, formaldehyde irritates the human body, damages the immune system, and can cause cancer. Therefore, China has launched a series of regulatory controls on formaldehyde emissions in wastewater. For example, the secondary emission limit for formaldehyde must not exceed 2 mg/L in the integrated wastewater discharge standard, and the formaldehyde content of centralized surface-water sources for domestic and drinking water must not exceed 0.9 mg/L in the surface water environmental quality standard [1,2]. The many sources and large quantity of formaldehyde wastewater pose certain difficulties for wastewater treatment. For economic and practical reasons, however, the use of formaldehyde cannot simply be banned, so formaldehyde wastewater from industrial production must be treated properly. Physical, chemical, and biological methods are the main approaches to wastewater treatment. Physical methods include steam blow-off and adsorption. Blow-off can be used as a pretreatment process. Adsorption treats formaldehyde wastewater satisfactorily but is limited by adsorbent recycling. Chemical methods, including advanced oxidation and condensation/precipitation, are expensive. Biological methods are characterized by low cost, simple operation, and low pollution. Most microbes can use formaldehyde as a carbon source and degrade the wastewater [3]. Hidalgo [4] used Rhodococcus erythropolis UPV-1 in formaldehyde wastewater treatment; both continuous and intermittent dosing can form a stable colony, with formaldehyde and chemical oxygen demand (COD) removal rates of 90% and 56%, respectively. Wang et al.
[5] used activated sludge process in the treatment of formaldehyde wastewater.The results showed that the initial concentration of formaldehyde is 400 mg/L and the sludge concentration is 4 g/L after 10 h; in addition, the removal rates of formaldehyde and COD reach more than 99% and 83%, respectively.Methylobacillus flagellatus [6], Pseudomonas putida [7], Ralstonia eutropha [8], and Candida maltose [9] have been reported in formaldehyde degradation.Nevertheless, the degradation effect of these strains shows differences.Most of the strains can only degrade low formaldehyde concentration.Suitable degradation strains should still be determined for highconcentration formaldehyde produced in industrial process.The present study adopted multistage A-O activated sludge process in the treatment of polyformaldehyde wastewater; microbes in the sludge can use formaldehyde as carbon source and degrade wastewater.The process was complex, and the hydraulic retention time was short.This in-depth process can be reproducible in view of high concentration and complex wastewater.We used high-throughput sequencing technology to analyze the change in microbial community, ecological information of colony, and degradation function relationship in different reaction steps in depth.We hoped that our work can provide certain technical and theoretical support for the actual project.Therefore, we can treat formaldehyde wastewater better and reduce its harm to environment and human body. Overview of the Reactor. This study was based on the polyformaldehyde wastewater treatment in the production process in a chemical plant.The experiment process is as follows: raw water → iron and carbon microelectrolysis → one-step anaerobic (A1) → two-step anaerobic (A2) → threestep anaerobic (A3) → one-step aerobic (O1) → hydrolytic acidification → two-step aerobic (O2).The process flow diagram and the flooding water quality and sludge properties in the field are shown in Figure 1 and Table 1. Sample Collection. The collection date was June 8, 2015, and the sludge samples were obtained from A1, A2, A3, O1, hydrolysis acidification, and O2.We collected the samples in the reactor when it was starting.The sludge sample number information is shown in Table 2. Chemical Analysis. Conventional water quality indicators, such as pH, temperature, and COD, were analyzed using the national standard method [10].The dissolved oxygen (DO) and formaldehyde concentration were analyzed using the DO instrument.Table 3 shows the testing index and method of activated sludge. DNA Extraction and High-Throughput Sequencing.DNA was extracted via phenol-chloroform extraction and was purified by purification kit (TIANquick Midi Purification Kit, Tiangen).DNA samples were detected of fragment length through agarose gel electrophoresis detection after purification.The concentration and purity were determined by Nanodrop.PCR (Polymerase Chain Reaction) amplification was carried out, and the amplification products were used for DNA sequencing [11].The library of sequencing DNA was constructed by TruSeq kit (Illumina, USA) and was determined by Illumina Miseq2500 high-throughput sequencing machine for sequencing. Results and Discussion 3.1.HCHO Removal Performance of the Reactor.Figure 2 shows the formaldehyde removal from the reactor. 
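The removal rates plotted there follow from the usual definition, (influent − effluent)/influent. A short sketch using the influent/effluent ranges reported in this section is given below; pairing the extremes of the two ranges is illustrative only, since the actual rates depend on which samples are paired.

```python
# Removal-rate arithmetic for the reported concentration ranges.
def removal(influent, effluent):
    return 100.0 * (influent - effluent) / influent

# Formaldehyde: influent 635-1164 mg/L, effluent ~5 mg/L
print(f"HCHO removal: {removal(635, 5):.1f}% to {removal(1164, 5):.1f}%")

# COD: influent 4000-5800 mg/L, effluent 510-670 mg/L
print(f"COD removal: {removal(4000, 670):.1f}% to {removal(5800, 510):.1f}%")
```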
Figure 2 shows that influent formaldehyde concentration is 635-1164 mg/L, and the effluent formaldehyde concentration in secondary sedimentation tank is about 5 mg/L.The fluctuation of influent formaldehyde concentration is large because of the complex working conditions in the field.However, the removal rate is basically stable at more than 99%, with the highest at 99.5%.This rate shows that the performance of the technology is ideal, and such technology can respond to the change in external conditions. COD Removal Performance of the Reactor.Figure 3 shows the COD removal from the reactor. Figure 3 shows that influent COD concentration is 4000-5800 mg/L, and the effluent COD concentration in secondary sedimentation tank is 510-670 mg/L.The removal rate of COD is about 89%, and the highest can reach 90.36%.The different removal rates of formaldehyde and COD show that formaldehyde degradation and its degradation products are not completely synchronous [12,13].Longer time is needed before it becomes fully biodegradable. The system has a total of six units of series reactor, which includes three stages of anaerobic reaction, two stages of aerobic reaction, and a hydrolysis acidification phase.This process is relatively complicated, which can be carried out in in-depth degradation processing in the view of high COD concentration and complex wastewater.Table 4 presents the quality index of polyformaldehyde wastewater. First, high-concentration wastewater underwent three continuous anaerobic reaction systems with long anaerobic reaction time.Both the flora in graded response and the differentiation of ecological level are abundant.At this stage, most of the carbon sources that can be used are degraded by anaerobic microbes.Thus, most of the reactor formaldehyde and COD are removed during the anaerobic phase, and the removal rate reaches 62.6% and 73.3%, respectively.The biochemical substance content in the effluent water at anaerobic phase is relatively small.Most of these substances are materials that are difficult to use via anaerobic microbes.The organic matter is degraded and mineralized in depth in aerobic phase.Macromolecular organic matter and intermediate metabolites are decomposed sequentially in the hydrolysis acidification phase; these materials are then translated thoroughly into harmless substance and discharged after secondary aerobic phase. Microbial Numbers and Diversity. The results of operational taxonomic unit (OTU), abundance (Chao 1), and diversity (Shannon) index of microbial community were obtained by high-through sequencing.The microbial community diversity in the sludge sample of the reactor is shown in Table 5.The Shannon index of the sediment samples changes in the range of 5.44-6.74.Shannon index is low at the three-step anaerobic, hydrolysis, and acidification stages.Such low value may be attributed to that as the reaction continues; the bacteria, which are not adapted to the environment, gradually lose activity, age, and die because of the continuous change of DO and nutrition matrix.When the hydrolysis acidification phase is reached, the sludge activity is reduced because the life cycle of anaerobic microbes is longer than those of aerobic ones, and the sludge accumulated in the bottom causes inadequate contact with wastewater and poor microbial diversity [14]. 
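As a concrete reminder of how the Shannon index discussed above is computed from OTU abundances — H = −Σ pᵢ log pᵢ over relative abundances pᵢ — here is a short sketch with hypothetical count vectors. The logarithm base is a convention and is not specified in the text; natural log is assumed below.

```python
import math

def shannon(counts, base=math.e):
    """Shannon diversity H = -sum(p_i * log p_i) over OTU relative abundances."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p, base) for p in ps)

# Hypothetical OTU count vectors: an even community is more diverse than a
# skewed one dominated by a few taxa (cf. the 5.44-6.74 range in Table 5).
even = [10] * 1000               # 1000 OTUs, evenly abundant
skewed = [500, 300] + [1] * 200  # a few dominant taxa plus many rare ones

print(f"H(even)   = {shannon(even):.2f}")    # ln(1000) ~ 6.91
print(f"H(skewed) = {shannon(skewed):.2f}")
```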
Chao 1 and OTU index had a similar change rule with Shannon index.These three indicators describe the microbial diversity and its relationship with the effect of wastewater treatment.The results also reflect that the microbial diversity is high, and the treatment effect is relatively good in the reaction pool [15]. Phylum Level of Community Composition in Sludge Samples.The microbial classification in sludge samples according to phylum is shown in Figure 4. Comparing with sequence in the library construction, we can determine the kinds of microbial communities.Proteobacteria, Firmicutes, Bacteroidetes, Actinobacteria, Chloroflexi, Planctomycetes, and Thermotogae are the dominant bacterial communities in the samples.Proteobacteria (30.1%-67.2%)are dominant in all of the samples [16].Proteobacteria are one of the largest bacterial categories, which belong to gram-negative bacteria.Their outer membrane is mainly composed of lipopolysaccharide.The metabolic type is different among different members, and most of them is facultative or obligate anaerobic.Proteobacteria are the main groups of bacteria in the wastewater treatment system and plays an important role in the removal of organic matter from wastewater [17,18].In wastewater treatment process, the amount of Proteobacteria decreases from 67.2% to 30.1% of hydrolysis acidification phase and subsequently increases to 52.1%.This result might be because Firmicutes and Bacteroidetes begin to multiply and occupy some ecological niches of Proteobacteria [19] through anaerobic fermentation. Firmicutes is gram-positive bacteria.The peptidoglycan content accounts for 50%-80% of the total quality of cell walls.Firmicutes is absolutely dominant in hydrolysis acidification phase [20,21].Its proportion gradually increases from 9.8% to 37.1% of hydrolysis acidification phase in the reactor. Family and Genus Levels of Community This process system is further complicated.Different processing units have different functions in the system.The anaerobic phase is relatively different between aerobic phases.Microbial communities in different units show typical representative characteristic affected with working conditions and influent concentration. Rhodobacteraceae is the dominant microbes in the onestep anaerobic process.These bacteria can accumulate phosphorus in denitrification [24][25][26] degradation [27].Approximately 60% of the formaldehyde is decomposed during the one-step anaerobic process (Figure 4).Both the numbers of Rhodobacteraceae and Rhodocyclaceae obviously decline with formaldehyde degradation.Therefore, Rhodobacteraceae and Rhodocyclaceae should be the main formaldehyde degradation bacteria in the reactor.Most of formaldehyde and COD have been degraded in the one-step anaerobic process, and thus two-and threestep anaerobic processes are the main procedures of in-depth anaerobic treatment.The microbial community structures of these two processes are relatively similar, and Peptostreptococcaceae and Phycisphaerales are the dominant bacteria in the system.Phycisphaerales belongs to Planctomycetes and is the anaerobic ammonia oxidation bacterium.These bacteria can create nitrite-oxidizing ammonium and produce nitrogen under anoxic conditions, which contribute to denitrification [28,29]. After entering the aerobic reaction stage, the diversity of microbial community structure increases in the reactor because the bacterial aerobic metabolism grows quickly, including the increase of nitrifying bacteria, which has high DO demand. 
Peptostreptococcaceae is the dominant microbes during hydrolysis acidification.This microbe belongs to Firmicutes and typically uses little or no sugar.It can decompose protein to produce acetic acid [19].Peptostreptococcaceae reaches about 27.2% in this process, which illustrates that it has already reached the vigorous stage of acid production.Peptostreptococcaceae is also distributed at all stages in the system and has an obvious advantage in the middle of four stages.This microbe is a facultative aerobic bacterium and plays an important role in the removal of organic matter, specifically the acidification decomposition of middle metabolites. In addition, Azospirillum is distributed at all stages in the system; these nitrogen-fixing microbes belong to Proteobacteria and can fix nitrogen with cereals and Gramineae [30].Azospirillum is also part of the denitrifying bacteria groups and can translate nitrate into N 2 O or N 2 under enzyme catalysis.This microbe also has a role in the nitrogen cycle [31]. Candidatus Xiphinematobacter grows with the reaction and belongs to Thermotogae.This microbe is a kind of nitrifying bacteria, and little information is available regarding it. RDA (Redundancy Analysis ) of Dominant Bacterium in Sludge Samples. Figure 6 shows the RDA of dominant bacterium and major environmental factor in sludge samples.Rhodobacteraceae and Rhodocyclaceae have a high correlation with formaldehyde removal rate, which is consistent with the conclusions mentioned above. Piscirickettsiaceae and Alphaproteobacteria BD7-3 also have a high correlation with COD removal rate.Piscirickettsiaceae belongs to -Proteobacteria and uses organic matter as the main carbon source.It also plays an important role in COD removal process [28].Alphaproteobacteria BD7-3 belongs to -Proteobacteria and has not been named.Most of -Proteobacteria is saprophytic heterotrophic bacteria, and their main carbon source is the organic matter.Therefore, these bacteria are important in the removal process of COD [30]. Conclusions (1) Multistage A-O-activated sludge process has good treatment effect on polyformaldehyde wastewater.Under the initial formaldehyde concentration of 635-1164 mg/L and COD concentration of 4000-5800 mg/L, the removal rates of formaldehyde are basically stable at more than 99% and those of COD are about 89%.This method can remove pollutant effectively.We hoped that our work can provide certain technical support for the actual project. (2) The ecology of activated sludge in different reaction stages through high-throughput sequencing was analyzed.The analysis results of microbial diversity indices (Shannon, Chao 1, and OTU) indicated that the microbial diversity of the reactor is high, and the treatment effect is good.Microbial community has certain similarity in the same system.Microbial communities in different units show typical representative characteristic affected with working conditions and influent concentration. 
(3) The microbial community structure of the sludge samples was also analyzed. Proteobacteria, Firmicutes, and Bacteroidetes are the dominant taxa at the phylum level of community composition. Peptostreptococcaceae, Phycisphaerales, and Rhodobacteraceae are dominant at the family and genus levels of community composition. Peptostreptococcaceae is distributed at various stages and is dominant in this system. This bacterium also plays an important role in organic matter removal, particularly the acidification decomposition of middle metabolites. Rhodobacteraceae and Rhodocyclaceae are the formaldehyde-degrading bacteria in the reactor. Figure 1: Process flow diagram in the field. Figure 4: Classification of microbes in sludge samples according to phylum. Figure 5: Microbial classification in the sludge samples according to family and genus. Table 1: Flooding water quality and sludge properties in the field. Table 2: Sludge sample number information in the reactor. Table 3: Testing index and method of activated sludge. Table 4: Quality index of polyformaldehyde wastewater. Table 5: Microbial community diversity in the sludge sample of the reactor.
2018-12-10T02:17:32.780Z
2016-11-13T00:00:00.000
{ "year": 2016, "sha1": "0f4085d5c322bf52d1f96f720c2f08f591f6b20e", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/jchem/2016/2746715.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "0f4085d5c322bf52d1f96f720c2f08f591f6b20e", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Chemistry" ] }
220633364
pes2o/s2orc
v3-fos-license
Spectrally-resolved point-spread-function engineering using a complex medium Propagation of an ultrashort pulse of light through strongly scattering media generates an intricate spatio-spectral speckle that can be described by means of the multi-spectral transmission matrix (MSTM). In conjunction with a spatial light modulator, the MSTM enables the manipulation of the pulse leaving the medium; in particular, focusing it at any desired spatial position and/or time. Here, we demonstrate how to engineer the point-spread-function of the focused beam both spatially and spectrally from the measured MSTM. It consists in numerically filtering the spatial content at each wavelength of the matrix prior to focusing. We experimentally report on the versatility of the technique through several examples, in particular as an alternative to simultaneous spatial and temporal focusing, with potential applications in multiphoton microscopy. Temporal control of ultrashort pulses is a cornerstone of ultrafast optics. Pulse shaping refers to technologies that enable programmable reshaping of ultrafast optical waveforms, with control of phase, amplitude, and polarization [1]. It is now widely used in laser control over molecular and material responses [2], but also for wavelength-selective switches [3] or in multiphoton microscopy to adjust the image contrast or resolution [4]. In general, the method relies on spatially dispersing a pulse with a diffraction grating in order to manipulate its frequency components separately. Recent advances in very different research fields have shown that the ability to manipulate the full spatio-spectral or spatio-temporal structure of laser pulses, i.e. to introduce spatio-spectral couplings, can open new possibilities to control light propagation [5] and light-matter interaction [6][7][8][9]. Another important example of the usefulness of spatio-temporal couplings is given by simultaneous space-time focusing [10,11], now widely used in multiphoton excitation for neuroscience [12]. Although some extensions of temporal shapers to spatio-temporal control have been demonstrated [13][14][15], such schemes are optically very complex, and their use has thus not become widespread. A general-purpose, versatile method for spatio-spectral pulse control is still elusive. Scattering of broadband light in disordered materials randomly mixes the spatial and spectral modes of the incident pulse. When a coherent ultrashort pulse travels through a multiply scattering medium, its optical wavefront is spatially distorted and forms a speckle pattern [16]. When the bandwidth of the laser ∆λ_laser is broader than the spectral correlation bandwidth of the medium δλ_m, the speckle depends on the wavelength λ. Therefore, the scattering medium acts as a dispersive optical element for ultrashort pulses of light. In this regime, the optical transformation of the field induced by the medium is very complex but still remains linear and deterministic, hence controllable. Owing to the availability of spatial light modulators (SLMs), several techniques based on wavefront shaping were developed to experimentally characterize this process. A recurrent application is to find the incident wavefront that counterbalances the effects of scattering and thus re-compresses the pulse to its initial duration and focuses it on a diffraction-limited spot.
For instance, this can be achieved by iteratively optimizing the incident wavefront [17,18], but also by using digital phase conjugation [19], spectral pulse shaping [20], or time-gating techniques [21]. Another, more global approach to describe and manipulate the outgoing broadband light consists of measuring the multi-spectral transmission matrix (MSTM) [22]. The MSTM is a set of N_λ = Δλ_laser/δλ_m monochromatic transmission matrices (TMs); each TM linearly relates the input field to the output field of the medium [23] for a given spectral component of the pulse. The full set of matrices provides both spatial and spectral/temporal control of the transmitted pulse; in particular, enhancing a single spectral component of the output pulse or focusing it at a given time can be performed [24][25][26]. The key point here is that these techniques manipulate both spatial and spectral degrees of freedom of the pulse using only a single SLM. This is possible thanks to the spatio-spectral coupling resulting from the propagation through the medium. This is what we will exploit here, now to implement 3D spatio-spectral control, relying on a single SLM.

Although pulse control in complex media has been studied over the last years, to our knowledge the spatio-temporal degrees of freedom of a scattering medium have never been used to spectrally engineer the point-spread function (PSF) of an ultrashort pulse. Here, we exploit the MSTM in conjunction with a single SLM to perform arbitrary spatio-spectral PSF engineering. It consists in (i) measuring the MSTM to characterize light scattering induced by the medium, (ii) numerically filtering a virtual pupil function with a spectrally-dependent mask, and (iii) focusing. Importantly, nearly any arbitrary mask (phase and/or amplitude) can be applied onto the pupil function. This versatility is experimentally shown through the generation of two different spatio-spectral PSFs that both aim at decoupling axial confinement from lateral extent, with multiphoton microscopy applications envisioned. First, we revisit traditional temporal focusing (TF) and show that our approach is not restricted to dispersing the pulse along only one spatial dimension, as offered by diffraction gratings. Secondly, we report on another TF-like PSF that benefits from the high transverse resolution of a Bessel beam but with a better axial confinement. The corresponding spatio-temporal profiles are characterized with a 2-photon fluorescence process.

Focusing light spatially implies that all incident contributions, or k-vectors, arrive at the focal point with the same phase. Stated differently, the Fourier transform of the field in the focal plane (henceforth referred to as the pupil function) has a flat phase. Any modification of this pupil function (in phase and/or amplitude) impacts the PSF of the optical system. Without a scattering medium, the pupil function can be engineered by displaying a phase mask onto an SLM placed in a plane conjugate to the back aperture of the illumination objective. In the presence of a scattering medium, this operation is pointless since all the information would be scrambled before reaching the focal plane. In this regime, we need to combine wavefront shaping and PSF engineering. A schematic representation of the experimental optical system is shown in Fig. 1. It makes it possible to measure the MSTM, generate the multi-spectral PSF, and characterize it in three dimensions.
A Ti:Sapphire laser source (MaiTai, Spectra Physics, 120 fs pulse length, 800 nm center wavelength, 12.7 nm spectral bandwidth) is used either as a tunable monochromatic source, for characterization, or as a pulsed source with its full bandwidth. Here, the MSTM is measured by sweeping the wavelength as in Ref. [22], but the measurement can also be performed using hyper-spectral imaging [26]. The beam is split into two distinct paths: a reference path and a probe path. In the probe path, a phase-only SLM (512 × 512 pixels, Meadowlark), subdivided into 64 × 64 macropixels, is conjugated with the back focal plane of a microscope objective (Olympus Plan N, 20×, NA 0.40), which illuminates a scattering medium made of ZnO nanoparticles (thickness ∼ 100 µm). The transmitted speckle is collected with another microscope objective (Zeiss EC Plan-Neofluar, 40×, NA 1.3). The probe beam leaving the medium is recombined with the reference on a beam splitter (BS) and the hologram is recorded on a CCD camera (CAM1, Allied Vision, Manta G-046). The MSTM elements are obtained by phase-step interferometry of the probe with the reference arm. As the reference and signal arms do not share a common path, phase drifts and fluctuations between them due to temperature gradients and airflow have to be considered. To account for this, the phase drift was monitored and corrected for as shown previously [27]. Once measured, the MSTM was stable for a few hours after acquisition. This operator, in conjunction with the SLM, enabled us to focus light both spatially and spectrally. The corresponding spatio-spectral PSF was characterized by looking at the scattered pulse on CAM1 as in Fig. 2. To infer the temporal properties of the PSF (Figs. 4 and 5), we used a 2-photon fluorescence process. For this purpose, a solution of fluorescein is placed right behind the scattering medium. The fluorescent sample, together with the collection objective, is mounted on a translation stage in order to characterize the beam profile along the z-axis. The 2-photon fluorescence is recorded with an EMCCD camera (CAM2, Andor iXon 3), placed after a dichroic mirror that separates the probe beam from the 2-photon fluorescence, together with additional filters to remove the strong SHG signal emitted by the ZnO medium (longpass, FELH0450, Thorlabs) and to block the probe beam (shortpass, 2× FESH0650, Thorlabs).

To engineer the multi-spectral PSF of transmitted light, we build a new operator from the experimentally measured MSTM by numerically filtering the pupil (see Fig. 2a), as previously shown with monochromatic light [28]. Briefly, we compute the spatial Fourier transform of the experimentally measured output fields, for all the N_SLM input modes and N_λ spectral components. Then the pupil is engineered with a spectrally dependent mask M(k_x, k_y, λ), allowing for multi-spectral PSF engineering through thick scattering media. We note that both the phase and the amplitude of the mask are tunable. Finally, an inverse Fourier transform of the filtered pupil function is performed to return to real space. The latter corresponds physically to the focal plane where the MSTM was measured. This last operation generates a filtered multi-spectral TM, denoted MSTM_filt in the following. In Fig. 2 we demonstrate the capability of MSTM_filt through a proof-of-concept experiment. The scattering medium used here provides N_λ = 8 spectral degrees of freedom, and thus a set of 8 monochromatic TMs is measured to describe the pulse propagation through the medium.
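The pupil-filtering step just described is simple to express numerically. The following NumPy sketch illustrates it; the function name, array shapes, and mask layout are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def filter_mstm(mstm, masks):
    """Numerically filter a multi-spectral transmission matrix (MSTM).

    mstm  : complex array (N_lambda, Ny, Nx, N_slm); for each wavelength
            and SLM input mode, the measured output field in the focal plane.
    masks : complex array (N_lambda, Ny, Nx); spectrally dependent pupil
            masks M(kx, ky, lambda), phase and/or amplitude.

    Returns MSTM_filt with the same shape as mstm.
    """
    # Fourier transform each measured output field to the pupil (k) plane.
    pupil = np.fft.fftshift(np.fft.fft2(mstm, axes=(1, 2)), axes=(1, 2))
    # Apply the spectrally dependent mask in k-space (broadcast over modes).
    pupil = pupil * masks[:, :, :, np.newaxis]
    # Inverse transform back to the focal (real-space) plane.
    return np.fft.ifft2(np.fft.ifftshift(pupil, axes=(1, 2)), axes=(1, 2))
```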
For the sake of demonstration, we filtered only two TMs out of the N_λ available, with a flat (λ = 796 nm) and a spiral (λ = 804 nm) phase mask, respectively. Phase conjugation of the two TMs at λ = 796 nm and λ = 804 nm enables arbitrary spatial focusing of the two wavelengths, either on the same spatial position or on two separate positions. Here the two wavelengths are spatially focused on the same position. As detailed in [24], the two input fields calculated from phase conjugation are algebraically summed and the resulting phase is displayed onto the SLM (a short sketch of this step is given below). As shown in Fig. 2c, such shaping focuses the transmitted pulse on the camera. A scan in wavelength exhibits a diffraction-limited focus at λ = 796 nm and a donut-like shape at λ = 804 nm, in agreement with the masks applied in the k-spaces of the corresponding matrices. For the other spectral components, no specific focus is obtained: at the corresponding wavelengths the SLM hologram generates a speckle pattern. As an example, we report the output intensity at λ = 800 nm. Similar results were also obtained using a metasurface, but with a much lower spectral resolution [29]. One advantage of our approach is its high degree of reconfigurability; with the same medium, and by simply changing the incident wavefront with the SLM, a new spatio-spectral PSF can be generated.

In microscopy, a widespread application of pulse shaping is simultaneous spatial and temporal focusing (TF) [30,31]. It consists in shaping the beam in such a way that its pulse length varies along the propagation direction, with the shortest length occurring only close to the focal plane of the illumination objective (Fig. 3a). It significantly improves the axial sectioning of the 2-photon fluorescence excitation, with various applications in neuroscience [32,33], since the 2-photon signal is inversely proportional to the pulse length. The key optical part in temporal focusing is its dispersive element, which generates the spatio-spectral coupling. Generally, a diffraction grating is used as the dispersive element [12], but a digital micromirror device has also been used [34]. In both cases, the spatial dispersion is done along a single axis (see Fig. 3a), with an objective lens focusing all spectral components at its focal plane, where all wavelengths overlap and the shortest pulse length is reached. As such, the technique only applies in free space, where the spatio-spectral coupling induced by the grating is known. In our method, presented in Fig. 3b, the diffraction grating is replaced by a thick scattering medium, which naturally provides the spatio-spectral coupling. Its control is then achieved with an SLM and knowledge of the MSTM. Once the matrix is measured, the pupil function is numerically filtered with a spectrally-dependent mask M(k_x, k_y, λ). The mask is obtained by dividing the pupil (corresponding to the entire accessible k-space) into N_λ sub-pupils, each also defining a spectral component of the pulse. For instance, the mask M may have a helical shape, as represented in Fig. 3b. Importantly, this scheme exploits the entire two-dimensional k-space, as opposed to the single dispersion axis offered by a grating.

To show the versatility of MSTM_filt, we report on the experimental realization of two different temporally focused beams which both aim at decoupling axial and lateral resolution. In a first example, we implement the MSTM-TF introduced in Fig. 3b, which improves the axial confinement. In a second example, we reduce the axial extension of a Bessel-like beam with a different pupil mask.
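The two-color focusing step mentioned above can be sketched as follows: phase-conjugate the matrix rows mapping the SLM modes to the target output pixel, sum the resulting input fields, and keep only the phase for the phase-only SLM. Names and shapes here are assumptions for illustration.

```python
import numpy as np

def two_color_focus_phase(mstm_filt, wl_indices, target_idx):
    """Compute the SLM phase that focuses selected wavelengths on one
    output position, following the phase-conjugation recipe.

    mstm_filt  : complex array (N_lambda, N_out, N_slm); filtered MSTM
                 with output fields flattened to N_out pixels.
    wl_indices : indices of the wavelengths to control, e.g. [0, 7].
    target_idx : index of the desired output focus position.
    """
    # Phase conjugation: conjugate the row mapping SLM modes to the
    # target output pixel, one input field per controlled wavelength.
    fields = [np.conj(mstm_filt[i, target_idx, :]) for i in wl_indices]
    # Algebraic sum of the input fields, keeping only the phase
    # (the SLM is phase-only).
    return np.angle(np.sum(fields, axis=0))
```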
As a first experimental demonstration, we generate TF using thick scattering media. As explained in Fig. 3b, our approach consists in filling the pupil, or entire k-space, with N_λ sub-pupils. The idea is to spectrally combine the spatial properties of two PSFs, coined High NA and Low NA, into a third one, MSTM-TF, which aims at achieving temporal focusing. More specifically, these three PSFs are obtained from the following pupil functions (a construction of such masks is sketched below):

High NA: the pupil mask fills the whole aperture (see top panel of Fig. 4a). This mask is applied to all the TMs, regardless of their wavelength. It produces a beam with waist w_0 and Rayleigh length z_R; the two are related through the numerical aperture, NA ∝ w_0/z_R.

Low NA: the same filtering is done with a sub-pupil three times smaller than the pupil (see middle panel of Fig. 4a). Compared to the previous situation, this mask produces a beam with larger waist and Rayleigh length.

MSTM-TF: sub-pupils (of the same size as the Low NA one) are centered at different positions inside the pupil for different wavelengths. The sub-pupils are positioned in such a way that they cover the pupil (see bottom panel of Fig. 4a). This mask produces a beam with a waist comparable to Low NA but a reduced Rayleigh length.

The three cases are experimentally compared in Fig. 4. The scattering medium is characterized by δλ_m ≈ 1.8 nm. Consequently, for our probe spectrum we have N_λ = 6, hence a set of 6 monochromatic TMs was measured to control the full propagation of the pulse spectrum through the medium. Comparison of the lateral profiles (projection onto the z-axis) and axial profiles (projection onto the y-axis) in each case highlights the desired final effect. Quantitatively, we estimate lateral and axial sizes with the 2nd-order cumulant of the corresponding distribution profiles. With the same MSTM we repeat the procedure and focus light on three other output spatial positions. Results are plotted in Fig. 4f, demonstrating the interest of such a pupil mask function: whereas the lateral size of the MSTM-TF filtering is very similar to Low NA, its axial confinement is improved. The 2-photon signal through thick scattering media is very weak, leading to long exposure times and limiting the number of positions one can measure within the medium stability time. The full three-dimensional characterization of a single PSF (which corresponds to a single point in the graph in Fig. 4f) took approximately 40 min.

In a second experiment, we combine two other PSFs, coined High NA and Bessel, and present another example, Bessel-TF, for decoupling the lateral and axial profiles of the beam. These three PSFs are obtained from the following pupil functions:

High NA: a large pupil mask is used to filter all the TMs (see top panel of Fig. 5a).

Bessel: the mask corresponds to an annulus with controllable inner and outer radius (see middle panel of Fig. 5a). This mask generates a Bessel-like beam whose central lobe is narrower than the High NA beam, but at the cost of the axial confinement.

Bessel-TF: one TM is filtered with a High NA mask and the other one with a Bessel mask (see bottom panel of Fig. 5a). Such a pupil creates a tight focal spot with improved confinement compared with the Bessel case.

The three cases are experimentally compared in Fig. 5. Their lateral and longitudinal extensions are retrieved with the same post-processing method. As one can notice, the PSF Bessel (middle panel) is very noisy.
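For concreteness, here is one way to build the three families of pupil masks described above as binary amplitude masks on a square k-space grid. The grid size and the placement of the sub-pupils on a circle are illustrative assumptions; the paper does not specify how the sub-pupils tile the pupil.

```python
import numpy as np

def disk(n, center, radius):
    """Binary disk mask on an n x n k-space grid."""
    ky, kx = np.mgrid[:n, :n]
    return ((kx - center[0])**2 + (ky - center[1])**2 <= radius**2).astype(float)

def tf_masks(n, n_lambda, sub_radius):
    """One sub-pupil per wavelength, with centres tiled so that the
    sub-pupils jointly cover the pupil (cf. the MSTM-TF scheme)."""
    # Illustrative choice: centres on a circle of radius (pupil - sub-pupil).
    r0 = n / 2 - sub_radius
    angles = 2 * np.pi * np.arange(n_lambda) / n_lambda
    centres = [(n / 2 + r0 * np.cos(a), n / 2 + r0 * np.sin(a)) for a in angles]
    return np.stack([disk(n, c, sub_radius) for c in centres])

# High NA: full aperture at every wavelength; Low NA: centred sub-pupil
# three times smaller; MSTM-TF: tiled sub-pupils, one per wavelength.
n, n_lambda = 64, 6
high_na = np.stack([disk(n, (n / 2, n / 2), n / 2)] * n_lambda)
low_na  = np.stack([disk(n, (n / 2, n / 2), n / 6)] * n_lambda)
mstm_tf = tf_masks(n, n_lambda, n / 6)
```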
The associated numerical filtering only retains the high spatial frequencies: all k > 0.67 k_1, where k_1 is the radius of the pupil. Most of the light (at low spatial frequencies) is rejected and the resulting PSF has a very low signal-to-background ratio (this has been further studied in the Supplementary Material of [28]). In theory, such a PSF is diffraction-free, meaning that the longitudinal extent is very large. Due to the background speckle, this expected property is strongly degraded. Another downside is that we cannot accurately estimate its lateral and axial sizes (blue crosses in Fig. 5f). However, while a somewhat weaker effect is found for this second example of temporally-focused light shaping, it appears clearly that the spectral combination of the High NA and Bessel PSFs provides independently controllable lateral and longitudinal properties. Since the generation of these PSFs requires only two independent spectral components, we opted for a thinner medium than the one used in the previous experiment, with δλ_m ≈ 7 nm, corresponding to N_λ = 2 and controlled with a set of 2 monochromatic TMs. For the PSF Bessel-TF, the two wavelengths are not equally filtered in terms of energy: only the highest spatial frequencies are used at one λ, whereas the full pupil is taken at the other one. To compensate for this, we weighted the sum of the two input fields at each wavelength to ensure equal intensity at the focus, which is an additional degree of control allowed by our technique (the annular mask and the weighting are sketched below). Since the medium is thinner, more light is transmitted, which allows speeding up the acquisition.

Several conditions should be met to successfully and efficiently engineer spatio-temporal PSFs through scattering media. A first important point is the spatio-spectral coupling induced by the scattering material. Our ability to tune the propagation of the beam strongly relies on the number of degrees of freedom, both spatial and spectral, in the system. On the one hand, the number of spatial modes controlled with the SLM translates directly into the quality of the generated beam in the spatial domain. Here we measured the MSTM for a basis of N_SLM = 4096 orthogonal modes (Hadamard basis), which corresponds to an acquisition time of ∼ 2 min for a single TM at a given λ. To retrieve the full MSTM, this operation must be repeated N_λ times. This is not a major issue here, since the medium proved to be stable for a few hours. Note that the measurement can be sped up considerably using hyperspectral techniques [26] or swept-wavelength interferometry [35]. On the other hand, the spectral bandwidth of the medium (and consequently the number of spectral channels N_λ) is a property of the scattering medium itself. N_λ scales as (Δλ_laser L²)/(λ² l_t) and can be adjusted through the thickness L of the medium or the transport mean free path l_t [36,37]. However, these quantities also affect the total transmission of the light, T ∝ l_t/L in the diffusive regime. In Figs. 4 and 5, we characterized the spatio-spectral PSFs by measuring the 2-photon fluorescence signal, which scales as the square of the excitation intensity. Simply put, doubling the thickness of the medium increases N_λ by a factor of 4, but the total 2-photon signal is then 4 times lower.
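A minimal sketch of the annular (Bessel) mask and of the intensity-balancing weights mentioned above; the helper names and the way intensities are supplied are assumptions for illustration.

```python
import numpy as np

def annulus(n, inner_frac, outer_frac=1.0):
    """Annular pupil mask keeping inner_frac*k1 < k <= outer_frac*k1."""
    ky, kx = np.mgrid[:n, :n]
    k = np.hypot(kx - n / 2, ky - n / 2)
    k1 = n / 2  # pupil radius
    return ((k > inner_frac * k1) & (k <= outer_frac * k1)).astype(float)

def balanced_sum(field_a, field_b, inten_a, inten_b):
    """Weight two phase-conjugated input fields so that the two
    wavelengths contribute equal intensity at the focus.

    inten_a, inten_b: focus intensities measured (or predicted from the
    TMs) when each field is displayed alone."""
    w = np.sqrt(inten_a / inten_b)  # boost the weaker channel
    return field_a + w * field_b

# Bessel mask retaining only high spatial frequencies, k > 0.67 k1,
# as in the experiment described above.
bessel_mask = annulus(64, 0.67)
```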
Therefore, in thick scattering media such as our layers of white paint, there is a trade-off between the transmitted intensity and the number of independent spectral components one can control, which limits the technique to the generation of relatively simple spectral PSFs. However, our approach is very general and may apply to other complex media, such as multimode fibers (MMFs). In an MMF, interference among the guided modes creates wavelength-dependent speckle patterns upon illumination with a coherent source. The spectral correlation width of the speckle δλ scales inversely with the length of the fiber for a fixed numerical aperture [38,39], with almost no penalty on the transmitted intensity. Since optical fibers have been optimized for long-distance transmission with minimal loss, long fibers can be used to provide very small δλ without altering the total transmission. Such a property has already been extensively exploited for turning fibers into high-resolution spectrometers [40,41], but may also be amenable to spatio-spectral pulse shaping with similar spectral resolution.

In conclusion, we have reported on the formulation of an operator, built upon the experimentally measured MSTM, that enables deterministic spatio-spectral focusing of an arbitrary PSF after propagation through a multiply scattering sample. The spectral resolution is given by the dispersion of the medium and the focusing efficiency by the number of controlled pixels on a single input SLM. We have illustrated the strength of this technique by characterizing the transverse and longitudinal properties of engineered PSFs in a temporal focusing application. The method we propose can readily be extended to other complex media, in particular to multimode fibers, which have much higher transmission with similar spectral properties. The possibility of arbitrarily generating complex multi-spectral PSFs through multiply scattering media opens up new opportunities in several fields, in particular for microscopy as well as coherent control and nanophotonics.

FUNDING INFORMATION

This research has been funded by the European Research Council ERC Consolidator Grant (Grant SMARTIES, 724473). SG is a member of the Institut Universitaire de France. HBdA was supported by the LabEx ENS-ICFP: ANR-10-LABX-0010/ANR-10-IDEX-0001-02 PSL*.
Research Progress Related to Aflatoxin Contamination and Prevention and Control of Soils

Aflatoxins are potent carcinogenic compounds, mainly produced by fungal species of the genus Aspergillus in the soil. Because of their stability, they are difficult to remove completely, even under extreme conditions. Aflatoxin contamination is one of the main causes of safety problems in peanuts, maize, wheat and other agricultural products, and it originates from the soil. Through the investigation of soil properties and soil microbial distribution, the sources of aflatoxin are identified, aflatoxin contamination is classified and analysed, and post-harvest crop detoxification and corresponding contamination prevention measures are identified. This includes the team's recent development of the biofungicide ARC-BBBE (Aflatoxin Rhizobia Couple: B. amyloliquefaciens, B. laterosporus, B. mucilaginosus, E. ludwigii) for field application and nanomaterials for post-production detoxification of cereal and oilseed crops, providing an effective and feasible approach for the prevention and control of aflatoxin contamination. Finally, it is hoped that effective preventive and control measures can be applied to a large number of cereal and oilseed crops.

Introduction

Aflatoxin, produced by Aspergillus flavus, is classified as a Group 1 carcinogen by the International Agency for Research on Cancer (IARC). It is one of the most toxic compounds known, acting mainly on human and animal liver tissues and capable of inducing cancer of the liver (primarily), as well as of the pancreas, kidney, bladder and other organs. Aflatoxin may also lead to malnutrition, immunosuppression, and other pathologies with mutagenic, hepatotoxic, and nephrotoxic outcomes [1,2]. Aflatoxin mainly contaminates grain and oil crops, feed, nuts, Chinese herbs and other crops, and then contaminates meat, eggs, milk and other by-products after being ingested by animals. Imports and exports of agricultural and sideline products all over the world are subject to strict limits on aflatoxin, which restricts industrial development and export trade. Aflatoxin contamination not only causes huge economic losses to food crops, but also has a negative impact on the health of consumers. According to data from the Food and Agriculture Organization of the United Nations (FAO), about 25% of crops worldwide are contaminated with moulds and their toxins each year, while about 2% of agricultural products lose their value due to excessive toxin contamination [3]; examples include the 100,000 turkey deaths that first occurred in the 1960s in the UK, and the high annual economic losses caused by aflatoxin contamination of peanuts in Georgia, USA [4].
There have been many cases of mass poisonings of humans and animals caused by aflatoxin contamination of agricultural products and foodstuffs all over the world. Aflatoxin is highly toxic to the liver and central nervous system of humans and animals. It can cause acute poisoning or even death when ingested in large amounts at one time, and can be teratogenic, mutagenic, and carcinogenic when ingested in small doses over a long period of time [5,6]. According to the IARC, about 500 million people in the developing world alone are still at risk of aflatoxin exposure [7]. The European Union, one of the economies with the best food safety management systems today [8], has strict limits for fungal toxin contamination in food and feed, and China also has strict limits for aflatoxin B1 in food (Table 1). The Chinese standard GB 2761-2017, "National Standard for Food Safety Limits for Mycotoxins in Food", requires that the maximum limits for aflatoxin B1 (AFB1) in different cereal products range between 5 and 20 µg/kg, while the maximum limit for AFB1 and aflatoxin M1 (AFM1) in special dietary foods is 0.5 µg/kg, and they should not be detected in infant diets [9]. Therefore, research on the prevention, control and detoxification of aflatoxins in food and feed has become one of the most important aspects of food safety and has attracted widespread attention. In order to prevent and control aflatoxin contamination at the source of crop production, improve the quality and safety of agricultural products in China, and ensure consumer safety and the healthy development of the agricultural industry, this paper summarizes the source, nature and contamination pathways of aflatoxin, as well as the currently effective methods for dealing with aflatoxin in crop production. It is expected that the emerging new technologies for aflatoxin control in soils will be widely used in crop production.

Aspergillus flavus in Agricultural Soils

There are an estimated 7000 species of fungi that inhabit the soil [10]. Luo et al. [11] studied the fungal community composition of the rhizosphere soil of camellia and explored the correlation between rhizosphere soil fungi and soil environmental factors, concluding that camellia diseases could be prevented by regulating soil environmental factors. Wu et al. [12] studied the fungal community structure in the rhizosphere soil of Rehmannia varieties and found that changes in the numbers of some common fungal pathogens, such as A. flavus and Aspergillus niger, might be the cause of soilborne diseases, which suggests that the Rehmannia root system has a certain plasticity with respect to the number, composition and species of fungi in the rhizosphere soil.

So far, there are few reports on the soil fungi of grain and oil crops. Due to the limitations of separation and detection technology, only species suited to artificial environments can be isolated from soil, so culture-based surveys cannot fully reflect the real soil community: only propagules capable of growing and sporulating on the isolation medium used can be detected, and only about 17% of known fungal species can currently be grown successfully in culture [13].
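As a concrete illustration of how such regulatory limits can be encoded, here is a minimal Python sketch. The numeric values are those quoted in this paper (GB 2761-2017 and, further below, the EU limits); the category keys and the function itself are simplifications introduced for illustration.

```python
# Maximum AFB1 limits (ug/kg) quoted in the text; category labels are
# simplified here for illustration only.
GB_2761_AFB1_LIMITS = {
    "peanuts_corn_and_products": 20.0,
    "rice_and_oils": 10.0,
    "grain_beans_fermented_condiments": 5.0,
    "special_dietary_foods": 0.5,
}
EU_LIMITS = {"AFB1": 2.0, "AFT": 4.0}

def afb1_compliant(category: str, measured_ug_per_kg: float) -> bool:
    """Check a measured AFB1 concentration against the GB 2761-2017
    limit for the given (simplified) food category."""
    return measured_ug_per_kg <= GB_2761_AFB1_LIMITS[category]

# Example: a 12 ug/kg AFB1 reading passes for peanuts but fails for rice.
assert afb1_compliant("peanuts_corn_and_products", 12.0)
assert not afb1_compliant("rice_and_oils", 12.0)
```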
A single fungus may produce multiple mycotoxins, and a given toxin may also be produced by multiple fungi; there are over 150 species of fungi that can produce one or more of about 300 potential mycotoxins. As fungal growth is geographically specific, the predominant mycotoxins vary from region to region; e.g., in subtropical and tropical regions, agricultural products and feed are mainly contaminated with aflatoxins and certain ochratoxins. The name A. flavus was first used by Link in 1809 as a generic term for saprophytic moulds in soils [14]. It has a wide range of hosts and has been reported in agriculture on maize, rice, wheat, cottonseed, peanuts and nuts, with peanuts and maize being the most affected [15,16]. The aflatoxin-producing fungi in the soil are diverse, and the distribution characteristics of different toxigenic A. flavus strains differ. So far, the infestation pathways, effects and field distribution characteristics of A. flavus, as the source of aflatoxin production in soil, have not been systematically studied.

Aspergillus flavus is widely present in soil. According to data from the FAO, A. flavus is one of the most important contaminating fungi of cereals worldwide [4]. The optimum growth temperature for A. flavus ranges from 12 °C to 34 °C, while the optimum toxin-producing temperature ranges from 20 °C to 30 °C, within 45° of latitude [17,18]. A. flavus is mostly distributed in the soils of the Yangtze River basin, which carries the greatest risk of contamination [19]. Studies on the distribution of soil microbial flora and toxin contamination have been carried out all over the world. Soil type is also related to aflatoxin pollution to some extent: the distribution of microbial flora differs among soils, as does the degree of mycotoxin pollution, so targeted prevention and control measures can be taken. Wei et al. [20] found both non-toxigenic and toxigenic A. flavus in peanut soil and, based on the distribution characteristics of the strains, evaluated the risk of toxin contamination in different producing areas of China. Zhang et al. [21] studied the genetic characteristics of A. flavus in peanut soil, providing a technical basis for the later screening of non-toxigenic strains and the development of aflatoxin biocontrol fungi.

In recent years, only the isolation and screening of aflatoxigenic fungi in peanut root soil has been reported. Yang et al. [22] predicted the aflatoxin contamination of post-harvest peanuts based on the numbers of aflatoxigenic colonies in the soil of four peanut-producing areas in China, so as to support the later prevention and control of aflatoxin in peanuts. Zhang et al. [19] and Zhu et al. [23] studied the distribution, toxin production and aflatoxin infection of A. flavus in the soil of the main peanut-producing areas of China, which provided a theoretical basis for the establishment of a model for the prevention and control of aflatoxin in China. According to the analysis of aflatoxigenic fungi and their virulence in 11 producing areas of China, the Yangtze River Basin has the largest distribution of aflatoxigenic fungi and the greatest risk of aflatoxin pollution. Because of the unique climatic conditions and geographical environment of the Yangtze River Basin, Hubei Province has also become the largest peanut production area in China.
Zhu et al. [24] also studied the distribution and toxigenic characteristics of A. flavus in the soil of typical peanut-growing areas in Hubei Province, providing a theoretical basis for the establishment of an early warning and prevention model for aflatoxin pollution of peanuts in Hubei Province. Zhang et al. [25] first discussed the relationship between soil types and A. flavus colonies in the peanut production area of Xiangyang, Hubei Province. This work suggested that the number of A. flavus colonies in clay loam was higher, and their virulence greater, than in sandy loam, while sandy loam had a smaller distribution density and infection risk of A. flavus under appropriate irrigation conditions. The results of this study have important guiding significance for field fertilization, irrigation, A. flavus control and other agronomic management practices in local peanut planting.

Aflatoxin Contamination

So far, the mycotoxin contents detected in soil are all in the µg/kg range; for example, the maximum content for zearalenone is 72.1 µg/kg, for deoxynivalenol 32.1 µg/kg, for ochratoxin A 23.7 µg/kg, for nivalenol 6.7 µg/kg, and for aflatoxin 5.5 µg/kg. The retention of mycotoxins in soil is affected by soil type: clay soils readily adsorb toxin compounds, whereas sandy soils tend to leach them [26]. In 1980, C-14 labelling was used to analyze the decomposition rate of aflatoxins in soil; owing to microbial degradation in the soil, AFB1 could no longer be detected after 77 days [27]. Hence, the pollution risk of aflatoxin in soil itself is low, and the main risk lies in the post-harvest storage period. In 1997, the first report of Aspergillus oryzae detected in water storage tanks showed that, although the water was not for drinking, there was still a risk of potential mycotoxin contamination [28]. Although mycotoxins in freshwater samples have been increasingly reported, no reports of mycotoxins detected in sediments were found. Accinelli et al. [29] studied aflatoxin residues in soil and corn crops and proposed that AFB1 degrades quickly in a 28.8 °C soil environment (half-life of 5 days) and that AFB1 is mainly produced by the residues of corn crops on the soil surface. Corn residues may thus be an important source of aflatoxin pollution in soil; therefore, if the return of maize residues to the field after harvest is effectively controlled, aflatoxin pollution will be greatly reduced. In general, soil and sediment remain under-represented in studies of the potential of mycotoxins for environmental contamination.

Aflatoxins are mainly produced by toxigenic fungi such as A. flavus, as well as Aspergillus parasiticus and Aspergillus nomius. Soil and agricultural products, especially grain and oil crops and nuts, are at the greatest risk of aflatoxin contamination. Khan et al. [30] considered soil to be the main source of aflatoxin contamination of crops. Tran-dinh et al. [31] studied A. flavus in Vietnamese soil and found that all the isolated A. flavus strains came from cultivated soil. Aflatoxin pollution is mainly attributable to A. flavus (Table 2), which comes from the soil. Soil fungi are an important part of soil microorganisms. Horn et al. [32] proposed that climate and crop composition affect colony density and aflatoxin toxicity.
A. flavus exists in soil in the form of conidia, sclerotia and mycelia, which serve as the main inoculum for direct infection of peanuts or above-ground crops. Peanut is a crop with heavy aflatoxin infection, since the peanut shells are in direct contact with the soil; aflatoxin pollution in peanuts mainly comes from soil Aspergillus. Dynamic analysis of Aspergillus in rhizosphere soil is therefore of great significance for the pre-harvest prevention and control of aflatoxin pollution.

Many studies have shown that soil is the main source of aflatoxin pollution for most rhizosphere crops [33][34][35], and the direct contact between soil and plant roots, together with the exchange of nutrients, has a great impact on the occurrence of aflatoxin pollution in crops [36]. There are many studies on soil microbial flora, but investigations of A. flavus in soil are fewer. It has also been reported that different soil types host different distributions of A. flavus, and that strain virulence varies widely, thus affecting the aflatoxin pollution of crops [37,38]. Coupled studies have shown that the population structure can be changed through crop rotation and management methods [39]. Horn and Dorner [40] studied the A. flavus strains in peanut-planting soil in some areas of the United States where cotton is widely grown. In some studies, it has been possible to control aflatoxin contamination by adjusting crop rotations and changing soil temperatures.

Aflatoxin is a secondary metabolite produced by multiple Aspergillus species. It is colourless, odourless, and extremely toxic. Its chemical structure, which includes coumarin and difuran rings, and its many derivatives and isomers have been well studied [46]. B aflatoxins are so named because they fluoresce blue, while G aflatoxins fluoresce green when exposed to long-wave UV light (365 nm) [4]. Only about 50% of A. flavus strains produce aflatoxins, and these produce B aflatoxins only, whereas almost all A. parasiticus strains produce both group B and group G aflatoxins (Figure 1) [47]. The main forms of aflatoxin present in crops are AFB1, AFG1, AFB2 and AFG2, with toxicity ordered AFB1 > AFG1 > AFB2 > AFG2. Among them, AFB1 has the most stable structure and is classified as a Class I carcinogen [22]. European standards set limits of AFB1 ≤ 2 µg/kg, and total aflatoxin (AFT) must not exceed 4 µg/kg [48]. According to the Chinese standard GB 2761-2017 [9], "Food Safety National Standard: Food Mycotoxin Limits", the maximum limit for AFB1 is 20 µg/kg in peanuts, corn and their products, 10 µg/kg in rice and oils, and 5 µg/kg in grain, beans, fermented foods, condiments, etc.

According to the relationship between virulence and the size of its sclerotia, A. flavus can also be divided into L and S types. The S-type strains produce numerous small (<400 µm) sclerotia, while the L-type strains produce fewer, larger sclerotia [49]. Most L-type strains are non-aflatoxigenic, while most S-type strains are highly toxigenic [50,51]. Crop rotation and soil temperature can also affect the distribution of the fungal community structure. Ramon et al. [52] found that the number of A. flavus and the proportion of S-type strains increased with soil temperature. Therefore, A. flavus pollution can be controlled by changing soil temperatures and crop rotation.
Aflatoxin Pollution Prevention and Control Measures

Aflatoxin contamination of crops predominantly comes from the soil and is not uncontrollable. Relevant prevention and control measures have been studied and reported in the past two years. One approach involves a post-harvest perspective, whereby rapid detoxification and use of the product reduce the loss of marketable agricultural products. This relies on physicochemical and biological methods to degrade or adsorb aflatoxin (Table S1), such as adsorption, irradiation, ultraviolet treatment, fumigation and/or microbial enzymatic degradation [53]. These methods are relatively simple, quick, easily replicated and efficient; however, there is a risk of waste should the detoxification be unsuccessful. An alternative perspective, pre-harvest prevention, reduces aflatoxin pollution through (1) improvement of the soil microenvironment, thereby reducing the distribution of aflatoxigenic strains, or (2) establishment of a control mechanism in advance of planting aflatoxin-susceptible crops. Pre-harvest control is achieved through the use of biological agents acting on the soil, thereby changing the proportions of microbial strains in the soil. For example, adding the ARC-BBBE biofungicide to the soil can reduce the distribution of aflatoxigenic colonies in peanut soil, thus reducing the total aflatoxin content of peanuts after production [51]. Biological control methods are better at maintaining the nutritional value of the raw material and are mild, irreversible and economically viable. However, the living organisms used as agents can be influenced by the environment and have the potential to alter the soil environment in unwanted ways. A third perspective involves the establishment of an early warning model to be used by growers of crops affected by aflatoxin contamination ahead of planting, so that growers know whether there is a potential risk of aflatoxin contamination. Such modelling systems are safe and effective in the long term; however, there are limitations, such as being restricted to regions with particular geographical characteristics. Currently, very few aflatoxin modelling systems are in use [54,55].

Aflatoxin Prevention and Control Using Biological Agents

Some soil biological control agents use competitive growth, or the secretion of secondary metabolites, to inhibit growth and/or toxin production by A. flavus. Examples of effective microorganisms include fungi such as Aspergillus niger and non-aflatoxigenic A. flavus, as well as lactic acid bacteria (Lactobacillus spp.) [56].

Non-Aflatoxigenic Aspergillus Strains

Dorner et al. [57] began studying the feasibility of non-toxigenic fungi for the control of aflatoxin contamination in peanut cultivation in 1992, with satisfactory results. Researchers such as Horn [58], Cotty [59] and Abbas [60] investigated the effectiveness of different non-aflatoxigenic A. flavus strains as formulations for the biological control of aflatoxin in peanut, cotton and maize fields, respectively. In these studies, the mechanism by which non-aflatoxigenic A. flavus strains inhibit the growth of toxigenic Aspergillus strains was reported to be competitive exclusion.
Mark et al. [61] found that field inoculation with inhibitory strains can reduce the probability of A. flavus contamination both pre- and post-harvest. In the field, a spore preparation applied at 11.2-22.4 kg/hm² can inhibit aflatoxin in peanut crops by up to 90%, and this inhibition can be sustained. In this method, non-aflatoxigenic A. flavus strains are used to inhibit the growth of toxigenic A. flavus strains in the soil, while non-aflatoxigenic strains in crops also offer some protection to crops after harvest [62]. Liu et al. [63], Xing et al. [64] and Zhang et al. [65] studied several non-aflatoxigenic A. flavus strains and their roles in aflatoxin degradation; inhibition levels in the laboratory reached 98%. No finished formulations have yet been applied to field soils, and research has mainly focused on the screening and optimisation of biocontrol fungi in the laboratory and the investigation of control mechanisms.

Yeasts

In 2022, Natarajan et al. [66] isolated 45 strains of yeast from soil that inhibit the growth of A. flavus; the inhibition rate reached 99% in the laboratory, but the approach has not been applied in the field. Biological control by yeasts is widely used for post-production detoxification, exploiting their adsorption capacity to remove aflatoxins from food; no live strain preparations have been applied in actual field trials [67].

Bacteria

In 1985, Coallier-Ascah et al. [68] inoculated Lactococcus lactis into a culture of aflatoxigenic A. flavus spores and did not detect aflatoxin after shaking-bed incubation. In 2008, Petchkongkaew et al. [69] used Bacillus subtilis and Bacillus licheniformis to inhibit the growth and toxin production of aflatoxigenic fungi, and both achieved good control results. In recent years, Zhou et al. [51] studied the ARC-BBBE biological agent and applied three species of Bacillus to the rhizosphere soil. During three years of field demonstrations in the major peanut-producing areas of China, the abundance of toxigenic A. flavus in the soil decreased by 66.5%, the detection rate of aflatoxin in post-harvest peanuts was also greatly reduced, and nodulation with nitrogen fixation of the root system was discovered unexpectedly. Large field demonstration trials have achieved substantial yield increases.

New Material Detoxification Methods

With the rapid development of materials, biology, environmental and energy science, finding a low-cost, fast, safe, efficient and stable green technology for aflatoxin detoxification in grain and oil by effectively combining the above technologies is the direction of future efforts. Nanomaterial scales are equivalent to 10 to 100 atoms packed tightly together. It has been reported that nanomaterials can be used for the elimination of aflatoxins [70]. Through surface modification, nanomaterials with specific adsorption of aflatoxins can be utilized effectively. Liang et al. [71] analysed the feasibility of selective adsorption by magnetic nanoparticles and tested it for the detoxification of aflatoxin in peanut oil. Later, Mao et al. [70,72,73] studied the semiconductor material g-C3N4, on which adsorbed AFB1 could be degraded to carbon dioxide and water after 2 h of sunlight irradiation. They also prepared Z-scheme composites at a later stage and designed effective photocatalysts, showing reduced secondary contamination through aflatoxin toxicity tests on cells.
Early Warning Models

Crop aflatoxin production is directly influenced by environmental temperature and humidity changes at the planting, harvesting, storage, transportation and processing stages. To reduce the risk of aflatoxin pollution, early warning models are often established according to the relationships between environmental temperature and humidity changes and aflatoxin production. As early as 1990, Thai et al. [74] studied the process dynamics of aflatoxin pollution under drought conditions and established a model relating soil temperature to aflatoxin, but it has not been applied in practice. In 1998, the CROPGRO-Peanut model was released in the United States, which comprehensively described the relationship between environmental parameters and the growth of A. flavus [75]. It was applied to aflatoxin risk warning in Niger and was also successfully applied to the prediction of aflatoxin content in Mali [76].

Li et al. [77] established Boltzmann and logistic models to explore the relationship between temperature and humidity during storage and aflatoxin, effectively preventing contamination (a minimal fitting sketch for this kind of model is given below). Jiang et al. [78] put forward recommended measures for whole-process prevention and control of aflatoxins based on GMP standards, covering pre-harvest investigations of peanut varieties, soil, planting methods, pest control and field irrigation before flowering, as well as harvesting equipment and post-harvest considerations such as receiving time, drying and cleaning, transportation conditions and storage environment. Zhang et al. [79] analysed the causes of product hazards, critical control points and control measures from three aspects, and used HACCP to study whole-process control of aflatoxin in exported peanuts. Wu et al. [80] studied pre-harvest, post-harvest and whole-process early warning methods for aflatoxin, which collect data at different locations and from different links in the chain to build models; this is also the future direction of aflatoxin early warning technology.

Existing research shows that whole-process early warning for peanut aflatoxin is feasible. However, across the different regions of China, climates, impact factors and key control points differ. Systematically studying the critical control points of aflatoxin pollution in different producing areas and at different links, establishing early warning models, and achieving early detection and early prevention is a long-term and efficient way to control aflatoxin pollution.
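To illustrate the kind of storage-stage model cited above (logistic dependence of aflatoxin accumulation on storage conditions), here is a minimal curve-fitting sketch. The data, parameter values and alarm threshold are hypothetical and do not come from [77].

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, a_max, k, t0):
    """Logistic growth of aflatoxin content over storage time t (days)."""
    return a_max / (1.0 + np.exp(-k * (t - t0)))

# Hypothetical measurements: storage day vs AFB1 (ug/kg).
days = np.array([0, 7, 14, 21, 28, 35], dtype=float)
afb1 = np.array([0.1, 0.4, 1.8, 6.5, 11.0, 12.3])

params, _ = curve_fit(logistic, days, afb1, p0=[12.0, 0.3, 20.0])

# Early warning: flag the batch if the fitted curve is predicted to
# exceed the 20 ug/kg peanut limit within 60 days of storage.
alarm = logistic(60.0, *params) > 20.0
```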
Conclusions and Prospects

The strong toxicity and carcinogenicity of aflatoxin pose a serious threat to human health and food safety. It is important to find green, environmentally friendly and efficient means to effectively prevent and control aflatoxin contamination. In the long term, to ensure the quality and safety of agricultural products, it is also necessary to establish comprehensive aflatoxin early warning technology covering all stages of crop production, so as to make aflatoxin contamination controllable, achieve prevention and control from farm to fork, and protect the quality and safety of agricultural products from mycotoxin contamination. However, such models have been established late in China and need to be adapted to different countries, regions, links in the chain, and crops; the key control points differ, which makes implementation more difficult. Although there are many ways of controlling aflatoxin, biological control has great advantages in that nutrients are not destroyed and little pollution is caused. For example, the introduction of biofungal agents into the soil not only effectively decreases the abundance of toxigenic aflatoxin-producing fungi, but also maintains the original quality of agricultural products; moreover, it offers high safety, high efficiency and long persistence.

At present, much research has been done on aflatoxin contamination control measures. Among these, the study of the interaction patterns between soil, plants and inter-root microorganisms provides new hints for innovation in biological control methods for aflatoxin contamination in soil. New directions for the biological control of A. flavus in soil include studying the distribution of microflora in different soil environments and resolving the interactions among inter-root microorganisms. Through risk assessment and early warning of contamination risk for crops in different growing regions, more data will be obtained to support the precise biological control of aflatoxin contamination in crops, thus improving the applicability of aflatoxin control mechanisms and reducing the losses caused by aflatoxin contamination.

Figure 1. Chemical structures of the six main types of aflatoxin.

Table 1. Acceptable limits of aflatoxin in crops in several countries.

Table 2. Sources and contamination of toxins in food crops.
Weighted spectral cluster bounds and a sharp multiplier theorem for ultraspherical Grushin operators

We study degenerate elliptic operators of Grushin type on the $d$-dimensional sphere, which are singular on a $k$-dimensional sphere for some $k<d$. For these operators we prove a spectral multiplier theorem of Mihlin-H\"ormander type, which is optimal whenever $2k \leq d$, and a corresponding Bochner-Riesz summability result. The proof hinges on suitable weighted spectral cluster bounds, which in turn depend on precise estimates for ultraspherical polynomials.

2010 Mathematics Subject Classification: 33C55, 42B15, 43A85 (primary); 53C17, 58J50 (secondary).

The first and the second author were partially supported by GNAMPA (Project 2018 "Operatori e disuguaglianze integrali in spazi con simmetrie") and MIUR (PRIN 2016 "Real and Complex Manifolds: Geometry, Topology and Harmonic Analysis"). Part of this research was carried out while the third author was visiting the Dicea, Università di Padova, Italy, as a recipient of a "Visiting Scientist 2019" grant; he gratefully thanks the Università di Padova for the support and hospitality. The authors are members of the Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).

Introduction

In this paper we continue the study of spherical Grushin-type operators started in [CCM1] with the case of the two-dimensional sphere. The focus here is on a family of hypoelliptic operators {L_{d,k}}_{1 ≤ k < d}, acting on functions defined on the unit sphere S^d in R^{1+d}, i.e., on

S^d = { x ∈ R^{1+d} : |x| = 1 }, (1.1)

for some d ≥ 2. As is well known, the groups SO(1+r) with 1 ≤ r ≤ d can be naturally identified with a sequence of nested subgroups of SO(1+d), and correspondingly they act on S^d by rotations. We denote by Δ_r the (positive semidefinite) second-order differential operator on S^d corresponding through this action to the Casimir operator on SO(1+r). The operators Δ_r commute pairwise, and Δ_d turns out to be the Laplace-Beltrami operator on S^d. The operators we are interested in are defined as

L_{d,k} = Δ_d − Δ_k, (1.2)

with k = 1, ..., d−1. By introducing a suitable system of "cylindrical coordinates" (ω, ψ) on S^d, where ω ∈ S^k and ψ = (ψ_{k+1}, ..., ψ_d) ∈ (−π/2, π/2)^{d−k} (see Section 3.3 below for details), one can write L_{d,k} more explicitly as in (1.3), where the Y_r and their formal adjoints Y_r^+ (with respect to the standard rotation-invariant measure σ on S^d) are vector fields depending only on ψ, given in (1.4), and V : (−π/2, π/2)^{d−k} → R is given by

V(ψ) = ∏_{j=k+1}^{d} (1 + tan² ψ_j) − 1. (1.5)

Since V(ψ) vanishes only for ψ = 0, the formulae above show that each L_{d,k} is elliptic away from the k-submanifold S^k × {0} of S^d; the loss of global ellipticity is anyway compensated by the fact that each L_{d,k} is hypoelliptic and satisfies subelliptic estimates, as shown by an application of Hörmander's theorem for sums of squares of vector fields [Hö1]. Indeed, the expression (1.3) reveals the analogy of the operators L_{d,k} with certain degenerate elliptic operators G_{d,k} on R^d, given by

G_{d,k} = Δ_x + |x|² Δ_y, (1.6)

where x, y are the components of a point in R^{d−k}_x × R^k_y and Δ_x, Δ_y denote the corresponding (positive definite) partial Laplacians.

In light of [G1, G2], the operators G_{d,k} are often called Grushin operators; sometimes they are also called Baouendi-Grushin operators.
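The mixed homogeneity of G_{d,k} mentioned below can be made explicit by a standard computation, sketched here in LaTeX; it also recovers the homogeneous dimension Q = d + k that appears later in the introduction. This is an illustrative aside, not part of the original argument.

```latex
% Mixed homogeneity of the Grushin operator (standard computation).
% With x \in \mathbb{R}^{d-k} and y \in \mathbb{R}^{k}, consider the
% anisotropic dilations
\[
  \delta_t(x,y) = (t x, t^2 y), \qquad t > 0 .
\]
% Then G_{d,k} = \Delta_x + |x|^2 \Delta_y satisfies
\[
  G_{d,k}\,(f \circ \delta_t) = t^2\,(G_{d,k} f) \circ \delta_t ,
\]
% since \Delta_x picks up a factor t^2, while |x|^2 \Delta_y picks up
% t^4 from the y-derivatives and loses t^2 from |x|^2 = t^{-2}|tx|^2.
% The Jacobian of \delta_t is t^{(d-k)+2k} = t^{d+k}, which is the
% homogeneous dimension Q = d + k.
```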
Indeed, shortly before the papers by V. V. Grushin appeared, M. S. Baouendi had introduced a more general class of operators also containing the G_{d,k} [Ba]. In these and other works (see, e.g., [FGW, RoSi, DM]), the coefficient |x|² in (1.6) may be replaced by a more general function V(x). As prototypical examples of differential operators with mixed homogeneity, operators of the form (1.6) have attracted increasing interest in the last fifty years; we refer to [CCM1] for a brief list of the main results, focused on the field of harmonic analysis. More recently, the study of Grushin-type operators began to develop also on more general manifolds than R^n, from both a geometric and an analytic perspective [BFI1, BFI2, Pe, BoPSe, BoL, GMP1, GMP2].

In this article, we investigate L^p boundedness properties of operators of the form F(L_{d,k}) in connection with size and smoothness properties of the spectral multiplier F : R → C; here L^p spaces on the sphere S^d are defined in terms of the spherical measure σ, and the operators F(L_{d,k}) are initially defined on L²(S^d) via the Borel functional calculus for the self-adjoint operator L_{d,k}. The study of the L^p boundedness of functions of Laplace-like operators is a classical and very active area of harmonic analysis, with a number of celebrated results and open questions, already in the case of the classical Laplacian in Euclidean space (think, e.g., of the Bochner-Riesz conjecture). Regarding the spherical Grushin operators L_{d,k}, in the case d = 2 and k = 1 a sharp multiplier theorem of Mihlin-Hörmander type and a Bochner-Riesz summability result were obtained in [CCM1]. Here we treat the general case d ≥ 2, 1 ≤ k < d, and obtain the following result (Theorem 1.1). Let η ∈ C^∞_c((0, ∞)) be any nontrivial cutoff, and denote by L^q_s(R) the L^q Sobolev space of (fractional) order s on R.

(i) For all continuous functions supported in

(ii) For all bounded Borel functions F : R → C such that F|_{(0,∞)} is continuous,

Hence, whenever the right-hand side is finite, the operator F(L_{d,k}) is of weak type (1,1) and bounded on L^p(S^d) for all p ∈ (1, ∞).

Part (i) of the above theorem and a standard interpolation technique imply the following Bochner-Riesz summability result (Corollary 1.2). It is important to point out that weaker versions of the above results, involving more restrictive requirements on the smoothness parameters s and δ, could be readily obtained by standard techniques. Indeed the sphere S^d, with the measure σ and the Carnot-Carathéodory distance associated with L_{d,k}, is a doubling metric measure space of "homogeneous dimension" Q = d + k, and the operator L_{d,k} satisfies Gaussian-type heat kernel bounds. As a consequence (see, e.g., [He2, CoSi, DOSi, DzSi]), one would obtain the analogue of Theorem 1.1 with smoothness requirement s > Q/2, measured in terms of an L^∞ Sobolev norm, and the corresponding result for Bochner-Riesz means would give L^p boundedness only for δ > Q |1/p − 1/2|. Since Q > D > D − 1, the results in this paper yield an improvement on the standard result for all values of d and k.

As a matter of fact, in the case k ≤ d/2, the above multiplier theorem is sharp, in the sense that the lower bound D/2 on the order of smoothness s required in Theorem 1.1 cannot be replaced by any smaller quantity. Since L_{d,k} is elliptic away from a negligible subset of S^d, and D = d is the topological dimension of S^d when k ≤ d/2, the sharpness of the above result can be seen by comparison with the Euclidean case via a transplantation technique [Mi, KeStT].
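The displayed statement of Corollary 1.2 does not appear above. Based on the standard interpolation argument the text invokes and on the analogous statement in [CCM1], it plausibly takes the following form; this is a reconstruction to be read as an assumption, not a quotation.

```latex
% Presumable form of Corollary 1.2 (reconstruction, not verbatim):
% for all p \in [1,\infty] and all \delta > (D-1)\,|1/p - 1/2|,
% the Bochner--Riesz means of order \delta associated with L_{d,k},
\[
  \bigl( 1 - t\, L_{d,k} \bigr)_+^{\delta}, \qquad t > 0 ,
\]
% are bounded on L^p(S^d), uniformly in t.
```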
The fact that for subelliptic nonelliptic operators one can often obtain "improved" multiplier theorems, by replacing the relevant homogeneous dimension with the topological dimension in the smoothness requirement, was first noticed in the case of sub-Laplacians on Heisenberg and related groups by D. Müller and E. M. Stein [MüS] and independently by W. Hebisch [He1], and has since been verified in multiple cases. However, despite a flurry of recent progress (see, e.g., [MMü2, CCM1, DM, MMüN] for more detailed accounts and further references), the question whether such an improvement is always possible remains open. The results in the present paper can therefore be considered as part of a wider programme, attempting to gain an understanding of the general problem by tackling particularly significant special cases.

In this respect, it is relevant to point out that Theorem 1.1 above can be considered as a strengthening of the multiplier theorem for the Grushin operators $G_{d,k}$ on $\mathbb{R}^d$ proved in [MSi]: indeed a "nonisotropic transplantation" technique (see, e.g., [M2, Theorem 5.2]) allows one to deduce from Theorem 1.1 the analogous result where $S^d$ and $L_{d,k}$ are replaced by $\mathbb{R}^d$ and $G_{d,k}$.

The structure of the proof of Theorem 1.1 broadly follows that of the analogous result in [CCM1], but additional difficulties need to be overcome here. An especially delicate point is the proof of the "weighted spectral cluster estimates" stated as Propositions 5.1 and 5.2 below, essentially consisting in suitable weighted $L^1 \to L^2$ norm bounds for "weighted spectral projections" associated with bands of unit width of the spectrum of $L_{d,k}$. These can be thought of as subelliptic analogues of the Agmon-Avakumovič-Hörmander spectral cluster estimates for the elliptic Laplacian $\Delta_d$, which are valid more generally when $\sqrt{\Delta_d}$ is replaced with an elliptic pseudodifferential operator of order one on a compact $d$-manifold [Hö2], and are the basic building block for a sharp multiplier theorem for elliptic operators on compact manifolds and related restriction-type estimates [So1, So2, SeeSo, FSab].

Thanks to pseudodifferential and Fourier integral operator techniques, estimates of the form (1.8) can be proved for elliptic operators in great generality, but these techniques break down when the ellipticity assumption is weakened. Nevertheless, alternative ad hoc methods may be developed in many cases, based on a detailed analysis of the spectral decomposition of the operator under consideration, often made possible by underlying symmetries.

In the case of the spherical Grushin operator $L_{d,k}$, as a consequence of its spectral decomposition in terms of joint eigenfunctions of the operators $\Delta_d, \dots, \Delta_k$, the integral kernel of the "weighted projection" in (1.7) involves sums of $(d-k)$-fold tensor products of ultraspherical polynomials. This is a substantial difference from the case considered in [CCM1] (where $d-k = 1$) and requires new ideas and greater care. Section 5 of this paper is devoted to the proof of these estimates. As in [CCM1], here we make fundamental use of precise estimates for ultraspherical polynomials, which are uniform in suitable ranges of indices. These estimates, which are consequences of the asymptotic approximations of [O1, O2, O3, BoyD], could be of independent interest, and their derivation is presented in an auxiliary paper [CCM2].
In the context of subelliptic operators on compact manifolds, "weighted spectral cluster estimates" were first obtained in the seminal work of Cowling and Sikora [CoSi] for a distinguished sub-Laplacian on SU(2), leading to a sharp multiplier theorem in that case; their technique was then applied to many different frameworks [CoKSi, CCMS, M2, ACMM]. However, the general theory developed in [CoSi], based on spectral cluster estimates involving a single weight function, does not seem to be directly applicable to the spherical Grushin operator $L_{d,k}$ (which, unlike the sub-Laplacian of [CoSi], is not invariant under a transitive group of isometries of the underlying manifold). For this reason, here we take the opportunity to establish an "abstract" multiplier theorem, which applies to a rather general setting of self-adjoint operators on bounded metric measure spaces satisfying the volume doubling property, and extends the analogous result in [CoSi] to the framework of a family of scale-dependent weights.

It would be of great interest to establish whether Theorem 1.1 is sharp when $k > d/2$, or alternatively to improve on it. The corresponding question for the Grushin operators $G_{d,k}$ on $\mathbb{R}^d$ has been settled in [MMü1]; based on that result, one may expect that Theorem 1.1 and Corollary 1.2 actually hold with $D$ replaced by $d$. However, when the dimension $k$ of the singular set is larger than the codimension, the approach developed in this paper, which is based on a "weighted Plancherel estimate with weights on the first layer", does not suffice to obtain such a result, and new methods (inspired, for instance, by those in [MMü1] and involving the "second layer" as well) appear to be necessary.

The paper is organised as follows. In Section 2 we state our abstract multiplier theorem, of which Theorem 1.1 will be a direct consequence; in order not to burden the exposition, we postpone the proof of the abstract theorem to an appendix (Section 7). In Section 3 we introduce the spherical Laplacians and the Grushin operators on $S^d$. A precise estimate for the sub-Riemannian distance $\varrho$ associated with the Grushin operator $L_{d,k}$ is also given. Moreover, we introduce a system of cylindrical coordinates on $S^d$ which is key to our approach. In Section 4 we recall the construction of a complete system of joint eigenfunctions of $\Delta_d, \dots, \Delta_k$ on $S^d$, in terms of which we explicitly write down the spectral decomposition of the Grushin operator $L_{d,k} = \Delta_d - \Delta_k$. We also prove some Riesz-type bounds for $L_{d,k}$, and we state the refined estimates for ultraspherical polynomials, which are the building blocks in the joint spectral decomposition. Section 5 is devoted to the proof of the crucial "weighted spectral cluster estimates" for the Grushin operators $L_{d,k}$. In Section 6 we use the Riesz-type bounds and the weighted spectral cluster estimates to prove "weighted Plancherel-type estimates" for the Grushin operator $L_{d,k}$. After this preparatory work, the proof of Theorem 1.1, which boils down to verifying the assumptions of the abstract theorem, concludes the section.

Throughout the paper, for any two nonnegative quantities $X$ and $Y$, we use $X \lesssim Y$ or $Y \gtrsim X$ to denote the estimate $X \leq C Y$ for a positive constant $C$. The symbol $X \simeq Y$ is shorthand for $X \lesssim Y$ and $Y \lesssim X$. We use variants such as $\lesssim_k$ or $\simeq_k$ to indicate that the implicit constants may depend on the parameter $k$.
An abstract multiplier theorem We state an abstract multiplier theorem, which is a refinement of [CoSi,Theorem 3.6] and [DOSi,Theorem 3.2].The proof of our main result, Theorem 1.1, for the operator L d,k will follow from this result. As in [CoSi, DOSi], for all q ∈ [2, ∞], N ∈ N \ {0} and F : R → C supported in [0, 1], we define the norm F N,q by Moreover, by K T we denote the integral kernel of an operator T . Since the subject is replete with technicalities, which could weigh on the discussion, we defer the proof of the abstract theorem to an appendix (Section 7). Let us just observe that Assumption (b) only requires a polynomial decay in space (of arbitrary large order) for the heat kernel; hence this assumption is weaker than the corresponding ones in [DOSi], where Gaussian-type (i.e., superexponential) decay is required, and in [CoSi], where finite propagation speed for the associated wave equation is required (which, under the "on-diagonal bound" implied by (2.3), is equivalent to "second order" Gaussian-type decay [Si]), and matches instead the assumption in [He2] (see also [M2,Section 6]). Another important feature of the above result, which is crucial for the applicability to the spherical Grushin operators L d,k considered in this paper, is the use of a family of weight functions, where the weight π r may depend on the scale r in a nontrivial way; this constitutes another important difference to [CoSi], where the weights considered are effectively scalar multiples of a single weight function (compare Assumptions (d) and (e) above with [CoSi,Assumptions 2.2 and 2.5]). The attentive reader will have noticed that it is actually enough to verify Assumptions (c) and (d) for scales r = 1/N for N ∈ N \ {0} (indeed, one can redefine π r as π 1/⌊1/r⌋ when 1/r / ∈ N); the slightly redundant form of the above assumptions is just due to notational convenience. Spherical Laplacians and Grushin operators 3.1.The Laplace-Beltrami operator on the unit sphere.For d ∈ N, d ≥ 1, let S d denote the unit sphere in R 1+d , as in (1.1).The Euclidean structure on R 1+d induces a natural, rotation-invariant Riemannian structure on S d .Let σ denote the corresponding Riemannian measure, and ∆ d the Laplace-Beltrami operator on the unit sphere S d in R 1+d .It is possible (see, e.g., [Ge]) to give a more explicit expression for ∆ d , namely, where Indeed the rotation group SO(1 + d) acts naturally on R 1+d and S d ; via this action, the vector fields Z j,r (0 ≤ j < r ≤ d) correspond to the standard basis of the Lie algebra of SO(1 + d), and ∆ d corresponds to the Casimir operator.The commutation relations are easily checked and correspond to those of the Lie algebra of SO(1 + d). 3.2. A family of commuting Laplacians and spherical Grushin operators.By (3.2), the operator ∆ d commutes with all the vector fields Z j,r (this corresponds to the fact that the Casimir operator is in the centre of the universal enveloping algebra of the Lie algebra of SO(1 + d)); in particular it commutes with each of the "partial Laplacians" for r = 1, . . ., d. Assume that d ≥ 2. We now observe that, for r = 1, . . ., d − 1, we can identify SO(1 + r) with a subgroup of SO(1 + d), by associating to each A in SO(1 + r) the element A 0 0 I of SO(1 + d).Via this identification, the operator ∆ r corresponds to the Casimir operator of SO(1 + r), and therefore it commutes with all the operators ∆ s for s = 1, . . ., r. In conclusion, the operators ∆ 1 , . . 
., ∆ d commute pairwise, and admit a joint spectral decomposition.In what follows we will be interested in the study of the Grushin-type operator r} is the family of vector fields appearing in the sum (3.4), then it is easily checked that, for all z ∈ S d , (3.5) On the other hand, the commutation relations (3.2) give that for all j, j ′ = 0, . . ., d − 1; in particular the vector fields in Z d,k , together with their Lie brackets, span the tangent space of S d at each point.In other words, the family of vector fields Z d,k satisfies Hörmander's condition and (together with the Riemannian measure σ) determines a (non-equiregular) 2-step sub-Riemannian structure on S d with the horizontal distribution H d,k described in (3.5).The corresponding sub-Riemannian norm on the fibres of H d,k is given, for all p ∈ S d and v ∈ H d,k p , by For more details on sub-Riemannian geometry we refer the reader to [ABB, BeRi, CaCh, Mo]. Cylindrical coordinates. In order to study the operator L d,k , it is useful to introduce a system of "cylindrical coordinates" on S d that will provide a particularly revealing expression for L d,k in a neighbourhood of the singular set E d,k .For all ω ∈ S d−1 and ψ ∈ [−π/2, π/2], let us define the point ⌊ω, ψ⌉ ∈ S d by ⌊ω, ψ⌉ = ((cos ψ)ω, sin ψ). (3.7) Away from ψ = ±π/2, the map (ω, ψ) → ⌊ω, ψ⌉ is a diffeomorphism onto its image, which is the sphere without the two poles; so (3.7) can be thought of as a "system of coordinates" on S d , up to null sets.In these coordinates, the spherical measure σ on S d is given by where σ d−1 is the spherical measure on S d−1 .Moreover, the Laplace-Beltrami operator may be written in these coordinates as where ∆ d−1 , given by (3.3), corresponds to the Laplace-Beltrami operator on S d−1 (see, e.g., [V,§IX.5]). We now iterate the previous construction.Let k ∈ N such that 1 ≤ k < d be fixed.Starting from (3.7), we can inductively define the point In these coordinates, the spherical measure σ on S d is given by where σ k is the spherical measure on S k .Moreover, starting from (3.8), we get inductively that where again ∆ k is the operator given by (3.3). In particular, the Grushin operator L d,k = ∆ d − ∆ k on S d may be written in these coordinates as in (1.3), where the vector fields Y r and the function V : (−π/2, π/2) d−k → R are defined by (1.4) and (1.5) respectively.Note that V(ψ) vanishes only for ψ = 0, corresponding to the singular set E d,k .We also remark that 1 cos The formula (1.3) for the sub-Laplacian corresponds to a somewhat more explicit expression for the sub-Riemannian norm (3.6) on the fibres of the horizontal distribution, which is better written by identifying, via the "coordinates" (3.9), the tangent space T ⌊ω,ψ⌉ and, for all (v, w) ∈ H d,k ⌊ω,ψ⌉ , its sub-Riemannian norm satisfies 3.4.The sub-Riemannian distance.Thanks to (3.13), we can obtain a precise estimate for the sub-Riemannian distance ̺ associated with the Grushin operator L d,k .This is the analogue of [RoSi,Proposition 5.1], that treats the case of "flat" Grushin operators on R n , and [CCM1, Proposition 2.1], that treats the case of L 2,1 on S 2 .In the statement below we represent the points of the sphere in the form ⌊ω, ψ⌉ for ω ∈ S k , ψ ∈ [−π/2, π/2] d−k , as in (3.9).We also denote by ̺ R,S k and ̺ R,S d the Riemannian distances on the spheres S k and S d . 
Consequently, the σ-measure V (⌊ω, ψ⌉ , r) of the ̺-ball centred at ⌊ω, ψ⌉ with radius Note now that the expression in the right-hand side of (3.14) defines a continuous function Φ : Hence, in order to prove the equivalence (3.14), it is enough to show that Φ and ̺ are locally equivalent at each point p 0 ∈ Ω d,k , and then apply ).The associated horizontal distribution H G and sub-Riemannian metric are given by . By the equivalence of norms, up to shrinking A, we may assume that , where the norms in (3.19) are those determined by the Riemannian structures of S k and R k ; similarly, we may also assume that for all p, p ′ ∈ U , where the latter equivalence readily follows from (3.18) and (3.20). A complete system of joint eigenfunctions Let d, k ∈ N with 1 ≤ k < d.In this section we briefly recall the construction of a complete system of joint eigenfunctions of ∆ d , . . ., ∆ k on S d .This will give in particular the spectral decomposition of the Grushin operator This construction is classical and can be found in several places in the literature (see, e.g., [V, Ch. IX] or [EMOT, Ch. XI]), where explicit formulas for spherical harmonics on spheres of arbitrary dimension are given, in terms of ultraspherical (Gegenbauer) polynomials.The discussion below is essentially meant to fix the notation that will be used later. 4.1.Spectral theory of the Laplace-Beltrami operator.We first recall some well known facts about the spectral theory of ∆ d (see, e.g., [StW,Ch. 4] or [AxBR,Ch. 5]).The operator ∆ d is essentially self-adjoint on L 2 (S d ) and has discrete spectrum: its eigenvalues are given by where ℓ ∈ N d , and (4.4) The corresponding eigenspaces, denoted by H ℓ (S d ), consist of all spherical harmonics of degree ℓ ′ = ℓ − (d − 1)/2, that is, of all restrictions to S d of homogeneous harmonic polynomials on R 1+d of degree ℓ ′ ; they are finite-dimensional spaces of dimension for ℓ ∈ N d (the last identity only makes sense when d > 1), and in particular (this estimate is also valid when d = 1, provided we stipulate that 0 0 = 1).Since ∆ d is self-adjoint, its eigenspaces are mutually orthogonal, i.e., Here the normalization constant c ℓm is chosen so that that is, by means of [Sz,(4.3.3)], Then, for all (ℓ, m) ∈ I d , we obtain an injective linear map ), which is an isometry with respect to the Hilbert space structures of L 2 (S d−1 ) and L 2 (S d ), and a decomposition [V, p. 466, eq.(1)]).The summands in the right-hand side of (4.12) are joint eigenspaces of ∆ d and ∆ d−1 of eigenvalues λ d ℓ and λ k m respectively; hence they are pairwise orthogonal in L 2 (S d ). 4.3.Joint eigenfunctions of ∆ d , . . ., ∆ k .We go back to the general case 1 ≤ k < d and we look for a complete system of joint eigenfunctions of ∆ d , . . ., ∆ k . We note that the operators of the form (4.17) include those in the functional calculus of the Grushin operator where 4.4.Riesz-type bounds.In this section we prove certain weighted L 2 bounds involving the joint functional calculus of ∆ d , . . 
., ∆ k , which, in combination with the weighted spectral cluster estimates in Section 5 below, play a fundamental role in satisfying the assumptions on the weight in the abstract theorem and proving our main result.A somewhat similar estimate was obtained in [CCM1, Lemma 2.5] in the case d = 2 and k = 1.Differently from [CCM1], the estimate in Proposition 4.1 below is proved for arbitrarily large powers of the weight; this prevents us from using the elementary "quadratic form majorization" method exploited in the previous paper, and requires a more careful analysis, based on the explicit eigenfunction expansion developed in the previous sections. For later use, it is convenient to reparametrise the functions X d ℓ,m defined in (4.8): namely, we introduce the functions where (ℓ, m) ∈ I d and c ℓm is given by (4.10).Let t d,d : In particular, for all (ω, ψ) where ψ = (ψ k+1 , . . ., ψ d ).Finally, we set, for 1 ≤ k < d, (4.23) Proof.By interpolation, it is enough to prove the estimate in the case N ∈ N. 4.5. Estimates for ultraspherical polynomials.In this section we present a number of estimates for the functions X d ℓ,m (or rather, their reparametrisations X d ℓ,m from (4.21)), which will play a crucial role in the subsequent developments. We first state some basic uniform bounds that follow from the previous discussion (see (4.7) and (4.12)).In the statement below, we convene that 0 0 = 1. /2 for all (ℓ, m) ∈ I d .More refined pointwise estimates can be derived from asymptotic approximations of ultraspherical polynomials in terms of Hermite polynomials and Bessel functions, obtained in works of Olver [O3] and Boyd and Dunster [BoyD] in the regimes m ≥ ǫℓ and m ≤ ǫℓ respectively, where ǫ ∈ (0, 1).Here and subsequently, for all ℓ, m ∈ R with ℓ = 0 and 0 ≤ m ≤ ℓ, a ℓ,m and b ℓ,m will denote the numbers in [0, 1] defined by b ℓ,m = m ℓ (4.32) and The points ±a ℓ,m ∈ [−1, 1] play the role of "transition points" for the functions In the case d = 2, the derivation of the estimates in Theorem 4.3 from the asymptotic approximations in [O3,BoyD] is presented in [CCM1, Section 3]; a number of variations and new ideas are required in the general case d ≥ 2, and we refer to [CCM2] for a complete proof (indeed, in [CCM2] a stronger decay is proved in the regime m ≥ ǫℓ for |x| ≥ 2a ℓ,m than the one given in (4.34)).Here we only remark that combining the above estimates yields the following bound. ) Proof.Let ǫ ∈ (0, 1) be a parameter to be fixed later.If m ≤ ǫℓ, the desired estimates immediately follow from (4.35), by taking any c ≤ log 2 (indeed, note that (1 + m) 4/3 ≥ 1 + m). 
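The refined pointwise bounds of Theorem 4.3 are developed in [CCM2] and are not reproduced here. As a purely numerical illustration of the concentration phenomenon they quantify — a suitably weighted ultraspherical polynomial peaks near its "transition points" $\pm a_{\ell,m}$ and is small beyond them — one can evaluate Gegenbauer polynomials directly. Everything in this sketch (the degree/order relation, the weight exponent, and the formula for $a$) is an illustrative assumption, not the paper's normalisation of the $X^d_{\ell,m}$:

```python
import numpy as np
from scipy.special import eval_gegenbauer

# Numerical illustration only: with a (1 - x^2)^(m/2)-type weight, an
# ultraspherical (Gegenbauer) polynomial concentrates near its transition
# points.  The indices and the weight below are illustrative choices.
ell, m = 60, 20
n, alpha = ell - m, m + 0.5          # assumed degree/order relation
b = m / ell                          # b_{l,m} = m / l, as in (4.32)
a = np.sqrt(1.0 - b**2)              # assumed form of the transition point

x = np.linspace(-1.0, 1.0, 4001)
vals = np.abs((1.0 - x**2) ** (m / 2) * eval_gegenbauer(n, alpha, x))
x_peak = x[np.argmax(vals)]          # where the weighted polynomial peaks
print(f"predicted transition point a = {a:.3f}; "
      f"observed peak at |x| = {abs(x_peak):.3f}")
```

With these parameters the peak of the weighted polynomial sits close to the predicted value of $a$, which is the qualitative behaviour that the estimates (4.34)-(4.36) make uniform and quantitative.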
Weighted spectral cluster estimates As a consequence of the estimates in Section 4.5, we obtain "weighted spectral cluster estimates" for the Grushin operators where X d r,s has been defined in (4.21).We are interested in bounds for suitable weighted sums of the X d,k ℓ,m for indices ℓ, m such that the eigenvalue λ d,k ℓ,m of L d,k ranges in an interval of unit length (whence the name "spectral cluster").The bounds that we obtain are different in nature according to whether m ≤ ǫℓ or m ≥ ǫℓ for some fixed ǫ ∈ (0, 1), and are presented as separate statements.We remark that, in the case m ≤ ǫℓ, the eigenvalue λ d,k ℓ,m of L d,k is comparable with the eigenvalue λ d ℓ of ∆ d ; consequently, the range m ≤ ǫℓ will be referred to as the "elliptic regime", while the range m ≥ ǫℓ will be called the "subelliptic regime".Proposition 5.1 (subelliptic regime).Let ǫ ∈ (0, 1) and d where X d,k ℓ,m was defined in (5.1).Analogous estimates are proved in [CCM1, Section 4] in the case d = 2 and k = 1; in that case, each of the products in (5.1) reduces to a single factor.Treating the general case, with multiple factors, presents substantial additional difficulties, as one may appreciate from the discussion below. 5.1.The subelliptic regime.Here we prove Proposition 5.1.To this aim, we first present a couple of lemmas that will allow us to perform a particularly useful change of variables in the proof. Lemma 5.4.Let w ∈ R n and define the matrix M (w) = (m i,j (w)) n i,j=1 by Proof.Observe that m i,j (w) = δ i,j + ρ i,j w j , where Consequently, if S n denotes the group of permutations of the set {1, . . ., n} and ǫ(σ) denotes the signature of the permutation σ, then where S c = {1, . . ., n} \ S. We note that (ρ l,m ) |S| l,m=1 is a skewsymmetric matrix, so its determinant vanishes when |S| is odd; if |S| is even, instead, its determinant is the square of its pfaffian, and using the Laplace-type expansion for pfaffians (see, e.g., [Ar,§III.5,p. 142]) one can see inductively that the determinant is 1. Lemma 5.5.Let Ω = {v ∈ R n : vj = −1 for all j = 1, . . ., n}, where Let v → w be the map from Ω to R n defined by Moreover, for all ǫ ∈ (0, 1), the map v → w is injective when restricted to Proof.From the definition it is immediate that where M (w) = {m j,s (w)} n j,s=1 is the matrix defined in Lemma 5.4, so and the desired expression for the determinant follows from Lemma 5.4. Note that, if v ∈ Ω ǫ , 0 ≤ v j , |v j | ≤ j v j ≤ ǫ < 1, so 1 + vj > 0 and Ω ǫ ⊆ Ω.In addition, the equations w j = v j /(1 + vj ) are equivalent to v j − w j vj = w j , that is, Since w j = v j /(1 + vj ) ≥ 0, from Lemma 5.4 it follows that det M (w) ≥ 1, so the matrix M (w) is invertible and the above equation is equivalent to v = M (w) −1 w; in other words, if v ∈ Ω ǫ , then v is uniquely determined by its image w via the map v → w, that is, the map restricted to Ω ǫ is injective. Proof of Proposition 5.1.We start by observing that, for all (ℓ, k) ∈ I and in particular ℓ j ≃ ℓ 1, for all j ∈ {k, . . ., d}. (5.6) We also note that since it will suffice to apply (5.7) times, with i replaced by i, i + 1, . . ., i + h − 1, respectively.Due to (4.2), we may restrict without loss of generality to x ∈ [0, 1] d−k .In addition, for each fixed i, the sum in the left-hand side of (5.7) is finite, since ℓ d − ℓ k 1 and therefore ) then shows that the estimate (5.7) is trivially true for each fixed i (with a constant dependent on i), and therefore it is enough to prove it for i sufficiently large. 
It is convenient to reindex the above sum.Let us set and let us write (5.8) in particular (5.9) The condition ǫℓ d ≤ ℓ k is then equivalent to ∈ (0, 1), and implies, by (5.5), q j ≤ ǭ4 (p + qj ) (5.10) for j = 1, . . ., d− k.As previously discussed, it will be enough to prove the estimate for i sufficiently large; in the following we will assume that Let us first consider the range (5.12) In light of (4.34), the inequalities (5.13) hold for all j ∈ {1, . . ., d − k}.Moreover, for one of the quantities the better bound |x| −1/2 exp(−cp|x| 2 ) holds for some c > 0, thanks to the second estimate in (4.34) and to (5.6).As a consequence, we obtain for arbitrarily large N ∈ N. Note then that the conditions (5.11) and (5.9) imply since i|x| 1 and k − 2α > 0, provided N is large enough.Note that, in estimating the sum in p, we used the fact that the interval Let us now discuss the range (5.14) We first note that (5.14) implies Note that, by (5.9), for all j = 1, . . ., d − k, (5.15)where ϕ(w) = 4w/(1 + w) 2 .Note that the map ϕ : [0, 1] → [0, 1] is an increasing bijection, such that w ≤ ϕ(w) ≤ 4w; its derivative is given by ϕ ′ (w) = 4 1−w (1+w) 3 and vanishes only at w = 1.As a consequence, setting xj = ϕ −1 (x 2 j ), with j ∈ {1, . . ., d − k}, one has xj ≃ |x j |; moreover, in light of (5.15) and (5.10), In particular, in this range, by (4.34), for all j = 1, . . ., d − k, where Ξ is defined as in (5.4).Then where x = (x d , . . ., xk+1 ), q = (q 1 , . . ., q d−k ) and It is easily seen that |∂ p (1/p)|, |∂ p (q j /(p + qj ))| 1/p for all j = 1, . . ., d − k, on the range of summation (note that q j + |q j | ≤ Q ≤ ǭ2 i 2 /Q ≤ ǭ2 p and ǭ < 1, whence p + qj ≃ p q j 1).Thus, by Lemma 5.3, where the change of variables p = u 2 /Q was used, and It is easily checked that for all j = 1, . . ., d − k, on the range of summation (note that |q j |Q ≤ Q 2 ≤ ǭ2 i 2 ≤ ǭ2 u 2 and ǭ < 1, so u 2 + qj Q ≃ u 2 ), and therefore, by Lemma 5.3 and the Leibniz rule, Hence, by [DM,Lemma 5.7], where the change of variables q j = pv j was used, and and the fact that the interval [i/ √ V , (i + 1)/ √ V ] has length V −1/2 was used.We can now use the change of variables observing that w j ≃ v j for all j ∈ {1, . . ., d − k}, and (see Lemma 5.5 below) on the domain of integration (here we use the fact that v j , |v j | ∈ [0, ǭ2 ] for all j ∈ {1, . . ., d − k} and ǭ < 1), so In order to conclude, it is enough to bound the last integral with a multiple of min{i, |x| −1 } k−2α .To do this, it is convenient to split the domain of integration according to whether w j is larger or smaller than 2x 2 d−j+1 for each j = 1, . . ., d − k, and according to which j corresponds to the maximum component w j of w.In other words, where J c = {1, . . ., d − k} \ J.We estimate separately each summand, depending on the choice of j * ∈ {1, . . ., d − k} and J ⊆ {1, . . ., d − k}, noting that, in the respective domain of integration, |x for all j ∈ J. Suppose first that j * ∈ J, and set which is the desired estimate.Here we used that and we are done. 5.2.The elliptic regime.We now discuss the proof of Proposition 5.2.We first observe that a straightforward iteration of Proposition 4.2(i) yields the following estimate. 
Proof of Proposition 5.2.Due to the symmetry property of Jacobi polynomials (4.2), we may restrict to d , it suffices to prove the estimate , so (5.3) follows by applying (5.17 We first consider the terms in the sum with ℓ k = 0 (observe that this may happen only for k = 1).The condition ℓ 2 d ∈ [i 2 , (i + 1) 2 ] uniquely determines the value of ℓ d .Using the estimate in Proposition 4.2(ii) to bound X k+1 ℓ k+1 ,0 (x k+1 ) in the left-hand side of (5.17) and then applying Lemma 5.6, we obtain In what follows, we shall therefore assume ℓ k > 0. Define y j := 1 − x 2 j for j = k + 1, . . ., d.Let us first consider the case where for some j * ∈ {k + 1, . . ., d}. By (4.36), in this case, , where we also extended the sum in ℓ j * −1 to all N j * −1 .As already observed, due to the condition , for a fixed ℓ k i the sum over ℓ d essentially contains only one term and ℓ d ≃ i.Thus, by applying Lemma 5.6 first to the sum over ℓ j * , • • • , ℓ d−1 and then to the sum over ℓ j * − 2, . . ., ℓ k , we get From here on, we shall assume for all j ∈ {k + 1, . . ., d}. Note that the above inequality implies that ℓ j−1 ≃ ℓ j y j for all j > k * , and moreover, by Corollary 4.4, for k * < j ≤ d, where Ξ was defined in (5.4).Assume first that k * > k.Then whence, by Corollary 4.4, Note now that, for j = k * + 1, . . ., d, in the range of summation; moreover the interval [ i 2 + ℓ 2 k , (i + 1) 2 + ℓ 2 k ] has length ≃ 1 and its endpoints are ≃ i, because ℓ k i.Hence, in view of Lemma 5.3, we can apply [DM,Lemma 5.7] to the inner sum and obtain The change of variables t j−1 = ℓ j−1 /(ℓ j y j ), j = k * + 1, . . ., d, then gives where Lemma 5.6 was applied to the sum in (ℓ k * , . . ., ℓ k ) and the fact that k * ≥ k + 1 ≥ 2 was used. We now consider the case k * = k.Here, by (5.18), where the last inequality follows from [CCM1,Lemma 4.1] together with Lemma 5.3, the fact that in the range of summation and the fact that (since The change of variables u = ℓ 2 d − ℓ 2 k in the inner integral then gives where ℓ = (ℓ d−1 , . . ., ℓ k ), y = (y d , . . ., y k+1 ), and We now observe that, since u ∈ [i, i + 1], and on the range of summation.Thanks to Lemma 5.3, we can apply [DM,Lemma 5.7] to majorize the inner sum with the corresponding integral and obtain that The change of variables ℓ j = uy j+1 • • • y d τ j , j = k, . . ., d − 1, then gives Finally, the change of variables and we are done. Proof.Due to the compactness of S d , both (6.7) and (6.8) are obvious for r ≥ 1. In the following we assume therefore that r < 1. .16) The implicit constants may depend on ε.Proof.Note that the sub-Riemannian distance ̺ and the Riemannian distance ̺ R,S d are locally equivalent far from the singular set E d,k : since H d,k p = T p M for all p ∈ S d \ E d,k (see (3.5)), and the Riemannian and sub-Riemannian inner products on T p M depend continuously on p, it is enough to apply [CCM1, Lemma 2.3] by choosing as M and N the Riemannian and sub-Riemannian S d respectively, and as F the identity map restricted to any open subset U of S with compact closure not intersecting orthogonal projection π d ℓ d ,...,ℓ k of L 2 (S d ) onto the joint eigenspace of ∆ d , . . ., ∆ k of eigenvalues λ d ℓ d , . . 
., $\lambda^k_{\ell_k}$ is given by (4.29), which is (4.23) in the case $k = d-1$. Let now $2 \leq r \leq d$. By the discussion in Section 3, up to null sets we can identify $S^d$ with $S^r \times [-\pi/2, \pi/2]^{d-r}$ with coordinates $(\omega, \psi)$ and measure $\cos^{d-1}\psi_d \cdots \cos^{r}\psi_{r+1} \, d\psi \, d\omega$. Consequently the space $L^2(S^d)$ is the Hilbert tensor product of the spaces $L^2(S^r)$ and $L^2([-\pi/2, \pi/2]^{d-r}, \cos^{d-1}\psi_d \cdots \cos^{r}\psi_{r+1} \, d\psi)$. Hence the inequality (4.29), applied with $d = r$, yields a corresponding inequality on the sphere $S^d$. Namely, if we assume $\epsilon\ell \leq m$, then, for all $(\ell_d, \dots, \ell_k) \in J^{(k)}_d$ with $\ell_d = \ell$ and $\ell_k = m$,
$$\epsilon \, \ell_{j+1} \leq \ell_j, \qquad j \in \{k, \dots, d-1\}, \tag{5.5}$$
and, for $j = 1, \dots, d-k$, $\ell_{d-j+1} + \ell_{d-j} = p + \bar{q}_j$, ... $\max_{j \in \{1,\dots,d-k\}} |x_{d-j+1}|$.
2020-09-08T01:01:13.304Z
2020-09-07T00:00:00.000
{ "year": 2020, "sha1": "2c9189b38b45f6fd47a6a93a3d686098069029f3", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/imrn/advance-article-pdf/doi/10.1093/imrn/rnab007/38884518/rnab007.pdf", "oa_status": "HYBRID", "pdf_src": "ArXiv", "pdf_hash": "2c9189b38b45f6fd47a6a93a3d686098069029f3", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
248098
pes2o/s2orc
v3-fos-license
Risk of venous thromboembolism in people admitted to hospital with selected immune-mediated diseases: record-linkage study

Background: Venous thromboembolism (VTE) is a common complication during and after a hospital admission. Although it is mainly considered a complication of surgery, it often occurs in people who have not undergone surgery, with recent evidence suggesting that immune-mediated diseases may play a role in VTE risk. We, therefore, decided to study the risk of deep vein thrombosis (DVT) and pulmonary embolism (PE) in people admitted to hospital with a range of immune-mediated diseases.

Methods: We analysed databases of linked statistical records of hospital admissions and death certificates for the Oxford Record Linkage Study area (ORLS1: 1963 to 1998 and ORLS2: 1999 to 2008) and the whole of England (1999 to 2008). Rate ratios for VTE were determined, comparing immune-mediated disease cohorts with comparison cohorts.

Results: Significantly elevated risks of VTE were found, in all three populations studied, in people with a hospital record of admission for autoimmune haemolytic anaemia, chronic active hepatitis, dermatomyositis/polymyositis, type 1 diabetes mellitus, multiple sclerosis, myasthenia gravis, myxoedema, pemphigus/pemphigoid, polyarteritis nodosa, psoriasis, rheumatoid arthritis, Sjogren's syndrome, and systemic lupus erythematosus. Rate ratios were considerably higher for some of these diseases than others: for example, for systemic lupus erythematosus the rate ratios were 3.61 (2.36 to 5.31) in the ORLS1 population, 4.60 (3.19 to 6.43) in ORLS2 and 3.71 (3.43 to 4.02) in the England dataset.

Conclusions: People admitted to hospital with immune-mediated diseases may be at an increased risk of subsequent VTE. Our findings need independent confirmation or refutation; but, if confirmed, there may be a role for thromboprophylaxis in some patients with these diseases.

Introduction

Venous thromboembolism (VTE), which comprises deep vein thrombosis (DVT) and pulmonary embolism (PE), is thought to account for an estimated 25,000 deaths annually in the UK [1], with a recent study suggesting that the figure might be as high as 60,000 [2]. VTE is a common complication during and after hospitalisation for acute medical illness or surgery [3]. PE accounts for 5 to 10% of deaths in hospitalised patients, making VTE the most common preventable cause of in-hospital death [3]. Risk factors for VTE include immobility, age and obesity [1]. Although VTE has traditionally been considered a surgical condition, the vast majority of hospitalised patients with symptomatic VTE have not undergone recent surgery [4]. Indeed, up to 80% of in-hospital fatal PEs occur in non-surgical patients [4].

Patients with inflammatory bowel disease [5], rheumatoid arthritis [6], type 1 diabetes [7] and systemic lupus erythematosus (SLE) [8] are at an increased risk of VTE, suggesting that there may be a more general association between immune-mediated diseases and VTE. To investigate this further, we undertook a record linkage study to determine the risk of VTE in individuals with selected immune-mediated diseases, using the long-standing Oxford Record Linkage Study (ORLS) [9] and the more recent English national linked Hospital Episode Statistics (HES) dataset.
Population and data

The Oxford Record Linkage Study (ORLS) [9] includes brief statistical abstracts of records of all hospital admissions (including day cases) in National Health Service (NHS) hospitals, and all deaths regardless of where they occurred, in defined populations within the former Oxford NHS Region. The original ORLS covered the years 1963 to 1998, and is referred to here as ORLS1. The hospital data were collected routinely in the NHS as the region's hospital statistics system and were similar to English national Hospital Episode Statistics (HES). A second dataset, referred to as ORLS2, has been linked and built as the Oxford regional subset of the English national HES, and runs from 1999 to 2008. The data items available for linkage changed between 1998 and 1999, and the two datasets are not themselves linked together. We have also used the complete dataset of the English national HES (1999 to 2008). Death data in each of the three datasets derive from death certificates. The population covered by ORLS1 gradually expanded over time from an initial population of part of Oxfordshire (an approximately 300,000 resident population) to all four counties of the former Oxford NHS region (resident population 2.5 million); ORLS2 covers the same four counties. The population of England is 50 million. The datasets used in this study (versions m6v2 for ORLS and v08a for ORLS2 and England) have been constructed by staff in the Oxford Unit of Healthcare Epidemiology.

The basic methods were the same for the analysis of each disease and are described for rheumatoid arthritis and VTE. A cohort of people with a record of admission or day case care for rheumatoid arthritis was constructed for those with a principal diagnosis of rheumatoid arthritis, as the reason for hospital care, by identifying the first admission, or episode of day case care, for the condition in an NHS hospital during the study period of 1963 to 1998 in ORLS1, 1999 to 2008 in ORLS2, and 1999 to 2008 for the whole of England. The International Classification of Diseases (ICD) codes used for each immune-mediated disease can be found in Table 1. A comparison ('reference') cohort was constructed in the same way, by identifying the first admission for each individual with various other, mainly minor, medical and surgical conditions (listed in the Table 2 footnotes). This is based on a 'reference' group of conditions that has been used in other studies of associations between diseases [10-12]. In its design, the standard epidemiological practice was followed, when hospital controls are used, of selecting a diverse range of conditions, rather than relying on a narrow range (in case the latter are themselves atypical in their risk of subsequent disease). As a check, we have studied the risk of VTE in the control conditions within the reference cohort to ensure that the reference cohort does not include control conditions that have atypically high or low VTE rates. People were included in the rheumatoid arthritis or reference cohort if they did not have an admission for VTE either before or at the same time as the admission for rheumatoid arthritis or the reference condition. We then searched the database for any subsequent NHS hospital care for, or death from, VTE in these cohorts (a schematic sketch of this cohort construction is given below). We considered that rates of VTE in the reference cohort would approximate those in the general population of the region while allowing for migration in and out of it (data on migration of individuals were not available).

Statistical methods

We calculated rates of VTE based on cohort analysis.
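The cohort-construction rule just described — first recorded admission for the index condition, excluding anyone with a VTE record before or at the same time — can be sketched schematically as follows. The column names are hypothetical stand-ins, not the actual ORLS/HES field names:

```python
import pandas as pd

# admissions: one row per hospital episode, with hypothetical columns
#   person_id, date, diagnosis ("RA" for rheumatoid arthritis, "VTE", ...)
def build_cohort(admissions: pd.DataFrame, condition: str) -> pd.Series:
    """Date of cohort entry (first admission for `condition`) per person,
    excluding people with a VTE record before or on that date."""
    first = (admissions[admissions["diagnosis"] == condition]
             .groupby("person_id")["date"].min())
    first_vte = (admissions[admissions["diagnosis"] == "VTE"]
                 .groupby("person_id")["date"].min())
    prior_vte = first_vte.reindex(first.index)   # NaT if no VTE record
    eligible = prior_vte.isna() | (prior_vte > first)
    return first[eligible]
```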
For each broad age group of people with rheumatoid arthritis, and for VTE, we took "date of entry" into each cohort as the date of first admission for rheumatoid arthritis, or the reference condition, and "date of exit" as the date of first record of VTE, death, or the end of the data file (31 December 1998 for ORLS1; 31 March 2008 for ORLS2 and English HES), whichever was the earliest. In the ORLS cohorts, in comparing the rheumatoid arthritis cohort with the reference cohort, we first calculated rates for VTE, stratified and then standardised by age (in five-year age groups), sex, calendar year of first recorded admission, and district of residence, to ensure that the results of group comparisons were equivalent in these respects. We used a similar approach to standardisation in the England dataset, stratifying by age (in five-year age groups), sex, calendar year of first recorded admission, region of residence, and quintile of patients' Index of Deprivation score (as a measure of socio-economic status). We used the indirect method of standardisation, using the combined rheumatoid arthritis and reference cohorts as the standard population. We applied the stratum-specific rates in the combined rheumatoid arthritis and reference cohorts to the number of people in each stratum in the rheumatoid arthritis cohort, separately, and then to those in the reference cohort. We calculated the ratio of the standardised rate of occurrence of VTE in the exposure cohorts relative to that in the reference cohort (this calculation is sketched below). The confidence interval for the rate ratio and χ² statistics for its significance were calculated as described elsewhere [13]. Calculated in this way, the rate ratio provides a measure of relative risk of VTE in the rheumatoid arthritis cohort, compared with the reference cohort. The fact that there is unmeasured migration in the populations covered by the study, and the use of an internal reference cohort for comparison with the VTE rates in the rheumatoid arthritis cohort (and others), preclude meaningful calculation of absolute risks.

We analysed the occurrence of an admission for VTE within 90 days of the admission for each immune-related disease, and at 91 days and more, to help establish whether any elevated risk of VTE was confined to the short term after the episode of hospitalisation or was more prolonged. In the analysis of diabetes mellitus, we used hospital admission for diabetes mellitus when aged under 30 as a proxy for type 1 diabetes, as the type of diabetes is not well recorded in routine hospital statistics. We analysed the data for males and females separately, as well as together, to ascertain whether or not there were differences between them in risk of VTE.

Results

Table 1 shows the number of people in the study who were admitted to hospital with each of the selected immune-mediated diseases; it also shows the percentage of these who were female. The numbers of people in the three corresponding comparison cohorts were: 313,716 (47% female) for ORLS1, 187,609 (46% female) for ORLS2, and 3,707,315 (41% female) for England.

There were elevated risks of VTE after hospital admission for the following individual immune-mediated diseases, in all three of the populations studied: autoimmune haemolytic anaemia, chronic active hepatitis, dermatomyositis/polymyositis, type 1 diabetes mellitus, multiple sclerosis (MS), myasthenia gravis, myxoedema, pemphigus/pemphigoid, polyarteritis nodosa, psoriasis, rheumatoid arthritis, Sjogren's syndrome, and SLE (Table 2 and Appendix).
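The indirect standardisation described under 'Statistical methods' amounts to applying stratum-specific rates from the combined cohorts to each cohort separately and comparing observed with expected events. A minimal sketch, with hypothetical column names and without the confidence-interval machinery of [13]:

```python
import pandas as pd

# df: one row per person, with hypothetical columns:
#   stratum - combined age band / sex / calendar year / district stratum
#   cohort  - "exposure" (e.g. rheumatoid arthritis) or "reference"
#   vte     - 1 if a subsequent VTE admission or death was recorded, else 0
#   pyears  - person-years from date of entry to date of exit
def rate_ratio(df: pd.DataFrame) -> float:
    # Stratum-specific VTE rates in the combined cohorts (the "standard").
    agg = df.groupby("stratum").agg(events=("vte", "sum"),
                                    pyears=("pyears", "sum"))
    std_rate = agg["events"] / agg["pyears"]

    def standardised_ratio(name: str) -> float:
        g = df[df["cohort"] == name]
        observed = g["vte"].sum()
        expected = (g.groupby("stratum")["pyears"].sum() * std_rate).sum()
        return observed / expected

    # Relative risk: standardised ratio in the exposure vs reference cohort.
    return standardised_ratio("exposure") / standardised_ratio("reference")
```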
In the much larger population of England, we also found elevated risks of VTE in people admitted with Addison's disease, ankylosing spondylitis, coeliac disease, Goodpasture's syndrome, Hashimoto's thyroiditis, idiopathic thrombocytopenic purpura, pernicious anaemia, primary biliary cirrhosis, scleroderma and thyrotoxicosis (Table 2). High levels of risk, substantially higher than the risks associated with some of the other diseases, were found for SLE and polyarteritis nodosa. The rate ratios for SLE were 3.61 (2.36 to 5.31) in the ORLS1 population, 4.60 (3.19 to 6.43) in ORLS2 and 3.71 (3.43 to 4.02) in the English national dataset. Those for polyarteritis nodosa were, respectively, 2.88 (1.70 to 4.55), 4.36 (not quite significant, with CIs of 0.90 to 12.8) and 3.53 (2.76 to 4.44). Consistently high levels of risk were also found for chronic active hepatitis, dermatomyositis and polymyositis, diabetes mellitus, MS and Sjogren's syndrome.

Males and females

On the whole, the rate ratios for males and females were similar, although some of the numbers in ORLS1 and ORLS2 became rather small when subdivided by sex. In the England dataset, there was a significant elevation of VTE risk in males with coeliac disease (RR 1.47; 1.26 to 1.70), but not in females (1.09; 0.97 to 1.23). Similarly, the VTE rate ratio for males with Hashimoto's thyroiditis was significantly elevated (2.98; 1.86 to 4.51), but the rate ratio in females was not (1.23; 0.97 to 1.54). There were no other significant differences between males and females.

Short-term (0 to 90 days) and long-term (91+ days) associations with VTE

We studied the occurrence of VTE in time intervals after admission. However, subdividing the results into 0 to 90 days and 91+ days since the immune-mediated disease admission did reduce power. Only MS, myxoedema, psoriasis, rheumatoid arthritis and SLE had sufficient numbers for meaningful analysis. Table 3 shows that, in general, associations with increased VTE risk were found for these diseases at both short and long intervals.

Absolute risk

Although we could not calculate meaningful absolute risks for the ORLS1 and ORLS2 cohorts, because of migration into and out of the Oxford region, approximate absolute risks can be calculated in the English national data. For example, there were 81,950 people in the MS cohort, of whom 1,509 had an admission for VTE (1.8%) during the 10-year period covered by the study. As an approximation, in people with MS there was one case of VTE per 264 person-years at risk. Similar calculations show that 2.7% of people with SLE had an admission for VTE in the period covered by the study, and that the event rate was one case of VTE per 169 person-years at risk. These calculations assume that migration was small and that its effect was unimportant (the arithmetic is reproduced in the short check below).

Discussion

The ORLS1 (1963 to 1998) and ORLS2 (1999 to 2008) datasets cover broadly the same population but at different times. The English linked dataset is completely independent of ORLS1, and it covers a far larger population in the same timeframe as ORLS2. The results of the analyses in the three datasets corroborate each other.

Previous studies have found elevated risks of VTE in people with rheumatoid arthritis [6], type 1 diabetes mellitus [7], SLE [8] and inflammatory bowel disease [5]. We have not reported on inflammatory bowel diseases here, as they are included in a different epidemiological study that we hope to publish separately.
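The person-time arithmetic in the 'Absolute risk' paragraph above can be reproduced directly; the total person-years figure is back-calculated from the reported rate, so it is an inferred quantity rather than one taken from the dataset:

```python
# Reported figures for the English MS cohort (from the text):
cohort_size = 81_950      # people with an MS admission
vte_events = 1_509        # subsequent VTE admissions
print(f"crude proportion with VTE: {vte_events / cohort_size:.1%}")  # ~1.8%

# "One case per 264 person-years" implies roughly this much follow-up:
person_years = vte_events * 264          # back-calculated, ~398,000
mean_followup = person_years / cohort_size
print(f"implied mean follow-up: {mean_followup:.1f} years")
# ~4.9 years, consistent with a 10-year window with staggered entry,
# deaths and censoring at the end of the data file.
```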
These previous findings, combined with our own, suggest that there may be a general association between immune-mediated diseases and the risk of subsequent VTE. This risk is not solely associated with the short term after hospital admission: for the diseases with large enough numbers to study, it was sustained over time. The increased risks of VTE may have different underlying causes in each disease. The elevated risks may be a reflection of patients with more extreme cases of the immune-mediated diseases, in that the populations in our study are those admitted to hospital. Immobility [14], effects of treatment (corticosteroids promote haemostasis) [15] or a true effect of inflammation on coagulation [16] could all be implicated in the associations shown. In 1856, Virchow proposed three precipitants for venous thrombosis: venous stasis, increased coagulability of the blood, and damage to the vessel wall [17]. Inflammation is a key determinant of endothelial function in both arteries and veins, and results in changes in the expression of selectins and cellular adhesion molecules [16]. Studies have shown that patients with VTE were more likely to have elevated plasma IL-8, IL-6, MCP-1 and TNF-α levels [18], that inflammation influences clotting factor levels [19], that an inflammatory gene is associated with VTE [20] and that acute inflammation contributes to VTE [21]. Taken together, these findings suggest that inflammation is likely to contribute, to some extent, to the initiation of venous thrombus formation.

Methodological issues: datasets, population, and multiple comparisons

Strengths of the datasets include their size, with large numbers of fairly uncommon diseases. The ORLS1 data provide a long duration of follow-up; the English data provide a much larger and more recent population, but with shorter follow-up. The risk of VTE was therefore studied for a large number of immune-mediated diseases, all within a single population and using the same methodology. Accordingly, levels of risk associated with different immune-mediated diseases can be directly compared within the same study populations. Our study should be regarded as exploratory rather than definitive. We studied a wide range of immune-mediated diseases with differing aetiologies. We did so because, as an exploratory study, with very large linked datasets in which many diseases can be studied, we saw no reason to be restrictive in our selection of diseases. The datasets have limitations. The cohorts are based on prevalent cases, the first recorded hospital admission or episode of day case care for each person with each condition, rather than being cohorts with follow-up from the date of first diagnosis. Data are not recorded on patients who move out of the area covered by data collection or who are treated in hospitals outside the area (mainly affecting ORLS). The datasets are limited to people who were admitted to hospital, or who received day case specialist care. This would not capture all people with each immune-mediated disease, although it should identify the great majority with subsequent diagnosed VTE. These factors are part of our reasoning for including a comparison cohort of patients admitted to hospital, or in receipt of day case care, from the same database, and for 'matching', through stratified analysis, for area of treatment and for year of first recorded diagnosis as well as for age and sex. The two Oxford datasets are not linked to each other due to changes in the data items available for linkage between 1998 and 1999.
Consequently, it is likely that some people have been recorded as having a 'first admission' for each immune-mediated disease in each of the time periods studied. We lack clinical and laboratory data. We lack treatment data for the immune-mediated diseases, and elements of their treatment could themselves influence VTE risk. There is very limited information on potential confounding factors such as socioeconomic status, and none on smoking or ethnicity. As we comment above, our results should be regarded as speculative rather than definitive: they represent results from what can be done using very large-scale, routinely collected administrative data. They need further work, in different study designs, to confirm or refute the findings, although epidemiological studies involving direct patient contact may be quite formidable logistical undertakings on the scale required. Our rate ratio for VTE in people admitted to hospital with rheumatoid arthritis in the English dataset was 1.75, which is comparable with a previous study that reported a relative risk of 1.99 [6]. Our rate ratio for VTE in people aged under 30 with diabetes was 2.58 in the English dataset, which compares with a reported figure of 1.73 in hospitalised patients with diabetes aged 20 to 29 in the USA [7]. Although the literature is sparse on VTE in people with immune-mediated diseases, our findings seem broadly comparable with the findings of others. We used age at admission under 30 as a proxy for type 1 diabetes mellitus, as the type of diabetes is not routinely recorded on hospital admission records. Although this will mostly consist of people with type 1 diabetes, there may be a few people with type 2 diabetes in the cohort as well. We studied a large number of associations between diseases. The effect of making multiple comparisons needs to be considered. For this reason, we have given exact P-values, as well as confidence intervals, so that the reader can judge the degree of significance of each association between an immune-mediated disease and subsequent VTE. It is possible that some of the associations that are significant at a level of P < 0.05 or P < 0.01 may result from making multiple comparisons and the play of chance. This may particularly be so where there is no prior hypothesis to support the finding. On the other hand, even in a study with the number of comparisons that we have made, findings where the significance level is P < 0.001 are unlikely to be attributable to chance alone. There were differences in levels of risk between the diseases studied: for example, the rate ratios for SLE were significantly and substantially higher than those for myxoedema. Even if the fairly low levels of elevation of risk, such as those associated with myxoedema and thyrotoxicosis, are considered unimportant, the high levels of risk associated with SLE, polyarteritis nodosa and diabetes mellitus are striking. The large number of findings with highly significant results in the England cohort needs comment. These no doubt in part reflect the very large number of patients in the cohort, such that many differences are significant, even if fairly small (for example, coeliac disease, thyroiditis, myxoedema, thyrotoxicosis), as a result of high statistical power.

Conclusion

This is an exploratory study into the risk of VTE in people admitted to hospital with a range of immune-mediated diseases. Further studies are needed, of individual immune-mediated diseases, in greater depth, to confirm or refute our findings.
Given that a large proportion of hospitalised patients are at risk for VTE, and that there is, generally, a low rate of appropriate prophylaxis, our data suggest that patients with selected immune-mediated disease may need to be considered for thromboprophylaxis. This may be particularly warranted in people with diseases at relatively high risk of VTE, such as SLE and polyarteritis nodosa. However, the risk of VTE for any immune-mediated disease we studied is at least an order of magnitude lower than the risk of VTE after surgery [22]. Further investigation is required, prospectively, to determine the thromboprophylaxis status of patients with these immune-mediated diseases who experience VTE. If a substantial proportion of such patients are found not to have been receiving thromboprophylaxis, current recommendations on thromboprophylaxis may need to be re-examined, as the disorders we studied here were not included as risk factors in the recent NICE guidelines on reducing the risk of VTE [23]. Prospective studies are needed to determine the predictive value of inflammatory markers for VTE, but this may also aid in the identification of individuals who strongly warrant preventative therapy.
2016-05-16T03:37:34.815Z
2011-01-10T00:00:00.000
{ "year": 2011, "sha1": "355a1afe555a3a5cd2bce6dac46285e4f4c92cb3", "oa_license": "CCBY", "oa_url": "https://bmcmedicine.biomedcentral.com/track/pdf/10.1186/1741-7015-9-1", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "355a1afe555a3a5cd2bce6dac46285e4f4c92cb3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
270523191
pes2o/s2orc
v3-fos-license
Magnetic resonance imaging connectivity features associated with response to transcranial magnetic stimulation in major depressive disorder

Transcranial magnetic stimulation (TMS) is an FDA-approved neuromodulation treatment for major depressive disorder (MDD), thought to work by altering dysfunctional brain connectivity pathways, or by indirectly modulating the activity of subcortical brain regions. Clinical response to TMS remains highly variable, highlighting the need for baseline predictors of response and for understanding brain changes associated with response. This systematic review examined brain connectivity features associated with clinical improvement following a course of TMS in people with MDD.

Introduction

Major depressive disorder (MDD) affects 13 % of people during their lifetime (Kessler et al., 2015). It is a leading cause of disability worldwide (World Health Organisation, 2017). Medications or psychotherapy are effective for many, but at least a third of people do not respond to first-line treatments (National Health Service Digital, 2020; Rush et al., 2006). Minimally invasive neuromodulation therapies represent another treatment option for MDD (Conroy and Holtzheimer, 2021). An increasingly used neuromodulation therapy is repetitive transcranial magnetic stimulation (rTMS), which delivers magnetic pulses to temporarily modulate the excitability of a target cortical area, leading to longer-term neuroplastic changes manifested in altered brain connectivity and changes in the activity of connected, deeper brain areas (Duprat et al., 2022; Klomjai et al., 2015; Oathes et al., 2021; Tik et al., 2023; To et al., 2018). rTMS is performed whilst a patient is awake and alert, and is usually well tolerated, with transient side effects including discomfort or pain at the stimulation site, headache, and light-headedness (Li et al., 2021). Treatment usually consists of daily stimulation sessions across 4-6 weeks, although "accelerated" approaches delivering up to ten sessions per day across a single week are gaining in popularity (Cole et al., 2020).

There are two types of rTMS in common clinical use: standard rTMS, involving trains of pulses at a specific frequency, and "theta burst stimulation" (TBS), involving bursts of pulse triplets repeating at a slower frequency (5 Hz). TBS can be intermittent ("iTBS", usually two seconds of bursts followed by eight seconds of rest) or continuous ("cTBS"). iTBS and cTBS exert opposite effects on immediate electrophysiological measures of cortical excitability (Huang et al., 2005). Whilst iTBS sessions can be much shorter than standard rTMS sessions, they exhibit similar efficacy (Blumberger et al., 2018). The optimal stimulation target, stimulation pattern, and intensity are not known. Most studies aim to stimulate the left dorsolateral prefrontal cortex (DLPFC), a functionally defined area often approximated by the centre of the middle frontal gyrus (Dosenbach et al., 2008). Targeting may use scalp-based co-ordinates (for example, the "F3" location of the international 10-20 system (Herwig et al., 2003), or approximations of this), anatomical magnetic resonance imaging (MRI), or potentially functional magnetic resonance imaging (fMRI) (Fitzgerald, 2021; Trapp et al., 2020).
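To make the iTBS pattern described above concrete, the session arithmetic works out as follows. The 600-pulse session total below is the conventional figure used in trials such as Blumberger et al. (2018) and is an assumption here, not a number taken from the text:

```python
# iTBS timing sketch: triplet bursts at 5 Hz, two seconds on, eight off.
pulses_per_burst = 3
bursts_per_second = 5
train_s, rest_s = 2, 8

pulses_per_cycle = pulses_per_burst * bursts_per_second * train_s  # 30 pulses
target_pulses = 600            # conventional session dose (assumed)
cycles = target_pulses // pulses_per_cycle                         # 20 cycles
duration_s = cycles * (train_s + rest_s) - rest_s  # no rest after last train
print(f"{cycles} cycles, about {duration_s} s ({duration_s / 60:.1f} min)")
# Roughly three minutes: this is why iTBS sessions can be so much shorter
# than standard rTMS sessions while delivering a comparable pulse count.
```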
Identifying brain connectivity relationships that are modified by effective TMS (i.e., connectivity relationships that change in association with treatment outcomes) could help suggest new treatment targets. Moreover, some people improve rapidly with TMS, whilst most improve gradually over a period of weeks and then reach a plateau, and some do not respond at all (Kaster et al., 2019). Baseline predictors of response to a given type of TMS could minimise the experience of multiple failed treatment trials, which can contribute to hopelessness (Papakostas et al., 2003), and minimise the period of untreated depression, which is associated with poorer clinical outcomes and greater disability (Ghio et al., 2015).

We conducted a systematic review of studies that have examined baseline connectivity features, or changes in such features, associated with improvement of depressive symptoms following a course of TMS in people with MDD. We chose to focus on brain connectivity measured with magnetic resonance imaging (MRI) techniques, as: (1) MRI is widely clinically available; (2) MRI techniques have been the focus of most of the research literature; and (3) MRI techniques are able to give spatially precise measurements of connectivity changes between specific brain areas.

MRI can measure "structural" (white matter) connectivity (SC) between brain regions. A recent meta-analysis found that MDD was associated with widespread lower SC, particularly evident in the corona radiata, and in the genu and body of the corpus callosum (Schmaal et al., 2020). MRI can also be used to quantify "functional connectivity" (FC) between brain regions. FC is defined by the correlation of the activity time courses of two regions (greater correlation implying greater communication or co-ordination between regions). Intrinsic FC can be assessed during the task-free (resting) state, or during periods of activity induced by tasks. This popular technique has yielded descriptions of networks of brain areas with separable functions, such as the executive control network ("ECN"), involved in working memory and decision making (the DLPFC is mostly considered part of this network, though it contains portions of other networks) (Dosenbach et al., 2008); the salience network ("SN"), involved in assigning importance to internal and external stimuli (Seeley et al., 2007); and the "default mode network" (DMN), involved in rumination and other internally directed mental activity (Raichle, 2015). MDD is associated with increased FC between the DMN and both the ECN and SN, as well as reduced FC between the SN or ECN and a limbic network (Brandl et al., 2022; Kaiser et al., 2015). It is conceivable that some of these relationships could represent optimal treatment targets for TMS. An increasingly popular "accelerated" protocol, the "SAINT" protocol, which delivers fifty ten-minute stimulation sessions within a week, intends to modulate a core component of the limbic network, the subgenual anterior cingulate cortex (sgACC), indirectly via its connectivity with the DLPFC (Cole et al., 2020) (it has not yet been compared to an equivalent protocol without connectivity targeting). Alternatively, our "BRIGhTMIND" trial, comparing standard rTMS to connectivity-guided iTBS, based its DLPFC target on connectivity with the anterior insula (mostly part of the SN) (Morriss et al., 2024).
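Since functional connectivity — defined above as the correlation between two regions' activity time courses — is the review's central measure, a minimal sketch of the basic computation on synthetic data follows. Real pipelines additionally apply motion and physiological confound regression and temporal filtering, and often Fisher z-transform the correlations before group statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
n_volumes = 200                        # fMRI time points (hypothetical)

# Mean BOLD time courses for two regions of interest, e.g. DLPFC and sgACC.
roi_a = rng.standard_normal(n_volumes)
roi_b = 0.5 * roi_a + rng.standard_normal(n_volumes)   # shared signal

fc = np.corrcoef(roi_a, roi_b)[0, 1]   # Pearson functional connectivity
fc_z = np.arctanh(fc)                  # Fisher z, common before group stats
print(f"FC r = {fc:.2f}, Fisher z = {fc_z:.2f}")
```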
To the authors' knowledge, there has been one review to date that has explored brain connectivity changes in MDD following TMS (Schiena et al., 2021). This narrative synthesis of thirteen studies indicated FC changes after TMS amongst regions of the sgACC, DMN, SN and ECN, and increases in SC within the frontal lobe. That review only searched one database, included only left DLPFC high-frequency rTMS protocols, and predominantly focused on changes in connectivity, rather than baseline predictors of improvement. Our work will extend this, therefore, by examining: 1. Baseline connectivity features associated with clinical improvement following a course of TMS (to any brain area, with any stimulation pattern); 2. Changes in brain connectivity features that are associated with improvement.

Study identification, inclusion, and exclusion criteria

This systematic review and meta-analysis was completed according to the 2020 Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (Page et al., 2021). The study was registered in the "International Prospective Register of Systematic Reviews" (PROSPERO) in July 2022 (CRD42022346262).

A comprehensive database search was completed using Embase, MEDLINE and PsycINFO, from inception up to 26th May 2022. The search was repeated on 11th April 2023 to identify additional studies published in the intervening period. The search strategy included the following: (functional connectivity/ OR functional connect* OR effective connect* OR fmri connect* OR connectom* OR structural connect* OR dynamic connect*) AND (depress* OR depression/) AND (transcranial magnetic stimulation/ OR repetitive transcranial magnetic stimulation/ OR theta burst stimulation.mp OR theta burst*). Reference lists from relevant studies and reviews were also examined to add further studies meeting the eligibility criteria. We included peer-reviewed reports of original research and excluded reviews, meta-analyses, conference proceedings, unpublished theses, case series and non-peer-reviewed articles. Bibliographic data were managed in Zotero software.

Studies were included if they met all the following criteria: (1) Major Depressive Disorder (MDD) without comorbid physical illness diagnosis. As per our pre-specified protocol, we included studies that contained patients with a depressive episode in the context of bipolar disorder, if patients with bipolar disorder made up no more than 20 % of the study's overall sample size; (2) A minimum of 10 sessions of any TMS protocol, delivered with the intent of improving clinical symptoms in a current MDD episode; (3) Structural or functional connectivity measured using an MRI methodology, prior to the first TMS session and optionally following the completion of TMS treatments. No language restrictions were applied.

Study screening and data extraction

Step 1) Each title and abstract was assessed for inclusion by two reviewers (PB, LW, HO, and CB), with initial agreement between reviewers at 93 %. If in doubt, or if consensus was not reached at this stage, abstracts were included for full-article review.

Step 2) Full-text articles were split between the four reviewers and assessed for inclusion, independently checked by a second reviewer. Agreement at this stage was 88 %, with disagreements resolved by discussion between the reviewers.
Step 3) All four reviewers were involved in the data extraction process, with one reviewer extracting data from each of the studies. 20 % of studies also had data extracted by a second reviewer to assess for agreement, with no significant discrepancies noted. Studies from the re-run search on 11th April 2023 were assessed for inclusion by PB and LW.

Details for each included study were entered into a data extraction sheet consisting of sample characteristics, study design, TMS protocol, location of TMS target, neuroimaging details, measures of clinical response, relationships between baseline connectivity or change in connectivity and clinical response, limitations of studies, other potentially relevant studies found in the reference list, funding source and potential conflicts of interest. Related, previously published, articles were referred to where necessary to obtain this information. We sought to identify sample overlap between studies and indicate such overlap in the Tables.

Assessment of study quality

Quality assessments were completed by PB or LW. Imaging quality for all studies was assessed with a modified version of the 13-item tool used by Xu et al. (2021). Xu et al.'s item 8 ("Parcellation template clearly reported, reproducible") was modified to: "Parcellation template or regions of interest clearly reported, reproducible". Xu et al.'s item 9 ("Calculation of edge weights are clearly reported and are reproducible") was modified to: "Method for calculating edge weights, functional connectivity, or structural connectivity clearly reported and reproducible". General study quality was assessed by the NIH National Heart, Lung, and Blood Institute (NHLBI) quality assessment for controlled intervention studies for those following a randomised controlled trial (RCT) protocol, with all other studies assessed by the NIH NHLBI quality assessment tool for Before-After (Pre-Post) studies with no control group (https://www.nhlbi.nih.gov/health-topics/study-quality-assessment-tools). Where a criterion was not relevant to a specific study, it was scored as "low risk". Where a criterion could not be determined from available information, it was scored as "high risk". Criteria that were only partially met were scored as "high risk". Initially, a study was considered "good" quality if at least 80 % of criteria were met, and "fair" quality if at least 50 % of criteria were marked as low risk. However, studies with a sample size insufficient to detect a moderate effect with 80 % power (for example, minimum N = 82 for correlation analyses, determined using G*Power 3.1.9.7) could at most be considered as "fair" quality in the general study/intervention quality assessment.

Strategy for data analysis and synthesis

For reporting in the Tables, we extracted baseline connectivity features, or changes in connectivity features, that showed significant correlations with improvement on a measure of depressive symptoms, or were significantly different between groups of responders/non-responders, remitters/non-remitters, or improvers/non-improvers. We extracted information on network assignment of regions involved in each connectivity feature where this was given. Where standard MNI co-ordinates were given, we assigned regions to networks using the nearest node in the widely used Power et al. (2011) atlas, as sketched below. To guide our synthesis of findings, we then considered the extracted relationships in terms of relationships between these networks (sgACC, amygdala, hippocampus, and the striatum remained as separate regions).
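The following Python sketch illustrates the nearest-node network assignment described above: each reported MNI co-ordinate receives the network label of the closest node in a co-ordinate/label table. The three nodes and labels here are made-up placeholders; a real analysis would load all 264 Power et al. (2011) node co-ordinates and their published network labels.

```python
import numpy as np

# Hypothetical excerpt of an atlas table: node MNI co-ordinates plus labels.
node_xyz = np.array([
    [-42.0, 38.0, 22.0],   # placeholder node 1
    [6.0, 34.0, -8.0],     # placeholder node 2
    [-2.0, -52.0, 26.0],   # placeholder node 3
])
node_labels = ["ECN", "sgACC", "DMN"]

def assign_network(mni_xyz):
    """Return the network label of the atlas node nearest (Euclidean) to mni_xyz."""
    dists = np.linalg.norm(node_xyz - np.asarray(mni_xyz), axis=1)
    return node_labels[int(np.argmin(dists))]

print(assign_network([-40.0, 40.0, 20.0]))  # -> "ECN" for this placeholder table
```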
For baseline connectivity features, we next considered connectivity relationships between networks (or sgACC etc.) reported in at least two studies (i.e., relationships that have been studied at least twice), regardless of the effect direction in those studies. From this we produced Fig. 2, which was used to guide our narrative synthesis in the Results. A similar approach for change in connectivity features was trialled, but there were few relationships that appeared in at least two studies. Relationships regarding change in connectivity features are discussed alongside baseline relationships involving the same networks. We also include discussion of key structural connectivity studies, and of all studies that used a target within the DMPFC, so that these important, under-researched, avenues are given attention.

As per our study protocol, we did attempt Co-ordinate Based Random Effect Size (CBRES) meta-analyses of the baseline FC findings of studies that targeted the left DLPFC, using the ClusterZ algorithm implemented in the NeuRoi software (Tench et al., 2017). For each seed region/network (i.e., sgACC, ECN, DMN, SN), a list of studies that found a significant relationship involving that region/network was given, alongside co-ordinates of the connected regions and the effect directions and sizes. Due to heterogeneity in reported relationships, these analyses did not identify significant clusters.

Overview of included studies

Following the literature search and review of abstracts, 117 full-text articles were assessed for eligibility. Forty-one studies met inclusion criteria (Fig. 1, exclusion reasons in Table S4). These included a total of 1097 patients with a current major depressive episode, excluding likely sample overlap. Fifteen of the patients (1 %) had a history of bipolar disorder. Most studies quantified clinical improvement using the Hamilton Depression Rating Scale (HDRS-17, N = 27), followed by the Montgomery-Åsberg Depression Rating Scale (MADRS, N = 8). Twenty-eight studies delivered 10 Hz rTMS and fifteen delivered iTBS (these numbers include five studies that delivered either 10 Hz rTMS or iTBS, and one study that delivered iTBS and cTBS); two studies delivered 20 Hz rTMS, and one delivered 5 Hz rTMS. In thirty-seven studies, TMS targeted the left DLPFC. The remaining four studies targeted bilateral DMPFC. Of the thirty-seven DLPFC studies, 33 reported functional connectivity ("FC") measured using resting-state fMRI (rsfMRI) or, in one case, arterial spin labelling (Table 1); the remainder reported white-matter structural connectivity (SC) derived from diffusion tensor imaging MRI (N = 4, Table 2). The four DMPFC studies all reported FC measured with rsfMRI (Table 3). Due to the paucity of studies examining SC, the focus of the remainder of this report is on FC.

Quality of included studies

Using the imaging quality assessment tool, adapted from Xu et al.
(2021), 33 studies were graded as good quality (at least 80 % of criteria met) and 8 as fair quality (at least 50 % of criteria met). Twenty-three studies did not meet the resolution and motion criterion (at least 3.5 mm³ voxels / 12 DTI directions and detailed motion thresholds), fifteen studies did not explore the impact of any potential confounding variables, and nine studies did not correct for multiple comparisons where required. Using the intervention quality assessment tools (adapted from the NHLBI tools), for the nineteen studies that used an RCT approach, all were graded as fair quality (five were downgraded from "good" due to insufficient sample size to detect a medium effect). None reported a pre-specified analysis protocol. For the twenty-three studies that did not use an RCT approach, one was regarded as good quality; the remainder were regarded as fair quality (eight were downgraded from good due to insufficient sample size). One study incorporated multiple follow-up time points; one had an adequate sample size. Further details are given in the Supplement. It should be noted that the choice of using the RCT or non-RCT quality assessment tool was based on the nature of the underlying dataset used by a given study. Many studies did not analyse these data in such a way as to distinguish sham versus active effects. This is indicated where the study is discussed below.

Functional connectivity studies targeting left DLPFC

Fig. 2 illustrates relationships between baseline functional connectivity and clinical improvement. It includes connectivities, within or between networks (or sgACC), that were reported in at least two studies. These connectivities are the focus of this section. An equivalent figure was not possible for relationships between change in connectivity and clinical improvement, due to a lack of studies reporting the same connectivity relationships. Findings for that analysis are included below where relevant.

3.3.1. Connectivity between the sgACC and ECN

3.3.1.1. Baseline predictors of symptom improvement. Consistent results were reported in four studies (one of which included comparison with sham stimulation) that used 20-30 sessions of once-daily 10 Hz rTMS in people with MDD, the majority of whom were taking antidepressants. In an open-label design, Ge et al. (2020) (1.17) found that lower baseline FC between sgACC and right DLPFC (assigned to ECN) was associated with greater improvement on the HDRS-17 (N = 50, moderate severity depression on average, as per thresholds of Zimmerman et al. (2013), 1-4 previous antidepressant trials). The opposite pattern was found for rostral ACC. These relationships were present immediately, as well as three months after the end of treatment. Likewise, Rosen et al. (2021) (1.23) found that the mean target location for TMS responders (>50 % improvement) on the HDRS-17 was in left ECN and showed negative baseline FC with sgACC (N = 23, severe depression, 2+ antidepressant trials). They delivered stimulation 6 cm anterior to the motor cortex hand area and obtained MRI with a marker at the target point, which was then projected onto the cortical surface to determine the targeted brain co-ordinates. The mean target for non-responders was in DMN and did not show significant sgACC FC (they used a sham-controlled design, finding that the target location was significantly different between responders and non-responders to active stimulation, but did not differ between "responders" and "non-responders" to sham). Cash et al. (2019a) (1.10) found that greater negative FC between sgACC and the left DLPFC target was associated with improvement on the MADRS (N = 47, moderate-severe depression, 2+ antidepressant trials). Their average target location was within the ECN. Cash et al. (2021) (1.20) also showed that closer proximity of the stimulation target to an "ideal" target, defined as the point that showed greatest baseline anticorrelation with the sgACC, was predictive of response, a finding also shown by Kong et al. (2022) (1.27) using a similar approach (N = 18, moderate), and by Stöhrmann et al.
(2023) (1.32), using a bilateral stimulation protocol with iTBS to left DLPFC and cTBS to right DLPFC (N = 15 active treatment, moderate depression, 2+ antidepressant trials). Stöhrmann et al. used a sham-controlled design but did not compare baseline predictors between active and sham (N = 5) groups.

In contrast, in the second-largest included study, Hopman et al. (2021) (1.22) did not find significant baseline FC differences between short-term (immediately after twenty sessions) responders and non-responders (on the MADRS) to once-daily 10 Hz rTMS, but did find differences when examining response two months post-treatment (N = 63, moderate depression, 1-4 previous antidepressant trials). Longer-term responders showed greater baseline FC between sgACC and the left DLPFC target, as well as between sgACC and frontal/parietal ECN. Hopman et al. speculate on differences in participant ethnicity as the cause of their discrepant findings (a Hong Kong Chinese sample in their study versus primarily Caucasian samples). A recent study by Elbau et al. (2023) (1.31), the largest included in this review (N = 295, moderate-severe, 80 % taking antidepressants, 1+ antidepressant trial), used data from the THREE-D clinical trial (conducted in Canada) to address sources of variability in relationships between baseline target-sgACC FC and clinical improvement. This confirmed an association between clinical improvement (measured with the Quick Inventory of Depressive Symptomatology, with consistent results when the HDRS-17 was used) and more negative baseline FC between the sgACC and left DLPFC target (mean location in ECN). However, the effect was weaker than in the prior, smaller-sample studies. They showed that substantial between-study variability in the size (and direction) of relationships between baseline FC and clinical improvement would be expected if small sample sizes were used. In their full sample, they found a significant relationship only when the sgACC seed was individualised for each patient based on anatomically informed modelling of the distribution of current from the TMS coil, and only when the overall, "global", brain signal was regressed out of the data (a commonly used approach for studying negative FC relationships, but an approach that creates challenges for interpretation (Murphy and Fox, 2017)). They further showed that the relationship was strongest in those patients with the greatest fluctuation in this global brain signal, and specifically in those with signal fluctuations consistent with a "burst" breathing pattern (Lynch et al., 2020). They speculate that this finding reflects either: that there is a sub-group of patients with a tendency to burst breathing patterns and for whom baseline sgACC FC strongly determines outcomes with rTMS; or that burst breathing occurs at the time of certain time-lagged, high-amplitude, fMRI signal events that may make negative FC relationships more apparent.

3.3.2. Connectivity between the sgACC and DMN

3.3.2.1. Baseline predictors of symptom improvement. In the only study using arterial spin labelling (ASL) data, Wu et al.
(2022) (1.29) derived a measure of baseline connectivity (covariance of cerebral blood flow) between sgACC and left DLPFC targets, which were primarily in the DMN. They found this connectivity to be predictive of HDRS-17 improvement (N = 41, moderate, no current psychotropics apart from regular benzodiazepines, 1+ antidepressant trial). They used a sham-controlled crossover design, and distinguished responders to active versus sham stimulation in the analysis. Their methodology did not allow them to distinguish positive from negative connectivity.

Returning to fMRI data, Liston et al. (2014) (1.2), with an open-label design, found that more positive baseline FC between sgACC and DMN regions was associated with greater improvement following 25 sessions of once-daily 10 Hz rTMS (N = 17, three with depression in the context of bipolar disorder, severe, 2/3 taking antidepressants/mood stabilisers, 2+ antidepressant trials). Baeken et al. (2014) (1.1) reported conflicting results using 20 Hz rTMS in an accelerated protocol (four days of five sessions/day). They found more negative baseline FC between sgACC and an anterior DMN (aDMN) or an SN region of superior frontal gyrus in responders than non-responders, and responders showed greater increase in sgACC-aDMN FC post-stimulation (N = 20, severe, taking only regular benzodiazepines, 3+ antidepressant trials). Baeken et al. used a sham-controlled crossover design, but their FC analyses collapsed across both active and sham time points. The opposite direction of their effect may be due to differences in rTMS frequency, medication status (most taking antidepressants in the former studies versus none in Baeken et al.), or number of sessions per day (once daily versus an accelerated protocol).

3.3.2.2. Associations between change in connectivity and symptom improvement. Philip et al. (2018) (1.8), in an open-label design, used an average of 33 once-daily sessions of 5 Hz rTMS and examined people with MDD and co-morbid post-traumatic stress disorder (PTSD). They found that reductions in sgACC FC with the precuneus/posterior cingulate (posterior DMN), dACC (SN) and a sensorimotor network were associated with improvement on the Inventory of Depressive Symptomatology (N = 33, severe, 2/3 taking antidepressants, 1+ prior antidepressant trial).

3.3.3. Connectivity between the DMN and ECN

3.3.3.1. Baseline predictors of symptom improvement. Four studies used once-daily 10 Hz rTMS in people with MDD, the majority of whom were taking antidepressants. Two open-label studies examining patients with moderate-severity MDD on average reported consistent results. Ge et al. (2020) (1.17, N = 50, moderate severity, 1-4 antidepressant trials), reported above, found that greater baseline FC between rostral ACC (DMN) and inferior parietal lobule (ECN) was associated with greater improvement on the HDRS-17 following 20-30 sessions. Hopman et al. (2021) (1.22, N = 63, moderate, 1-4 antidepressant trials), reported above, found that MADRS responders at two months following twenty sessions showed more positive baseline FC between the left DLPFC target (mean co-ordinates in ECN) and a posterior DMN region.

One study examining patients with, on average, severe MDD reported opposing results. Rosen et al.
(2021) (1.23, N = 23, severe, 2+ antidepressant trials), reported above, found that TMS responders showed more negative baseline FC between their (on average, ECN) left DLPFC target and clusters of the anterior and posterior DMN (but more positive FC between the target and SN, ECN, and a sensorimotor network). They used a sham-controlled study design, although this analysis did not contrast effects with those of sham stimulation.

3.3.3.2. Associations between change in connectivity and symptom improvement. Moreno-Ortega et al. (2020a) (1.18), in an open-label study, found that HDRS-17 improvement following 36 sessions targeted to an ECN region of left DLPFC was associated with increase in FC between the target and a DMN parcel, in non-responders to previous rTMS (N = 10, severe, 1+ antidepressant trial).

3.3.4. Connectivity between the SN and ECN

3.3.4.1. Baseline predictors of symptom improvement. Consistent results were obtained in open-label studies that reported FC between the SN and ECN and used either 10 Hz rTMS or iTBS. Fu et al. (2021) (1.21) found that greater improvement on the HDRS-17 was associated with more positive FC at baseline between anterior insula (SN) and an ECN left DLPFC region (r_s = 0.66, N = 27, moderate, no current antidepressants and minimal treatment history). Of note, Fu et al. also found that greater baseline white-matter structural connectivity (quantified using "fractional anisotropy") between these regions was also associated with greater improvement (r_s = 0.46). Rosen et al. (2021) (1.23), reported above, also found that more positive baseline FC between the left DLPFC target and clusters of the SN was associated with greater HDRS-17 improvement (N = 23, severe, 2+ antidepressant trials; they used a sham-controlled study design, but this analysis did not contrast effects with those of sham stimulation). Iwabuchi et al. (2019) (1.13) found that greater "net outflow" from right anterior insula (rAI) to left DLPFC at baseline was associated with greater improvement on the HDRS-17 following sixteen once-daily sessions of iTBS or 10 Hz rTMS (N = 27, moderate, 75 % taking antidepressants, 1+ prior antidepressant trial; participants were randomised to either stimulation type, but there was no sham control). The left DLPFC target was identified from within the ECN. Net outflow was defined as directed, "effective", connectivity (EC) from right AI to left DLPFC minus EC from left DLPFC to right AI. Unlike FC, which is calculated from the correlation of the activity time courses between brain regions, EC refers to the directed influence of one region on another, calculated here using Granger Causality Analysis.
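To make the "net outflow" definition concrete, the following minimal Python sketch implements a lag-p, Geweke-style Granger causality estimate (the log ratio of residual variances with and without the source region's past) in both directions, and takes the difference. This is a generic illustrative reconstruction of the approach, under assumed lag order and estimator choices, not the specific pipeline used by Iwabuchi et al.

```python
import numpy as np

def residual_variance(y, regressors):
    """Least-squares fit of y on regressors (plus intercept); residual variance."""
    X = np.column_stack([np.ones(len(regressors)), regressors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ beta)

def granger_strength(source, target, p=1):
    """Geweke-style measure: how much the source's past improves prediction of
    the target beyond the target's own past, as log(var_reduced / var_full)."""
    T = len(target)
    y = target[p:]
    own_past = np.column_stack([target[p-k-1:T-k-1] for k in range(p)])
    src_past = np.column_stack([source[p-k-1:T-k-1] for k in range(p)])
    var_reduced = residual_variance(y, own_past)
    var_full = residual_variance(y, np.column_stack([own_past, src_past]))
    return np.log(var_reduced / var_full)

def net_outflow(x, y, p=1):
    """Net directed influence from x to y: GC(x -> y) minus GC(y -> x)."""
    return granger_strength(x, y, p) - granger_strength(y, x, p)

# Synthetic check: x drives y with one-sample lag, so net outflow is positive.
rng = np.random.default_rng(1)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t-1] + 0.8 * x[t-1] + 0.1 * rng.standard_normal()
print(net_outflow(x, y))  # > 0: information flows predominantly x -> y
```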
3.3.5. Connectivity within the SN

3.3.5.1. Baseline predictors of symptom improvement. Two studies that used once-daily 10 Hz rTMS, in people with MDD, most of whom were taking antidepressants, obtained consistent results regarding FC within the SN. Ge et al. (2017) (1.7), in an open-label design, found that responders on the HDRS-17 after twenty sessions had shown higher baseline FC within the SN (anterior insula, dorsal anterior cingulate) (N = 18, moderate, 1-3 antidepressant trials). Fan et al. (2019) (1.12) found that improvement on the MADRS following twenty sessions was associated with greater functional segregation of the SN at baseline; that is, greater within-network FC and lower FC between the SN and all other networks (N = 32, mild, 1+ antidepressant trial). Fan et al. used a sham-controlled design but did not identify significant differential predictors of response to sham or active treatment.

In contrast, Iwabuchi et al. (2019) (1.13), reported above, found that improvement following sixteen sessions of 10 Hz rTMS or iTBS (targeted at the voxel within ECN left DLPFC showing greatest baseline effective connectivity from right anterior insula) was associated with lower baseline within-network connectivity of the salience network (N = 27, moderate, 1+ antidepressant trial). This study differs in TMS type (10 Hz rTMS versus iTBS) and targeting method (pre-selected MNI co-ordinates versus connectivity-based targeting).

3.3.5.2. Associations between change in connectivity and symptom improvement. Godfrey et al. (2022) (1.26) found that improvement on the MADRS after twenty sessions of open-label 10 Hz rTMS was associated with reductions in FC within the SN (N = 26, moderate, 2+ antidepressant trials). In contrast, in people with puerperal-onset MDD, Zhang et al. (2022) (1.30) found that improvement on the Edinburgh Postnatal Depression Scale (EPDS) was associated with increase in FC between left and right insula (SN), following open-label iTBS delivered in an accelerated protocol (ten sessions/day across five consecutive days, fifty sessions in total, with targeting based on sgACC connectivity) (N = 31, severe, not taking antidepressants).

3.3.6. Connectivity within the DMN

3.3.6.1. Baseline predictors of symptom improvement. Mixed results have been obtained regarding baseline within-network DMN connectivity, in people with MDD, most of whom were taking antidepressants. Using the same data as Cash et al. (2019a), Cash et al. (2019b) (1.11) found that lower baseline FC within a DMN network (and within an "affective" network) was associated with greater improvement on the MADRS following 20 once-daily sessions of open-label 10 Hz rTMS (N = 47, moderate-severe, 2+ antidepressant trials). Similarly, Taylor et al. (2018) (1.9) found that baseline FC between posterior (posterior cingulate cortex) and anterior (inferior frontal gyrus) DMN was lower for HDRS-17 responders than non-responders to twenty once-daily sessions of 10 Hz rTMS (N = 32, mild, 1+ antidepressant trial). Responders also showed lower baseline FC between posterior DMN and SN (right AI). Although Taylor et al. used a sham-controlled cross-over design, their baseline analysis collapsed across active and sham phases. Ge et al. (2017) (1.7), discussed above, produced opposing results, finding that HDRS-17 responders showed higher within-DMN FC at baseline than non-responders (N = 18, moderate, 1-3 antidepressant trials, open-label).

3.3.6.2. Associations between change in connectivity and symptom improvement. Tang et al. (2021) (1.24) found that response to open-label iTBS delivered in an accelerated protocol (ten sessions per day, five consecutive days) was associated with pre- to post-stimulation increase in FC within the DMN (as well as decrease in FC between the ECN and SMN) (N = 15, severe, all with suicidal ideation). In contrast, Moreno-Ortega et al.
(2020a) (1.18), discussed above, found that HDRS-17 improvement in a group of TMS-naïve participants was associated with FC decrease between the (on average, DMN) left DLPFC target and a DMN atlas parcel following treatment (N = 22, severe, 1+ antidepressant trial, open-label).

Connectivity studies targeting DMPFC

Downar et al. (2014) (3.1) used a graph-theoretic analysis to identify a region of ventromedial pre-frontal cortex (VMPFC) whose baseline connectivity distinguished responders and non-responders (N = 47, including 9 bipolar, moderate, 1+ antidepressant trial). There appeared to be an effect of laterality in their findings. Responders on the HDRS-17 showed more positive baseline FC between VMPFC and ECN (DMN-ECN, consistent with 1.17 and 1.22 in the studies that stimulated left DLPFC), but also between VMPFC and SN, and primarily left-sided DMN areas; whereas responders showed more negative baseline FC between VMPFC and other SN regions, as well as primarily right-sided DMN areas.

Salomons et al. (2014) (3.2) (N = 25, including 4 bipolar, 2+ antidepressant trials) examined FC with a DMPFC seed region, finding that more positive baseline FC between the DMPFC and a cluster incorporating the sgACC and VMPFC was associated with improvement on the HDRS-17. Reduction in FC between DMPFC and sgACC from pre- to post-treatment was also associated with improvement. Interestingly, more positive baseline FC between sgACC and ECN/SN regions of the DLPFC was associated with improvement (opposite to most studies that targeted left DLPFC).

In two sham-controlled studies with overlapping samples, Persson et al. (2020) (3.3) and Struckmann et al. (2022) (3.4) studied functional connectivity changes following ten days of twice-daily iTBS to DMPFC bilaterally. Persson et al. found that improvement on the MADRS was associated with more negative baseline FC between sgACC and precuneus (posterior DMN), as well as between precuneus and the DMPFC target (N = 23, including 2 bipolar, moderate, approx. 90 % taking antidepressants). Increase in FC between precuneus and the target from pre- to post-stimulation was associated with improvement, an effect present only following active TMS. Finally, Struckmann et al. (N = 34, including 3 bipolar) found that greater improvement on the MADRS was associated with greater reduction in FC between an SN left DLPFC area and a sensorimotor left insula area post-stimulation. Again, this effect was present only following active TMS.

Discussion

Most included studies stimulated left DLPFC with a 10 Hz rTMS or iTBS protocol, and measured improvement using the HDRS-17. Nevertheless, there was considerable heterogeneity in the findings. No consistent relationships were identified in the attempted meta-analysis. This may have been due to: in some cases, a focus on specific brain regions; low statistical power (most studies included between 17 and 63 participants); and methodological and sample differences. The absence of pre-published analysis protocols also raises concerns around false positive findings. Connectivity relationships involving the sgACC featured prominently and are considered first below.

Connectivity of the sgACC

There is evidence from positron emission tomography (PET) studies that depression is associated with elevated metabolic activity (increased cerebral blood flow and glucose metabolism) within the sgACC, and that effective antidepressant treatment is associated with reductions in this metabolic activity (Drevets et al., 2008). More recently, Argyelan et al.
(2016) studied a measure of brain activity derived from rsfMRI, "fractional amplitude of low frequency fluctuations" (fALFF), which has been shown to be correlated with PET-measured glucose metabolism (yet has higher spatial resolution) (Aiello et al., 2015). They found greater fALFF within the sgACC in people with depression than in healthy controls at baseline, which normalised across a course of electroconvulsive therapy treatment.

Due to the PET (and later fALFF) findings, the demonstration by Fox et al. (2012) that TMS efficacy is related to more negative baseline FC between the sgACC and the stimulated region of left DLPFC within the ECN suggested a plausible mechanism of the effects of TMS in MDD. Under this rationale (and assuming that negative baseline FC is underlain by negative EC, i.e. directed influence, from left DLPFC to sgACC), excitation of the superficial (and, thus, TMS-accessible) left DLPFC target could exert suppressive effects on the deeper sgACC (which is inaccessible to TMS), normalising its activity. The relationship was initially demonstrated using connectivity calculated from separate, large-scale, connectome data, rather than individual-level connectivity. Such studies do not meet inclusion criteria for this review. Later work showed the relationship using individual-level connectivity data, as included in the review (Cash et al., 2019a, 2021; Elbau et al., 2023; Ge et al., 2020; Rosen et al., 2021), but the relationship is not reported in all studies that use a whole-brain approach, and there are opposing data (Hopman et al., 2021).

There has been complexity in interpreting the findings around metabolic activity of the sgACC. There are reports of reduced metabolic activity in MDD measured with PET, which have been attributed to inadvertently measuring from surrounding tissue areas due to a reduction in sgACC size in MDD (a so-called "partial volume" artefact) (Drevets et al., 2008). This could remain an issue in fMRI analyses using fixed-volume regions-of-interest across participants. In addition, the sgACC lies close to an air-filled sinus, and air-tissue interfaces create artefacts in MRI. Further, there is ongoing debate about the meaning of anti-correlations in FC analyses (Murphy and Fox, 2017). This is because they primarily (although not exclusively (Chai et al., 2012; Chang and Glover, 2009)) arise when removal of the overall, "global", brain signal is employed as a pre-processing step (Murphy et al., 2009). This step serves to reduce the influence of widespread non-neuronal signals (such as changes in arterial CO2 concentration). Mathematically, the step enforces zero-centring of connectivity values across the brain. Modelling studies indicate this can introduce spurious anti-correlations (Saad et al., 2012), yet anti-correlated BOLD signals were, for the most part, associated with anti-correlated neuronal signals when measured from the same people in a study using recordings from subdural electrodes that had been implanted for seizure monitoring (Keller et al., 2013).
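To illustrate the global signal regression step discussed above, the following minimal Python sketch regresses the mean ("global") time series out of each region's time course before computing FC. The variable names, dimensions, and synthetic "widespread" signal are illustrative assumptions; the print statement shows the zero-centring effect on correlations that the text describes.

```python
import numpy as np

def regress_global_signal(ts):
    """Remove the global mean signal from each regional time course.

    ts : (n_timepoints, n_regions) array of region-averaged BOLD signals.
    Each column is orthogonalised against the global signal (mean across
    regions) by ordinary least squares; subsequent correlations are then
    zero-centred across the brain, which is what gives rise to apparent
    anti-correlations.
    """
    g = ts.mean(axis=1, keepdims=True)             # global signal, shape (T, 1)
    X = np.column_stack([np.ones(len(ts)), g])     # intercept + global signal
    beta, *_ = np.linalg.lstsq(X, ts, rcond=None)  # fit all regions at once
    return ts - X @ beta                           # residual time courses

rng = np.random.default_rng(0)
shared = rng.standard_normal((200, 1))             # widespread non-neuronal signal
ts = shared + 0.5 * rng.standard_normal((200, 8))  # eight regions sharing it
fc_before = np.corrcoef(ts, rowvar=False)
fc_after = np.corrcoef(regress_global_signal(ts), rowvar=False)
off = ~np.eye(8, dtype=bool)
# Off-diagonal FC is pulled toward (and slightly below) zero after regression
print(fc_before[off].mean().round(2), fc_after[off].mean().round(2))
```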
The most recent (and largest) included study, from Elbau et al. (2023), confirmed the relationship between clinical improvement and more negative baseline FC between sgACC and the left DLPFC target, but found this was weaker than reported in studies with smaller sample sizes. Strong relationships were observed in patients showing substantial variation in the overall, global, brain signal, which was associated with a specific, burst, breathing pattern. They speculate either that the global signal variations and burst breathing patterns are associated with fMRI signal (blood oxygenation level dependent, "BOLD") events that facilitate the measurement of negative (anti-correlated) functional connectivity relationships, or that there is a subset of patients, who have this specific breathing pattern, for whom sgACC FC is most critical to rTMS efficacy. Further work is clearly needed to understand the implications of their findings.

Despite the underlying rationale of modulating the sgACC via stimulating left DLPFC (a motivation for the targeting approach in the increasingly popular SAINT protocol (Cole et al., 2020)), there has been limited demonstration that the sgACC is effectively modulated by the approach. Recent TMS/fMRI studies have examined immediate changes in the BOLD response from sgACC following single pulses of left pre-frontal stimulation. In two studies, suppression of sgACC activity was observed, which, in people with MDD, was greater when targeting cortical regions with more positive baseline FC with sgACC (Duprat et al., 2022; Oathes et al., 2021). In a third study, targeting the DLPFC location with greatest negative baseline FC with the sgACC led to increases in sgACC activity (Tik et al., 2023). Given that suppression of sgACC over-activity is desired, these findings from single-pulse studies stand in contrast to findings that rTMS to the DLPFC location with greatest negative baseline FC with the sgACC is associated with greatest clinical improvement. One explanation could be that repeated induced excitation of sgACC via DLPFC may, over the course of an rTMS session or over multiple rTMS sessions, lead to longer-term sgACC suppression. Alternatively, multiple sessions of rTMS may first act by increasing connectivity between DLPFC and sgACC, before then suppressing the activity of the sgACC itself. There was some evidence for an association between clinical improvement and modulations of sgACC connectivity in the studies identified in this review (Ge et al., 2020; Philip et al., 2018).

Interestingly, one included study that targeted a DMPFC region within the DMN found that more positive baseline FC between the target (which was within the DMN) and the sgACC was associated with greater response (Salomons et al., 2014). In studies using left DLPFC targets, better response was associated with more positive baseline FC between sgACC and DMN regions in a non-accelerated paradigm (Liston et al., 2014), but more negative baseline FC in an accelerated paradigm (Baeken et al., 2014). Understanding the dynamics of sgACC modulation within and across treatment sessions, and how this relates to treatment parameters and baseline FC between the sgACC and the targeted region, or between the sgACC and other ECN/DMN regions, would appear a valuable next step in understanding the variability in findings and how to optimise treatment approaches.
Other connectivity relationships

The recently completed BRIGhTMIND trial of connectivity-guided iTBS versus standard rTMS for treatment-resistant MDD (Morriss et al., 2024) used connectivity involving a region of the anterior insula, based on preceding pilot work (Iwabuchi et al., 2014, 2017, 2019) and meta-analytic evidence that the activity of the anterior insula, which resides within the SN, could distinguish responders to different treatment options for depression (McGrath et al., 2013). There was some evidence in this review that greater baseline FC between the SN and ECN was associated with greater response to (ECN) left DLPFC-targeted treatment (Fu et al., 2021; Rosen et al., 2021) (also found for structural connectivity (Fu et al., 2021)), and that greater baseline FC between the SN and DMN was associated with greater response to (DMN) DMPFC-targeted treatment (Downar et al., 2014). The findings of our BRIGhTMIND trial (analysed and published after the inclusion cutoff date for this review) suggest that the balance of influence (difference in effective connectivity), at baseline, between the SN (anterior insula) and the left DLPFC target may predict treatment outcomes (Morriss et al., 2024). More positive effective connectivity from the left DLPFC target to the anterior insula, in particular, was associated with greater response. Evidence identified in this review of associations between clinical improvement and baseline FC within the SN, or changes in FC within the SN, highlights the potential importance of modulating the activity of the insula and its associated network (albeit again with inconsistency of effect directions) (Fan et al., 2019; Ge et al., 2017; Godfrey et al., 2022; Iwabuchi et al., 2019; Zhang et al., 2022). Thus, the anterior insula and the associated salience network may represent alternative indirect treatment targets for cortical TMS.

One of the most consistent functional connectivity differences in MDD identified in a large meta-analysis was abnormally elevated connectivity between the ECN and DMN, a finding that could reflect intrusion of DMN-mediated internal-world processing and rumination on ECN-mediated external-world processing and task performance (Kaiser et al., 2015). Included left DLPFC studies showed evidence for associations between baseline ECN-DMN FC, and change in ECN-DMN FC, and clinical improvement (Ge et al., 2020; Hopman et al., 2021; Moreno-Ortega et al., 2020b; Rosen et al., 2021), as well as for associations between improvement and baseline FC within the DMN or change in within-DMN FC (Cash et al., 2019b; Ge et al., 2017; Moreno-Ortega et al., 2020b; Tang et al., 2021; Taylor et al., 2018). The BRIGhTMIND trial found that greater reduction in ECN-DMN FC (specifically, between left DLPFC and DMPFC) from baseline to follow-up was associated with greater improvement on self-reported measures of depression (Morriss et al., 2024). It remains to be determined whether such changes are driven by other aspects of clinical improvement, such as improvement in concentration and attention.
It should be noted that several studies identified relationships between clinical improvement and baseline FC, or change in FC, involving subcortical regions such as the striatum or amygdala (Avissar et al., 2017; Chen et al., 2020; Du et al., 2017; Kang et al., 2016; Philip et al., 2018; Salomons et al., 2014). Whilst reported in the Tables, these have not been the focus of our Results or Discussion due to specific connectivity relationships being reported in isolated studies. Given the role of the striatum in reward processing (Delgado, 2007), and of the amygdala in the experience of fear and anxiety (Davis, 1992), approaches that successfully modulate these regions might be able to treat specific facets of MDD. Further work is needed on the relevance of connectivity relationships involving these areas to the effectiveness of TMS. Our recent systematic review of connectivity features associated with anxiety symptoms in people with MDD suggested that re-establishing connectivity between the amygdala and other key brain networks may be an important treatment goal (Briley et al., 2022). Perhaps in line with the few relationships identified involving amygdala connectivity in the current review, there is evidence that left DLPFC rTMS or iTBS has less effect on anxiety symptoms than on other symptoms associated with MDD (Kaster et al., 2023).

Finally, studies that move beyond pair-wise connectivity metrics and use graph-theoretic approaches to capture the complexity of network changes following stimulation are needed. Only two included studies (1.14/2.1), examining left DLPFC stimulation, used such approaches (Caeyenberghs et al., 2018; Klooster et al., 2019) (one additional study, 3.1, examining DMPFC stimulation, used a graph-theoretic measure to determine the region-of-interest for subsequent analyses (Downar et al., 2014)).

Limitations

Whilst we searched for studies examining connectivity measured with any MRI approach, and either task or rest paradigms, the included studies are dominated by those examining resting-state FC with fMRI. Despite the large amount of information on brain networks, and network abnormalities, that has been provided by resting-state studies, "rest" is not a homogeneous state. Connectivity during tasks may provide greater interpretability (Finn, 2021). More work is needed examining structural connectivity relationships as, despite some overlap with the information provided by functional connectivity measures (Greicius et al., 2009), structural change represents an enduring influence of neuromodulation approaches (Damoiseaux and Greicius, 2009). Most studies examined single follow-up time points, shortly after the final stimulation session. There is little information, therefore, on predictors of longer-term or sustained clinical improvement, or on changes in relationships between predictors and outcomes.
Most included studies were small, and under-powered to detect all effects of interest. Some of the heterogeneity in findings likely reflected a study's focus on specific brain regions (partly as a strategy to mitigate having to correct for multiple comparisons with a small sample size). Some of the differences in effect directions may be due to differences in participant characteristics. MDD is known to be a heterogeneous disorder (Goldberg, 2011). It is conceivable that some subtypes of MDD may respond differently to stimulation due to differences in the underlying pathophysiology. In addition, the primary measure of MDD used in the included studies (the HDRS-17) is itself multi-factorial (Nixon et al., 2020). The included studies report change in total score on the HDRS-17, which is not, therefore, trivial to interpret. Improvement in one factor may conceivably be accompanied by decrements in another. Studies identifying associations with improvement on specific groups of HDRS-17 items may be valuable. Some work using connectivity features to identify sub-types ("biotypes") of MDD has been conducted, although the meaning of these is under debate (Dinga et al., 2019; Drysdale et al., 2017).

A small number of patients with a primary diagnosis of bipolar disorder were present in a few of the included studies (three studies that delivered TMS to DLPFC (Liston et al. (2014), 1.2; Avissar et al. (2017), 1.4; Godfrey et al. (2022), 1.26) had between two and three people with bipolar disorder, whilst all studies that delivered TMS to DMPFC included people with bipolar disorder). As per our pre-published protocol, we included studies in which at most 20 % of the sample had bipolar disorder, as we were aware that early TMS studies had accepted a small number of patients with bipolar disorder if they had predominant depressive episodes. Study authors reported that the inclusion of these patients did not affect their results. Nevertheless, this caveat should be borne in mind when viewing the link between DMN and sgACC FC in Fig. 2, and the results for studies that used DMPFC stimulation.

Finally, whilst several studies used data from sham-controlled trials, only a few were analysed in such a way as to be able to distinguish the contribution of non-stimulation-specific effects to observed relationships between clinical improvement and either baseline connectivity (Fan et al., 2019; Klooster et al., 2020; Rosen et al., 2021) or change in connectivity (Baeken et al., 2017; Chen et al., 2020; Kang et al., 2016; Klooster et al., 2019; Persson et al., 2020; Struckmann et al., 2022). Recently, Wu et al. (2020) have specifically examined baseline FC predictors of clinical improvement following sham TMS. Greater improvement was associated with greater baseline within-DMN FC (between rostral anterior cingulate cortex, rACC, and precuneus / posterior cingulate cortex) and, potentially, DMN-ECN FC (between rACC and middle frontal gyrus, MFG, though the MFG is a more heterogeneous region).
Conclusions

We bring together studies on brain connectivity predictors of improvement, and brain connectivity changes associated with improvement, in MDD following primarily left DLPFC stimulation. Some relationships show promise, such as those involving the targeted area or its associated network and the sgACC or anterior insula (or the relationship between the ECN and the DMN). This is consistent with the hypothesis that TMS to superficial cortical areas acts by modulating the activity of connected deeper (and thus TMS-inaccessible) areas, a hypothesis that is only now being studied directly (Duprat et al., 2022; Oathes et al., 2021). Progress will depend on understanding the transmission of the effects of TMS from the targeted area to other brain areas, how these effects change within and across sessions, and how they manifest in clinical improvement. Most of the identified studies were small, and there was considerable heterogeneity in reported effects. This may partly reflect differences in patient characteristics, stimulation protocol and analysis method. Notably, there were no studies that used a targeting approach designed to optimise any specific baseline connectivity relationship at the individual level. Replication of the findings in Fig. 2 is needed in larger sample sizes, ideally facilitated by collaboration and synthesis of data at the patient level across studies. A move towards pre-published analytical protocols is essential for reducing researcher degrees of freedom and the risk of false positive findings. Further work is also needed on non-stimulation-specific contributors to improvement (placebo, and sensory, effects). The recent demonstration that some of the heterogeneity in sgACC findings may be due to between-patient differences in contributors to the global brain signal needs replication and further exploration (Elbau et al., 2023). Further work using task-based approaches, or techniques beyond fMRI, is needed, given limitations in the interpretability of resting-state fMRI findings (Finn, 2021). Greater understanding of brain activity changes following TMS (measured by changes in cerebral blood flow with PET or ASL), and of connectivity changes measured with alternative techniques such as magnetoencephalography, is also needed to provide a stronger framework for interpreting FC changes measured with fMRI. We conclude by summarising a list of recommendations to enhance confidence in the findings of future studies and to understand the identified heterogeneity (Table 4). Whilst TMS is an effective treatment for MDD at the group level, connectivity-informed approaches for predicting, or optimising, treatment response, to reduce response heterogeneity and ensure that more patients can gain maximal, and rapid, benefit from these techniques, are still needed.

Declaration of competing interest

None.
Fig. 2. Summary of relationships between baseline functional connectivity and clinical improvement following a course of TMS to left dorsolateral prefrontal cortex, for connectivity relationships reported in at least two included studies. sgACC: subgenual anterior cingulate cortex; DMN: default mode network; ECN: executive control network; SN: salience network. Networks assigned as per nearest neighbour in the Power et al. (2011) atlas. Numbering corresponds to that used in the Tables and indicates the studies contributing to each relationship (with the direction of the relationship for each study represented by a minus or plus sign). Relationships for which most studies indicate a positive direction (i.e., greater, or more positive, baseline connectivity associated with greater clinical improvement) are indicated by green solid arrows. Red dashed arrows are used when most studies indicate a negative direction (lesser, or more negative, baseline connectivity associated with greater improvement). Grey dotted arrows indicate equipoise between studies in effect direction. Highlighted studies used a sham-controlled design and report a relationship that differed between the sham and active stimulation arms.

Table 1. Associations between functional/effective connectivity and clinical improvement in studies delivering TMS to left DLPFC.

Table 2. Associations between structural connectivity and clinical improvement in studies delivering TMS to left DLPFC.

Table 3. Associations between connectivity and clinical improvement in studies delivering TMS to left DMPFC.
The Nurse as an Information Broker for Children with Terminal Illness: A Qualitative Study

This research was conducted qualitatively with a content analysis approach. In-depth interviews with 8 nurses and a focus group discussion with 7 nurses were the data collection methods used in this study. The sampling technique in this research was purposive sampling, with inclusion criteria of nurses who were willing to participate in this research, held a minimum education of a diploma degree in nursing, and had at least 3 years of working experience in the pediatric ward.

INTRODUCTION

Nurses face a great challenge in providing the best care for children with terminal illness and their families, to fulfill their physical, psychological, spiritual, and emotional needs over the course of an uncertain and even long-lasting illness [1]. Nurses may be the first to meet children and families, and they spend more time with them than other health workers [2]. End-of-life care for pediatric patients is holistic care intended to improve quality of life despite the prognosis [3]. Although a child has been diagnosed with an incurable illness, nursing care must remain active to maintain the child's comfort as well as the child's quality of life, so that the child can develop as optimally as possible. Nurses play an important role in caring for the child who is dying and his/her family. Providing care for a dying child is a multidisciplinary and family-centered process. Nurses should acknowledge the physiological, emotional, and spiritual needs of the child and the family during this difficult time. Children show different responses to the imminent process of dying and death, depending on their level of development [2]. According to the American Nurses Association (ANA) (2016), nurses are obliged to provide comprehensive and compassionate care for dying children, recognize when the child is about to take his/her last breath, and deliver this information to the family [4].

The primary responsibility of a nurse is to provide nursing care to children and families. Nurses should cooperate with family members to identify their goals and needs, and plan the most appropriate interventions to address the problem [1]. Nurses perform several roles when they provide care to clients, and often carry out these roles concurrently, not exclusively [5]. The roles of nurses include, among others, advocate, health care provider, communicator, team member, educator and leader [6]. Providing interventions to reduce suffering and improve quality of life at each and every stage of the illness is the focus of palliative care in children, and a multidisciplinary team is the most effective approach to treatment like this [7]. Within such a team, nurses play an essential role by recognizing symptoms, coordinating care, and facilitating communication [7]. Providing end-of-life care for pediatric patients and their families is a privilege and also a challenge for nurses. Helping patients at the end of their life enables nurses to apply many of the technical and mental health skills learned throughout the nursing program [6]. A nurse can help parents by providing detailed information about what will happen if the supportive equipment is removed, ensuring that the pain medication given is appropriate for relieving pain during the dying process, and allowing parents to be with their child and talk to him/her before the equipment is withdrawn.
It is important for nurses to attempt to control the environment by providing privacy, asking the family whether they would like to listen to music, dimming the lights and turning down the sound of the monitors, and organizing any religious or cultural rituals that the family might want to perform [1]. Nurses often perform multiple roles simultaneously. Similarly, when a nurse cares for a child with a terminal illness, the roles needed at any given time depend on the patient's needs and the particular situation [5]. While caring for a child with a terminal illness, nurses go through many experiences; one of them relates to decision-making. Making end-of-life decisions often involves ethical dilemmas for children, families and the other members of the health care team [2]. Ambiguity in the roles of health professionals can be a complicating factor in decision-making at the patient's end of life [8]. Furthermore, nurses must collaborate with the rest of the health care team to ensure optimal management of symptoms and to provide support for patients and families [4]. This study was conducted to identify the roles of nurses in providing nursing care to children with a terminal illness and to explore nurses' perspectives on how they perform their role in caring for terminally ill children.

Study Design

A qualitative research design with a content analysis approach was used in this study to explore nurses' perspectives on how they perform their role in caring for children with terminal illnesses.

Setting and Participants

Purposive sampling was the sampling technique used in this study. The main objective of this technique is to focus on particular attributes of a population of interest that will best answer the research questions [9]. Nurses with a minimum education of a diploma degree in nursing, who were willing to be involved and participate, and who had at least 3 years of working experience in the pediatric ward were eligible for this research. In this hospital, nurses are routinely rotated between wards; hence, establishing the years of experience in caring for children helped to ensure the quality of the nursing accounts obtained. Clinical nursing expertise relates to the nurses' education level and years of experience [10]. Fifteen nurses participated in this study, representing the pediatric surgical ward, the internal disease pediatric ward, the pediatric intensive care unit (PICU), and the neonatal intensive care unit (NICU).

Ethical Consideration

This study adheres to the ethical principles of beneficence, autonomy, justice, non-maleficence and protection of the rights of the individuals involved, and was approved by the ethical committee on health research of RSUP Dr. Hasan Sadikin Bandung with approval number LB.04.01/A05/EC/191/VII/2017.

Data Collection

In this study, the qualitative data were based on information obtained from the participants regarding how nurses performed their role in caring for children with a terminal illness. Once ethical approval was obtained from the ethics committee and the researcher obtained permission, respondents were recruited in accordance with the inclusion criteria and informed consent was taken. The focus group discussion (FGD) and in-depth interviews were conducted from August to November 2017. The FGD was conducted for 60 min, and in-depth interviews with nurses were performed for 30-60 min. Through these techniques, the researchers gained information about how nurses performed their roles in providing care for children with a terminal illness.
Qualitative research instruments were used in this study, with the researchers themselves as the main instrument. Interviews were recorded and then transcribed. A chronological note was made to describe any notable event during the interviews.

Data Analysis
Content analysis was used to analyze the data in this study. Data collection was carried out simultaneously with data analysis. The information obtained was arranged into meaning units to be coded, and categories were created based on the tendencies and patterns of the words used and the structure of the discourse. The codes were then grouped and a general description of the research topic was written. The last phase reports the analysis process and presents the results [11-13]. The validity of this study is based on several criteria: credibility, transferability, dependability, and confirmability. The data collected were relayed back to the respondents to obtain their views regarding the interpretation of the data. The respondents were representative of various pediatric wards. During the data collection process, the analysis involved researchers outside the team, who examined the analysis process, the interpretation of the data, and whether the conclusions were supported by the data. The research team reviewed the data independently, then held discussions on the findings and reached an agreement [14-16].

RESULTS
Fifteen nurses participated, each holding at least a bachelor's degree, aged 25-45 years, and having 5-20 years of clinical nursing experience with pediatric patients (Table 1). The results of the data analysis indicate that nurses perform the following roles in providing care for children with a terminal illness, and that while performing these roles, they act as an information broker between the patient/family, the physician, and the other health care providers.
[Table: the coding structure of the analysis, with columns Meaning Unit, Subcategories, Categories, and Main Category. Meaning units (respondent quotes) were grouped into subcategories and then into the categories communicator, counselor, collaborator, advocator, educator, and care provider, all under the main category of the nurse as an information broker. The categories, subcategories, and supporting quotes are presented in the subsections below.]
Nurse as a Communicator
Performing the role of a communicator, a nurse's duties include: (1) repeating the information that the doctor has communicated until the family understands and accepts the condition of the child; (2) translating what the doctor has explained into a language that is easily understood by parents and children; (3) becoming a supporting factor in the explanation that has already been delivered by the doctor; and (4) clarifying the information that has already been communicated by the doctor. Some respondent statements related to these subcategories are as follows:

"Sometimes doctors give only a cursory explanation, we repeat it again" (R1).

"As a communicator between the doctor and the mother, we help to convey the opinions of the mother and also the child. We help to explain" (R10).

"So, we explain also to the patient's family. Translating back what the doctors say, with the language they can understand" (R12).

Nurse as a Counselor
In this role, several subcategories were identified, among them: (1) nurses overcome parents' anxiety by providing information; (2) nurses help to calm the family when the condition of the terminal disease of their child is communicated to them in detail; (3) nurses provide motivation to families when families receive any bad news related to the condition of the child; and (4) nurses communicate with empathy and sympathy, ensuring family satisfaction in receiving information so that the family is able to accept the terminal condition of the child. Some respondents' statements related to this subcategory are as follows:

"Sometimes the nurse goes with the doctor while explaining the terminal patient, the nurse is with the doctor to help to calm the family" (R15).

"There are times when the doctors are more theoretical, not touching the heart. We as a nurse should be more touching, even though speaking overtly, as it is, but the family understands. We clarify, repeat, and ask again whether the family has already understood or not about what has been said by the doctor" (R14).

Nurse as a Collaborator
In this role as a collaborator, nurses prevent patients from acquiring infections during their stay in the hospital and arrange nutrition and rehabilitation programs for patients by communicating the patient's needs to the other members of the health care team, such as other nurses, doctors, nutritionists, administration, and pharmacy.
Some respondents' statements related to this subcategory are as follows:

"This patient has cancer and we have done all the interventions and the treatments, … the treatment is only for supportive. Well on the other side, if the patient is hospitalized continuously it will cause another infection, Meanwhile, if treated at home, he can sleep quite well and enjoy a peaceful life" (R8).

"If the patient should be in rehabilitation, we should communicate with the doctor in charge" (R10).

"Everything is related, we cannot stand alone, so in providing service, we have to collaborate well with fellow nurses, doctors, nutritionists, administration, and also pharmacy. We have to communicate well about the needs of patients to the team" (R11).

Nurse as an Advocator
The following are the roles of nurses in this category: (1) nurses help the patients convey a rejection of the doctor's treatment or of life support equipment; (2) nurses help the family in making a decision on whether the child in terminal condition will remain hospitalized or will be treated at home; (3) nurses prevent the patient from acquiring any infection; and (4) nurses protect the patients' right to die peacefully and with dignity. Some respondents' statements related to this subcategory are as follows:

"Sometimes the doctors just collect the blood. Patients and parents will be more courageous to express a rejection of an intervention to us, nurses. For example, regarding collecting of blood in the femoral, for the examination of BGA. The doctors sometimes just collect the blood just like that. We, nurses, convey to the doctor related to the rejection of the intervention and provide the opportunity for the doctor to make informed consent first" (R12).

"This patient has cancer and we have done all the interventions and the treatments, so if according to the doctor there is no more option, but the patient is still able to communicate. … the treatment is only for supportive. Well on the other side, if the patient is hospitalized continuously it will cause another infection, Meanwhile, if treated at home, he can sleep quite well and enjoy a peaceful life" (R8).

Nurse as an Educator
In this role, nurses provide health education to children and parents and reiterate the doctor's explanation about the treatment program; nurses provide health education during discharge, planning sustainable care at home; and nurses explain to the parents the care received by the child, the examination procedures and medication that must be followed, and every progress of the child's condition. Some respondents' statements related to this subcategory are as follows:

"We provide information like what kind of touch that can help appease the child; that's one of our roles as educator" (R6).

"Parents will inevitably ask for an overview of the treatment program that should be followed by the child, after being given an explanation by the doctor" (R2).

"Providing insight that the child does not have to be in the hospital to get the treatment. Children can also get home care" (R8).

"Importance related discharge planning, about the nutrients, recommendation on wearing a mask when going out of the house. If necessary, the patient is not always too close to his friends; it is worried that if the friend is having a cough and the patient is exposed to cough, it will aggravate the illness" (R10).
"For example, the provision of medication or treatment, which should be informed by the doctor in detail, about side effects, how therapy is given, how it is administered and how long it will be given. Sometimes the doctor is often busy, so the doctor just explained 'later the patient will be given medicine, in the form of injection'. The doctor does not explain the side effects, but parents approve it, even though they still do not understand. Then parents become anxious, and their anxiety about it becomes questions to the nurse, anxiety due to knowledge deficit and uncertainty." (R2). Nurse as a Care Provider In this role (Fig. 1), it is identified that nurses provide direct care to patients with good communication and provide detailed information related to nursing interventions. Some respondents' statements related to this subcategory are as follows: "The rest is direct care or caregiver is obliged to use good communication and provide detailed information when to conduct nursing intervention" (R6). DISCUSSION Based on the results, while performing their roles in children's life with terminal illnesses, nurses are seen as informers. As identified when a nurse acts as a communicator, he/she repeats, translates, and clarifies the explanation that has already been delivered by the doctor until the family is able to comprehend and accept the condition of the child. To perform this role, nurses must have excellent communication skills and also good knowledge of the children's medical condition [4]. In addition, as a communicator, the nurse must be capable of communicating clearly and accurately in order to meet the client's needs [5]. Effective communication is needed to foster a trustworthy relationship between children experiencing lifethreatening conditions, their parents, and the health care team because confusing or incomplete information might be distressing for the health care team and families also [17]. Nurses have a crucial role in facilitating communication among families and the health care team. Nurses, as an informer are liable to provide correct information regarding patients to doctors and family members [8]. The role of nurses as a counselor requires that they must be able to overcome anxiety in parents and appease the family. Nurses should also pay attention to communicate information with empathy so that parents accept their child's condition with positivity. It is highly demanding for nurses since they also see the patients' condition and the emotional struggles that their families experience and get emotionally connected [18]. Nurturing trust among the family and the health care team with good communication at the end of a child's life could help ensure that the child receives the best proper care [19]. Continuously providing information about the situations of dying children to their family members is also one of the roles of nurses [20]. In addition, many children with terminal conditions experience sickness for a long time. Therefore, as aninformer, nurses have to communicate honestly. While discussing complicated issues, nurses are overt to the indirect comments from child or family that express uncertainty or concerns about the direction of care and nurses have to answer the questions honestly, and if nurses do not know the answer, they have to convince the family that they will arrange a discussion with the physician [1,21]. 
What care conversations require is factual information, and a nurse might say the same thing in a different way because, as an information broker, the approach must be congruent with the nurse's role as a supporter [22]. The nurse as a collaborator is also discussed in this study. As a collaborator, the nurse communicates the patients' needs to the other members of the health care team. Conflict between members of the health care team can become a barrier to performing this role; this situation is common and might lead to moral distress among team members [23]. Effective collaboration is required because family relationships with medical staff can be affected by miscommunication, especially when the medical staff give different opinions regarding the prognosis and treatment plan [20]. As information brokers, nurses provide information to the health care team about children and families both before and after discussions about advance care planning, provide detailed information about the treatment to children and families, answer additional questions from children and parents and clarify any misconceptions, and also organize discussions between the family and the health care team and review the topics discussed in previous discussions regarding the care plan [21,24]. As an advocate, the nurse may convey the clients' needs and expectations to other health professionals, such as communicating the clients' views on information provided by the physician. Communication in end-of-life care, which begins when a serious diagnosis has to be discussed and bad news delivered to a child and his or her parents, can be burdensome, as it is often accompanied by further uncertainty and, moreover, by inadequate professional and personal support for the child and parents [18]. The nurse, as an informer, has to communicate to family members the things they need to ask the physician [25]. Nurses also help clients obtain their rights and help families in making decisions. As an advocate for the family, the nurse should provide clear information and explain the implications of the decisions taken; for example, in decision making related to DNR, a nurse should explain the DNR order as well as its implications until the family really understands before making a decision [8]. Although parents may not always be content with how the health care team reveals the diagnostic information [26], parents want the physicians to discuss the options of advance care and to support them in complex decision making; even though nurses might not lead these discussions, they can be influential in advocating for the discussions to take place and for patient preferences to be expressed [21]. Another role of the nurse is as an educator. Based on the results, nurses provide health education to children and parents about care and reiterate the doctor's explanation about the treatment program. Nurses have to interpret what physicians have said and also translate medical terms in order to make the families understand; this is where a nurse becomes an informer [27,28]. Nurses also have to provide health education during discharge, planning for sustainable care at home. A nurse may be the best-placed member of the health care team to assess family comprehension and provide clarification regarding the given information [29]. In order to enhance the care of dying patients and their families, better education and good communication regarding terminal care are needed, with support from staff [25].
Based on the results of the study, while providing care to terminally ill children as an information broker, nurses provide direct care to patients with good communication and detailed information related to nursing interventions. Parents require the health care team to discuss the options of an advance care plan and to help them in complex decision making [21]. As a health care provider, nurses help each individual achieve their optimal welfare [6]. Provision of care to patients with terminal conditions includes physical, psychosocial, developmental, cultural, and spiritual care. Performing the above roles as an informer for children with a terminal illness is an extraordinary challenge for nurses. Nurses must be able to become good information brokers by overcoming possible barriers, and they must also be able to improve their communication skills. Furthermore, to identify the palliative care needs of the child and family, the health care team must be trained to enhance communication among its members; in this way, the incidence of conflict and moral distress can be alleviated [23]. In addition, to help nurses as information brokers, it would be beneficial to develop and provide digital information, such as a brochure or a poster, containing explanations of common issues related to treatment for children with a terminal illness, or answers to frequently asked questions (FAQs) that can be identified through surveys of patients, their parents, and practitioners.

CONCLUSION
To conclude, nurses play several important roles in the lives of children with a terminal illness. These roles include those of a communicator, a counselor, a collaborator, an advocator, an educator, and a care provider. While performing these roles, the nurse becomes an information broker for terminally ill children and their families. Therefore, nurses must acquire good communication skills, knowledge related to the condition of the child, and the ability to work with the other members of the health care team, with the aim of providing holistic and comprehensive care to children with a terminal illness.

ETHICS APPROVAL AND CONSENT TO PARTICIPATE
This study was approved by the ethical committee on health research RSUP Dr. Hasan Sadikin Bandung, Indonesia, with approval number LB.04.01/A05/EC/191/VII/2017.

HUMAN AND ANIMAL RIGHTS
Not applicable.

CONSENT FOR PUBLICATION
Informed consent was obtained from all participants.

AVAILABILITY OF DATA AND MATERIALS
Not applicable.

FUNDING
This research was supported by an internal grant from Universitas Padjadjaran.
Modelling COVID-19 contagion: risk assessment and targeted mitigation policies

We use a spatial epidemic model with demographic and geographical heterogeneity to study the regional dynamics of COVID-19 across 133 regions in England. Our model emphasizes the role of variability of regional outcomes and heterogeneity across age groups and geographical locations, and provides a framework for assessing the impact of policies targeted towards subpopulations or regions. We define a concept of efficiency for comparative analysis of epidemic control policies and show targeted mitigation policies based on local monitoring to be more efficient than country-level or non-targeted measures. In particular, our results emphasize the importance of shielding vulnerable subpopulations and show that targeted policies based on local monitoring can considerably lower fatality forecasts and, in many cases, prevent the emergence of second waves which may occur under centralized policies.

If none of these hold, then modeling asymptomatic infections is irrelevant. The authors should qualify which of these are being modeled, and where the alternative parameters come from. The overall infectiousness and recovery rates can easily be sourced from the literature, but how were the alternative asymptomatic parameters determined? These should be added, with citations, to the master parameter table.

Comments regarding the analysis
1: Overall I think that analyses claiming to produce accurate estimates of real-world parameters need to show parameter/model robustness and also predictive value. At the very least, the authors should acknowledge that absolute numbers of infections or deaths computed by the model are not statements of fact. While I think it is reasonable to compare mitigation strategies within the context of this model, saying that X number of people will die under scenario Y is not appropriate. These numbers are only as accurate as the underlying assumptions about model parameters, which are uncertain. Furthermore, there are many sources of unknown heterogeneity, and this will in general reduce the real-world epidemic size relative to what is predicted. The authors do not make an attempt to fit unmodeled heterogeneity (as is done in https://www.medrxiv.org/content/10.1101/2020.04.27.20081893v3) and I don't think this is necessary here, but it does imply that statements about epidemic size and number of deaths cannot be taken as facts about the real world. Overall, I find this part of the analysis the least compelling, and it distracts from the much more interesting (IMO) analysis of mitigation strategies at the end.
2: A related or separate point is that the following statement, as it stands, is misleading and undermines the otherwise high standard of scientific rigor in this paper. "We estimate that, in absence of social distancing and confinement measures, the number of fatalities in England may have exceeded 216,000 by August 1, 2020, indicating that the lockdown has saved more than 174,000 lives." Besides the more global point (1) above, it is not possible to say that X number of lives have been saved, since this depends on the future number of deaths. Counterfactual claims are generally very difficult to support and should in my opinion be avoided altogether.
3: There are multiple issues with the figures throughout the paper.
-As a general rule, the legend of a figure should be completely self-sufficient in order to understand the content (What are A, H, and F in Figure 19?). The reader should not have to look at the text.
Ideally, the authors would also give one or two sentences describing the point of the figure. For many figures in the paper I don't know what the reader should take away from it.
-Many of the figures are missing legend labels (perhaps a technical error with PDF generation?).
-There are too many figures. I suggest focusing on at most 7 highly compelling figures and making the rest supplemental.
-Some specific comments are below under "minor comments", but as there were a lot of omissions in the figures, the list is likely not exhaustive and I suggest checking each one carefully.
4: The analysis presented in Figure 2 needs to be fleshed out more. As far as I can tell, the method does not return a posterior distribution. Can the authors determine whether these values are indeed statistically different? It would be nice to back up these results with some external data. One of the regions with the highest multiplier (small region NNW of London) doesn't seem to correspond to any known population center; how should we interpret this result?
5: The analysis presented in Figure 5 and Table 4 is not convincing. The authors purport to show some variation in the regional multiplier. Even if it could be statistically shown that these are indeed different (see comment about Figure 2), the interpretation is not straightforward. Certainly, it is premature to call this "compliance", as it is not clear whether these values are determined by relative changes in behavior rather than baseline variation in behavior/social norms. For example, for communities with relatively small rates of recreational socializing, the multiplicative effect of mitigations will also be smaller, since there is some minimal amount of necessary socializing that must persist. Overall I found this section of the paper to be among the least compelling.
6: The discussion of how it is possible to have a long run of 0 cases while the epidemic is progressing (section 4.2) should be shortened or removed. It is obvious that this *can* happen for some parameters (such as a very low detection rate), but it also doesn't appear to be relevant to the real world. The authors bring up S. Korea as an example, but cases never went to 0 there. I am not aware of a location where community-acquired cases went to 0 and then re-emerged from a community source. While there was some speculation about the New Zealand second wave, genomic analysis strongly favors the re-introduction hypothesis.
Authors need to provide source code, input data and examples.
Minor comments
-In equation 4.2 the authors use what they refer to as $\bar{i}f$ -- the (average) infection fatality rate. However, this is strongly dependent on the age distribution of the infected individuals. How was this taken into account?
-The analysis in Figure 9 seems to be considerably influenced by the 7-day periodicity (the correlation is locally maximal when the periodicity is aligned, giving local peaks at 1, 7, and 14). It would be better to correlate the window average.
-Figure 3a is not explained at all.
-In Figure 3b the simulation and the data should span the same interval on the x-axis.
-What is the pink region in Figure 8?
-What is on the x-axis in Figure 10?
-Figure 16 is not referenced in the text. It is unclear what it shows. If these are data fits, it would be better to show a per-region scatter plot against real data. The map display, while visually pleasing, is not informative.
-I don't understand the purpose of Figure 17. Is there supposed to be a difference between (a) and (b)? It is not visible by eye.
Also, legend labels are missing.
-I don't understand Figure 18. There are no labels on the legend. I am assuming the orange line is the fit. Why does the first panel show results of multiple simulations, while only an average is shown for the second panel? Why does the simulation not go for as long as the data?
-I don't see any purpose to Figures 19-22 other than to show that the models are different. There are no systematic differences and the results are not compared to real data.
-A small thing, but in 2.6 I would change "man×day" to "person×day" since that most accurately reflects the correct unit.
-The authors assume that adaptive policies monitor case numbers. (As an aside, I would avoid using $R$ to denote this threshold, to avoid confusion with the reproductive number.) However, it is also possible to monitor hospitalizations, which are less subject to reporting probability. This should be discussed.

Review form: Reviewer 2
Is the manuscript scientifically sound in its present form? Yes
Do you have any ethical concerns with this paper? No
Have you any concerns about statistical analyses in this paper? No
Recommendation? Accept with minor revision (please list in comments)

Comments to the Author(s)
The paper presents an extended compartmental model for SARS-CoV-2 in the UK, stratified by both age and region. The model is then used to assess the effectiveness of different control measures in reducing disease impact. This metric is then considered against the social cost of each set of measures in order to determine a sense of efficiency. I generally found the article to be well presented and insightful, with sensible modelling assumptions throughout. I believe the assessment of control measures will be of interest to others, and I have just a few suggestions for improvement prior to publication. The authors spend a long time in the early parts of the paper (sections 3-6) discussing model fitting and presenting the benefits of a metapopulation approach. While much of this is certainly useful and informative, such models are certainly not new and similar approaches have already been extensively applied to the COVID-19 pandemic. What this paper adds to the discussion is its assessment of control measures, and I wonder whether it would be beneficial to move a lot of this early discussion to appendices or a supplement. The paper is not short and I think this would help give it a stronger message. As far as the model itself is concerned, I thought the greatest deficiency for its use in the context of epidemic control was the lack of any quarantining dynamic. Having symptomatic individuals (or even exposed households) reduce their contact may have a huge impact. Though I recognise no model is perfect, it may be worth acknowledging this for future inclusion. In section 6.1 I was a little unclear on the length of the period of social distancing after lockdown. I assume that, no matter the level of compliance, if this period is long enough the social cost would be greater than that of the lockdown itself? Could this be clarified? In section 7.1 there is a discussion of increasing the testing capacity. Is the effect of this identical to changing the triggering thresholds? Since the testing capacity increases over time, there might be some temporal effects on the triggering thresholds; this may be worth mentioning. Finally, a different colouring (over a larger spectrum) would be beneficial for the map figures; in their current form they appear rather homogeneous.
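Both reviews turn on the same modelling ingredients: an age-structured contact matrix, asymptomatic infections that may differ in infectiousness, recovery, and detectability, and a quarantining dynamic for symptomatic cases. The following is a minimal illustrative sketch of how these ingredients typically fit together in one region of a metapopulation model; it is not the authors' actual model, and the contact matrix C, the relative infectiousness rho of asymptomatic cases, the contact-reduction factor kappa for symptomatic cases, and all other parameter values are placeholder assumptions.

    import numpy as np

    # Illustrative deterministic SEIAR step for one region, stratified by age.
    # S, E, I (symptomatic), A (asymptomatic) and R are vectors over age groups.
    def seiar_step(S, E, I, A, R, N, C, dt=1.0,
                   beta=0.06,    # transmission probability per contact (assumed)
                   rho=0.5,      # relative infectiousness of asymptomatics (assumed)
                   kappa=0.3,    # contact reduction for quarantined symptomatics (assumed)
                   p=0.6,        # probability an infection becomes symptomatic (assumed)
                   sigma=1/5.0,  # 1/latent period in days (assumed)
                   gamma=1/7.0): # 1/infectious period in days (assumed)
        # Force of infection: contacts C weighted by infectious prevalence,
        # with symptomatic contacts damped by kappa and asymptomatic ones by rho.
        lam = beta * C @ ((kappa * I + rho * A) / N)
        new_exposed = lam * S * dt
        new_infectious = sigma * E * dt
        S = S - new_exposed
        E = E + new_exposed - new_infectious
        I = I + p * new_infectious - gamma * I * dt
        A = A + (1 - p) * new_infectious - gamma * A * dt
        R = R + gamma * (I + A) * dt
        return S, E, I, A, R

    # Toy run with three age groups and an assumed symmetric contact matrix.
    N = np.array([2e5, 5e5, 1e5])
    C = np.array([[8.0, 3.0, 1.0],
                  [3.0, 10.0, 2.0],
                  [1.0, 2.0, 4.0]])
    S, E, I, A, R = N - 10.0, np.full(3, 10.0), np.zeros(3), np.zeros(3), np.zeros(3)
    for _ in range(300):
        S, E, I, A, R = seiar_step(S, E, I, A, R, N, C)
    print("final attack rate by age group:", np.round(R / N, 3))

In a metapopulation version, one such step runs per region, coupled through a mobility or commuting matrix. Binning the contact matrix into coarser age groups, as Reviewer 1 notes, averages away exactly the heterogeneity that C encodes; and if kappa and rho are both 1, the symptomatic/asymptomatic split has no effect on the dynamics, which is the point of the request to qualify the assumptions being modeled.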
Decision letter (RSOS-201535.R0)

We hope you are keeping well at this difficult and unusual time. We continue to value your support of the journal in these challenging circumstances. If Royal Society Open Science can assist you at all, please don't hesitate to let us know at the email address below.

Dear Professor Cont,

The Editors assigned to your paper RSOS-201535 "Modelling COVID-19 contagion: risk assessment and targeted mitigation policies" have now received comments from reviewers and would like you to revise the paper in accordance with the reviewer comments and any comments from the Editors. Please note this decision does not guarantee eventual acceptance. We invite you to respond to the comments supplied below and revise your manuscript. Below the referees' and Editors' comments (where applicable) we provide additional requirements. Final acceptance of your manuscript is dependent on these requirements being met. We provide guidance below to help you prepare your revision. We do not generally allow multiple rounds of revision, so we urge you to make every effort to fully address all of the comments at this stage. If deemed necessary by the Editors, your manuscript will be sent back to one or more of the original reviewers for assessment. If the original reviewers are not available, we may invite new reviewers. Please submit your revised manuscript and required files (see below) no later than 21 days from today's (ie 03-Nov-2020) date. Note: the ScholarOne system will 'lock' if submission of the revision is attempted 21 or more days after the deadline. If you do not think you will be able to meet this deadline, please contact the editorial office immediately.

Please note article processing charges apply to papers accepted for publication in Royal Society Open Science (https://royalsocietypublishing.org/rsos/charges). Charges will also apply to papers transferred to the journal from other Royal Society Publishing journals, as well as papers submitted as part of our collaboration with the Royal Society of Chemistry (https://royalsocietypublishing.org/rsos/chemistry). Fee waivers are available but must be requested when you submit your revision (https://royalsocietypublishing.org/rsos/waivers).

My opinion is that this manuscript meets the criteria to be put out to peer review at Open Science; however, I feel I need to give you a fair warning that I expect it to take a very long time. All of the experts in this field are, as I'm sure you are aware, enormously busy doing their own modelling, analysis and forecasting of the ongoing pandemic and government response. Recent past experience suggests that it will be very difficult to find qualified reviewers who are able to give their time to refereeing your manuscript. I would suggest that if you want your work to influence policy making in the UK, then you need to also pursue other routes. For example, consider contacting members of the RAMP project: https://royalsociety.org/topics-policy/Health%20and%20wellbeing/ramp/

Best wishes,
Tim Rogers

Reviewer comments to Author:

Reviewer: 1
Comments to the Author(s)
This paper presents a SIR-like model of COVID dynamics in the UK that incorporates social structure and geographic heterogeneity. Overall, the model is well put together, and the consideration the authors give to spatial heterogeneity is a valuable contribution.
The authors use this model to make a medley of different points, including: (1) estimates of real-world values (such as the total number of people infected), (2) counterfactual estimates (how many lives the lockdown saved), (3) theoretical considerations (how many days it is possible to observe 0 cases while the epidemic is ongoing), (4) heterogeneity in model parameters across regions, and (5) effects of different intervention strategies. Of these 5 points, (5) is by far the most compelling. The authors do a thorough analysis of mitigation strategies and also quantify the social cost, which has seen little quantitative discussion in related work. As it stands, the manuscript is very disorganized. Points 1-4 should be either shortened/removed or considerably strengthened and/or turned into separate manuscripts. With regards to the specific analyses (1-4), my criticism in brief is:
(1) Estimates of real-world values should be subjected to extensive robustness analysis and predictive (forecasting) validation. I believe this is beyond the scope of this paper.
(2) Counterfactual estimates are generally difficult to support. Since the epidemic is not over, we cannot purport to know how many lives were saved by the lockdown, since it depends on the future epidemic trajectory.
(3) The discussion about the possible number of days with 0 infections is unlikely to be real-world relevant (see discussion below) and doesn't leverage the UK-specific model. If the authors choose to pursue this, it is better to spin this off into another paper that includes theoretical analysis similar to (https://personalpages.manchester.ac.uk/staff/thomas.house/blog/why-zero-is-sohard.html)
(4) The spatial heterogeneity claims need to be strengthened with statistical analysis
Major comments about the model itself:
1: The authors need a very clear table of all the parameters of the model and whether these are fixed or estimated (an extended Table 3), and this should come much earlier in the paper.
2: I don't understand why the contact matrix from Mossong et al was binned into coarser age groups than the 5-year groups in the original data. This seems entirely unnecessary for the purpose of this model, yet incurs the cost of considerably under-estimating heterogeneity and thus overestimating epidemic size. I am in particular concerned about the binning of 20-60 year olds into one group. The number of contacts varies by several fold within that age range. The authors should re-run their analysis with the original bins. I expect that many of the key results regarding mitigation will be strengthened.
3: Much of the results and the discussion is devoted to asymptomatic infections. The authors state at the beginning: "The probability p that an infected individual develops symptoms is an important parameter for epidemic dynamics." It is not clear to me why this is universally the case without some qualifying assumption. These assumptions are never stated. The importance of asymptomatic cases has to be attributed to:
-Asymptomatic individuals are less infectious
-Asymptomatic individuals recover more quickly
-Asymptomatic infections are less detectable.
If none of these hold, then modeling asymptomatic infections is irrelevant. The authors should qualify which of these are being modeled, and where the alternative parameters come from. The overall infectiousness and recovery rates can easily be sourced from the literature, but how were the alternative asymptomatic parameters determined?
These should be added, with citations, to the master parameter table.
Comments regarding the analysis
1: Overall I think that analyses claiming to produce accurate estimates of real-world parameters need to show parameter/model robustness and also predictive value. At the very least, the authors should acknowledge that absolute numbers of infections or deaths computed by the model are not statements of fact. While I think it is reasonable to compare mitigation strategies within the context of this model, saying that X number of people will die under scenario Y is not appropriate. These numbers are only as accurate as the underlying assumptions about model parameters, which are uncertain. Furthermore, there are many sources of unknown heterogeneity, and this will in general reduce the real-world epidemic size relative to what is predicted. The authors do not make an attempt to fit unmodeled heterogeneity (as is done in https://www.medrxiv.org/content/10.1101/2020.04.27.20081893v3) and I don't think this is necessary here, but it does imply that statements about epidemic size and number of deaths cannot be taken as facts about the real world. Overall, I find this part of the analysis the least compelling, and it distracts from the much more interesting (IMO) analysis of mitigation strategies at the end.
2: A related or separate point is that the following statement, as it stands, is misleading and undermines the otherwise high standard of scientific rigor in this paper. "We estimate that, in absence of social distancing and confinement measures, the number of fatalities in England may have exceeded 216,000 by August 1, 2020, indicating that the lockdown has saved more than 174,000 lives." Besides the more global point (1) above, it is not possible to say that X number of lives have been saved, since this depends on the future number of deaths. Counterfactual claims are generally very difficult to support and should in my opinion be avoided altogether.
3: There are multiple issues with the figures throughout the paper.
-As a general rule, the legend of a figure should be completely self-sufficient in order to understand the content (What are A, H, and F in Figure 19?). The reader should not have to look at the text. Ideally, the authors would also give one or two sentences describing the point of the figure. For many figures in the paper I don't know what the reader should take away from it.
-Many of the figures are missing legend labels (perhaps a technical error with PDF generation?).
-There are too many figures. I suggest focusing on at most 7 highly compelling figures and making the rest supplemental.
-Some specific comments are below under "minor comments", but as there were a lot of omissions in the figures, the list is likely not exhaustive and I suggest checking each one carefully.
4: The analysis presented in Figure 2 needs to be fleshed out more. As far as I can tell, the method does not return a posterior distribution. Can the authors determine whether these values are indeed statistically different? It would be nice to back up these results with some external data. One of the regions with the highest multiplier (small region NNW of London) doesn't seem to correspond to any known population center; how should we interpret this result?
5: The analysis presented in Figure 5 and Table 4 is not convincing. The authors purport to show some variation in the regional multiplier. Even if it could be statistically shown that these are indeed different (see comment about Figure 2), the interpretation is not straightforward.
Certainly, it is premature to call this "compliance", as it is not clear whether these values are determined by relative changes in behavior rather than baseline variation in behavior/social norms. For example, for communities with relatively small rates of recreational socializing, the multiplicative effect of mitigations will also be smaller, since there is some minimal amount of necessary socializing that must persist. Overall I found this section of the paper to be among the least compelling.
6: The discussion of how it is possible to have a long run of 0 cases while the epidemic is progressing (section 4.2) should be shortened or removed. It is obvious that this *can* happen for some parameters (such as a very low detection rate), but it also doesn't appear to be relevant to the real world. The authors bring up S. Korea as an example, but cases never went to 0 there. I am not aware of a location where community-acquired cases went to 0 and then re-emerged from a community source. While there was some speculation about the New Zealand second wave, genomic analysis strongly favors the re-introduction hypothesis.
Authors need to provide source code, input data and examples.
Minor comments
-In equation 4.2 the authors use what they refer to as $\bar{i}f$ -- the (average) infection fatality rate. However, this is strongly dependent on the age distribution of the infected individuals. How was this taken into account?
-The analysis in Figure 9 seems to be considerably influenced by the 7-day periodicity (the correlation is locally maximal when the periodicity is aligned, giving local peaks at 1, 7, and 14). It would be better to correlate the window average.
-Figure 3a is not explained at all.
-In Figure 3b the simulation and the data should span the same interval on the x-axis.
-What is the pink region in Figure 8?
-What is on the x-axis in Figure 10?
-Figure 16 is not referenced in the text. It is unclear what it shows. If these are data fits, it would be better to show a per-region scatter plot against real data. The map display, while visually pleasing, is not informative.
-I don't understand the purpose of Figure 17. Is there supposed to be a difference between (a) and (b)? It is not visible by eye. Also, legend labels are missing.
-I don't understand Figure 18. There are no labels on the legend. I am assuming the orange line is the fit. Why does the first panel show results of multiple simulations, while only an average is shown for the second panel? Why does the simulation not go for as long as the data?
-I don't see any purpose to Figures 19-22 other than to show that the models are different. There are no systematic differences and the results are not compared to real data.
-A small thing, but in 2.6 I would change "man×day" to "person×day" since that most accurately reflects the correct unit.
-The authors assume that adaptive policies monitor case numbers. (As an aside, I would avoid using $R$ to denote this threshold, to avoid confusion with the reproductive number.) However, it is also possible to monitor hospitalizations, which are less subject to reporting probability. This should be discussed.

Reviewer: 2
Comments to the Author(s)
The paper presents an extended compartmental model for SARS-CoV-2 in the UK, stratified by both age and region. The model is then used to assess the effectiveness of different control measures in reducing disease impact. This metric is then considered against the social cost of each set of measures in order to determine a sense of efficiency.
I generally found the article to be well presented and insightful, with sensible modelling assumptions throughout. I believe the assessment of control measures will be of interest to others, and I have just a few suggestions for improvement prior to publication. The authors spend a long time in the early parts of the paper (sections 3-6) discussing model fitting and presenting the benefits of a metapopulation approach. While much of this is certainly useful and informative, such models are certainly not new and similar approaches have already been extensively applied to the COVID-19 pandemic. What this paper adds to the discussion is its assessment of control measures, and I wonder whether it would be beneficial to move a lot of this early discussion to appendices or a supplement. The paper is not short and I think this would help give it a stronger message. As far as the model itself is concerned, I thought the greatest deficiency for its use in the context of epidemic control was the lack of any quarantining dynamic. Having symptomatic individuals (or even exposed households) reduce their contact may have a huge impact. Though I recognise no model is perfect, it may be worth acknowledging this for future inclusion. In section 6.1 I was a little unclear on the length of the period of social distancing after lockdown. I assume that, no matter the level of compliance, if this period is long enough the social cost would be greater than that of the lockdown itself? Could this be clarified? In section 7.1 there is a discussion of increasing the testing capacity. Is the effect of this identical to changing the triggering thresholds? Since the testing capacity increases over time, there might be some temporal effects on the triggering thresholds; this may be worth mentioning (a sketch of such a threshold rule follows this letter). Finally, a different colouring (over a larger spectrum) would be beneficial for the map figures; in their current form they appear rather homogeneous.

===PREPARING YOUR MANUSCRIPT===

Your revised paper should include the changes requested by the referees and Editors of your manuscript. You should provide two versions of this manuscript, and both versions must be provided in an editable format: one version identifying all the changes that have been made (for instance, in coloured highlight, in bold text, or tracked changes); a 'clean' version of the new manuscript that incorporates the changes made, but does not highlight them. This version will be used for typesetting if your manuscript is accepted. Please ensure that any equations included in the paper are editable text and not embedded images. Please ensure that you include an acknowledgements section before your reference list/bibliography. This should acknowledge anyone who assisted with your work, but does not qualify as an author per the guidelines at https://royalsociety.org/journals/ethicspolicies/openness/. While not essential, it will speed up the preparation of your manuscript proof if accepted if you format your references/bibliography in Vancouver style (please see https://royalsociety.org/journals/authors/author-guidelines/#formatting). You should include DOIs for as many of the references as possible. If you have been asked to revise the written English in your submission as a condition of publication, you must do so, and you are expected to provide evidence that you have received language editing support.
The journal would prefer that you use a professional language editing service and provide a certificate of editing, but a signed letter from a colleague who is a native speaker of English is acceptable. Note the journal has arranged a number of discounts for authors using professional language editing services (https://royalsociety.org/journals/authors/benefits/language-editing/).

To revise your manuscript, log into https://mc.manuscriptcentral.com/rsos and enter your Author Centre - this may be accessed by clicking on "Author" in the dark toolbar at the top of the page (just below the journal name). You will find your manuscript listed under "Manuscripts with Decisions". Under "Actions", click on "Create a Revision". Attach your point-by-point response to referees and Editors at Step 1 'View and respond to decision letter'. This document should be uploaded in an editable file type (.doc or .docx are preferred). This is essential.

Please ensure that you include a summary of your paper at Step 2 'Type, Title, & Abstract'. This should be no more than 100 words to explain to a non-scientific audience the key findings of your research. This will be included in a weekly highlights email circulated by the Royal Society press office to national UK, international, and scientific news outlets to promote your work.

At Step 3 'File upload' you should include the following files:
--Your revised manuscript in editable file format (.doc, .docx, or .tex preferred). You should upload two versions: 1) One version identifying all the changes that have been made (for instance, in coloured highlight, in bold text, or tracked changes); 2) A 'clean' version of the new manuscript that incorporates the changes made, but does not highlight them.
--If you are requesting a discretionary waiver for the article processing charge, the waiver form must be included at this step.
--If you are providing image files for potential cover images, please upload these at this step, and inform the editorial office you have done so. You must hold the copyright to any image provided.
--A copy of your point-by-point response to referees and Editors. This will expedite the preparation of your proof.

At Step 6 'Details & comments', you should review and respond to the queries on the electronic submission form. In particular, we would ask that you do the following:
--Ensure that your data access statement meets the requirements at https://royalsociety.org/journals/authors/author-guidelines/#data. You should ensure that you cite the dataset in your reference list. If you have deposited data etc in the Dryad repository, please include both the 'For publication' link and 'For review' link at this stage.
--If you are requesting an article processing charge waiver, you must select the relevant waiver option (if requesting a discretionary waiver, the form should have been uploaded at Step 3 'File upload' above).
--If you have uploaded ESM files, please ensure you follow the guidance at https://royalsociety.org/journals/authors/author-guidelines/#supplementary-material to include a suitable title and informative caption. An example of appropriate titling and captioning may be found at https://figshare.com/articles/Table_S2_from_Is_there_a_trade-off_between_peak_performance_and_performance_breadth_across_temperatures_for_aerobic_scope_in_teleost_fishes_/3843624.

At Step 7 'Review & submit', you must view the PDF proof of the manuscript before you will be able to submit the revision. Note: if any parts of the electronic submission form have not been completed, these will be noted by red message boxes.
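Reviewer 1's remark that adaptive policies could monitor hospitalizations rather than reported cases, and Reviewer 2's question about testing capacity versus triggering thresholds, both concern the feedback rule behind locally triggered mitigation. The sketch below shows one plausible form of such a rule, with hypothetical threshold names and values rather than anything taken from the paper: a region enters restrictions when its detected weekly incidence per 100,000 crosses an upper threshold and exits only when it falls below a lower one, the hysteresis preventing rapid switching.

    # Hysteresis trigger for local restrictions (illustrative; thresholds assumed).
    def update_restrictions(active, weekly_cases, population,
                            on_threshold=50.0,    # cases per 100k that trigger restrictions
                            off_threshold=20.0):  # cases per 100k that release them
        """Return True if restrictions should be active this week."""
        incidence = 1e5 * weekly_cases / population
        if active:
            return incidence >= off_threshold  # stay restricted until well below the trigger
        return incidence >= on_threshold       # otherwise trigger only above the upper bound

    # Example: a region of 300,000 people over six weeks of detected cases.
    population, active = 300_000, False
    for week, cases in enumerate([60, 120, 210, 150, 70, 40], start=1):
        active = update_restrictions(active, cases, population)
        print(f"week {week}: {1e5 * cases / population:.0f} per 100k ->",
              "restricted" if active else "open")

Because only detected cases enter the rule, a change in testing capacity rescales the observed incidence and therefore acts much like a change in the trigger threshold, which is one way to read Reviewer 2's question; monitoring hospitalizations would replace the input signal without changing the structure of the rule.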
Recommendation? Accept with minor revision (please list in comments)

Comments to the Author(s)
I thank the authors for taking the time to address my comments, and find the revised manuscript to be greatly improved. Their response however has highlighted to me one additional issue. It is widely understood that an asymptomatic individual will spread the virus to a greatly reduced degree compared to a symptomatic case. I had assumed this was incorporated into the model via the parameter kappa in equation 2.2. Following clarification on this point, however, it seems that this parameter is in fact used to reduce spread from symptomatic individuals via a quarantine dynamic. The issue with neglecting to include the reduction in transmission from asymptomatics (which may be as great as a factor of 10) is that it results in a far higher weighting of the importance of spread amongst younger age groups with high degrees of contact. Due to the considerable amount of uncertainty we still have regarding all disease parameters for COVID-19, I don't believe this issue entirely invalidates the results, but I do think it significant enough that it should be highlighted in the text as a limitation of the model.

Decision letter (RSOS-201535.R1)

We hope you are keeping well at this difficult and unusual time. We continue to value your support of the journal in these challenging circumstances. If Royal Society Open Science can assist you at all, please don't hesitate to let us know at the email address below.

Dear Professor Cont,

On behalf of the Editors, we are pleased to inform you that your Manuscript RSOS-201535.R1 "Modelling COVID-19 contagion: risk assessment and targeted mitigation policies" has been accepted for publication in Royal Society Open Science subject to minor revision in accordance with the referees' reports. Please find the referees' comments along with any feedback from the Editors below my signature. We invite you to respond to the comments and revise your manuscript. Below the referees' and Editors' comments (where applicable) we provide additional requirements. Final acceptance of your manuscript is dependent on these requirements being met. We provide guidance below to help you prepare your revision. Please submit your revised manuscript and required files (see below) no later than 7 days from today's (ie 02-Mar-2021) date. Note: the ScholarOne system will 'lock' if submission of the revision is attempted 7 or more days after the deadline. If you do not think you will be able to meet this deadline, please contact the editorial office immediately.

Please note article processing charges apply to papers accepted for publication in Royal Society Open Science (https://royalsocietypublishing.org/rsos/charges). Charges will also apply to papers transferred to the journal from other Royal Society Publishing journals, as well as papers submitted as part of our collaboration with the Royal Society of Chemistry (https://royalsocietypublishing.org/rsos/chemistry). Fee waivers are available but must be requested when you submit your revision (https://royalsocietypublishing.org/rsos/waivers).

Thank you for submitting your manuscript to Royal Society Open Science and we look forward to receiving your revision. If you have any questions at all, please do not hesitate to get in touch.
If it is possible/practical, it would be good to re-run some of the analysis with a modified infection rate for asymptomatic spreaders to check robustness to this parameter. Otherwise, a suitable modification to the text to highlight this important caveat will be needed.

Reviewer comments to Author:

Reviewer: 2

Comments to the Author(s)
I thank the authors for taking the time to address my comments, and find the revised manuscript to be greatly improved. Their response however has highlighted to me one additional issue. It is widely understood that an asymptomatic individual will spread the virus to a greatly reduced degree compared to a symptomatic case. I had assumed this was incorporated into the model via the parameter kappa in equation 2.2. Following clarification on this point, however, it seems that this parameter is in fact used to reduce spread from symptomatic individuals via a quarantine dynamic. The issue with neglecting to include the reduction in transmission from asymptomatics (which may be as great as a factor of 10) is that it results in a far higher weighting of the importance of spread amongst younger age groups with high degrees of contact. Due to the considerable amount of uncertainty we still have regarding all disease parameters for Covid, I don't believe this issue entirely invalidates the results, but I do think it is significant enough that it should be highlighted in the text as a limitation of the model.

===PREPARING YOUR MANUSCRIPT===

Your revised paper should include the changes requested by the referees and Editors of your manuscript. You should provide two versions of this manuscript and both versions must be provided in an editable format: one version identifying all the changes that have been made (for instance, in coloured highlight, in bold text, or tracked changes); a 'clean' version of the new manuscript that incorporates the changes made, but does not highlight them. This version will be used for typesetting.

Please ensure that any equations included in the paper are editable text and not embedded images.

Please ensure that you include an acknowledgements section before your reference list/bibliography. This should acknowledge anyone who assisted with your work, but does not qualify as an author per the guidelines at https://royalsociety.org/journals/ethics-policies/openness/.

While not essential, it will speed up the preparation of your manuscript proof if you format your references/bibliography in Vancouver style (please see https://royalsociety.org/journals/authors/author-guidelines/#formatting). You should include DOIs for as many of the references as possible.

If you have been asked to revise the written English in your submission as a condition of publication, you must do so, and you are expected to provide evidence that you have received language editing support. The journal would prefer that you use a professional language editing service and provide a certificate of editing, but a signed letter from a colleague who is a native speaker of English is acceptable. Note the journal has arranged a number of discounts for authors using professional language editing services (https://royalsociety.org/journals/authors/benefits/language-editing/).

===PREPARING YOUR REVISION IN SCHOLARONE===

To revise your manuscript, log into https://mc.manuscriptcentral.com/rsos and enter your Author Centre; this may be accessed by clicking on "Author" in the dark toolbar at the top of the page (just below the journal name).
You will find your manuscript listed under "Manuscripts with Decisions". Under "Actions", click on "Create a Revision".

Attach your point-by-point response to referees and Editors at Step 1 'View and respond to decision letter'. This document should be uploaded in an editable file type (.doc or .docx are preferred). This is essential.

Please ensure that you include a summary of your paper at Step 2 'Type, Title, & Abstract'. This should be no more than 100 words to explain to a non-scientific audience the key findings of your research. This will be included in a weekly highlights email circulated by the Royal Society press office to national UK, international, and scientific news outlets to promote your work.

At Step 3 'File upload' you should include the following files:
-- Your revised manuscript in editable file format (.doc, .docx, or .tex preferred). You should upload two versions: 1) One version identifying all the changes that have been made (for instance, in coloured highlight, in bold text, or tracked changes); 2) A 'clean' version of the new manuscript that incorporates the changes made, but does not highlight them.
-- If you are requesting a discretionary waiver for the article processing charge, the waiver form must be included at this step.
-- If you are providing image files for potential cover images, please upload these at this step, and inform the editorial office you have done so. You must hold the copyright to any image provided.
-- A copy of your point-by-point response to referees and Editors. This will expedite the preparation of your proof.

At Step 6 'Details & comments', you should review and respond to the queries on the electronic submission form. In particular, we would ask that you do the following:
-- Ensure that your data access statement meets the requirements at https://royalsociety.org/journals/authors/author-guidelines/#data. You should ensure that you cite the dataset in your reference list. If you have deposited data etc. in the Dryad repository, please only include the 'For publication' link at this stage. You should remove the 'For review' link.
-- If you are requesting an article processing charge waiver, you must select the relevant waiver option (if requesting a discretionary waiver, the form should have been uploaded at Step 3 'File upload' above).
-- If you have uploaded ESM files, please ensure you follow the guidance at https://royalsociety.org/journals/authors/author-guidelines/#supplementary-material to include a suitable title and informative caption. An example of appropriate titling and captioning may be found at https://figshare.com/articles/Table_S2_from_Is_there_a_trade-off_between_peak_performance_and_performance_breadth_across_temperatures_for_aerobic_scope_in_teleost_fishes_/3843624.

At Step 7 'Review & submit', you must view the PDF proof of the manuscript before you will be able to submit the revision. Note: if any parts of the electronic submission form have not been completed, these will be noted by red message boxes.

Decision letter (RSOS-201535.R2)

We hope you are keeping well at this difficult and unusual time. We continue to value your support of the journal in these challenging circumstances. If Royal Society Open Science can assist you at all, please don't hesitate to let us know at the email address below.

Dear Professor Cont,

It is a pleasure to accept your manuscript entitled "Modelling COVID-19 contagion: risk assessment and targeted mitigation policies" in its current form for publication in Royal Society Open Science.
COVID-19 rapid publication process: We are taking steps to expedite the publication of research relevant to the pandemic. If you wish, you can opt to have your paper published as soon as it is ready, rather than waiting for it to be published on the scheduled Wednesday. This means your paper will not be included in the weekly media round-up which the Society sends to journalists ahead of publication. However, it will still appear in the COVID-19 Publishing Collection which journalists will be directed to each week (https://royalsocietypublishing.org/topic/special-collections/novel-coronavirus-outbreak). If you wish to have your paper considered for immediate publication, or to discuss further, please notify openscience_proofs@royalsociety.org and press@royalsociety.org when you respond to this email.

You can expect to receive a proof of your article in the near future. Please contact the editorial office (openscience@royalsociety.org) and the production office (openscience_proofs@royalsociety.org) to let us know if you are likely to be away from e-mail contact; if you are going to be away, please nominate a co-author (if available) to manage the proofing process, and ensure they are copied into your email to the journal. Due to rapid publication and an extremely tight schedule, if comments are not received, your paper may experience a delay in publication.

Please see the Royal Society Publishing guidance on how you may share your accepted author manuscript at https://royalsociety.org/journals/ethics-policies/media-embargo/. After publication, some additional ways to effectively promote your article can also be found here: https://royalsociety.org/blog/2020/07/promoting-your-latest-paper-and-tracking-your-results/.

Authors' response to referees

We thank both reviewers for their detailed and constructive comments and insights, which we have carefully taken into account when revising our manuscript. In summary, we have
• re-estimated the model with 16 age groups instead of only 4 age groups and redone all the analysis with this more granular model, as suggested by Reviewer 1;
• removed 15 figures from the paper and shortened it by 10 pages;
• removed references to absolute numbers of fatalities in Section 1;
• removed claims related to variation of compliance levels for social distancing across regions;
• updated our estimate of the reporting probability in Section 4 using more recent data, which confirms the increase in reporting probabilities following an increase in testing across the UK;
• deleted the section on 'Counterfactual analysis: no intervention' (previously Section 5);
• corrected several typos pointed out by reviewers.

We hope the revised, and much improved, version will be deemed suitable for publication in Open Science. We have also made available an interactive online app implementing the model described in the paper, which may be interesting as an online add-on for the reviewers and readers for further exploring the model: http://covid19.kotlicki.pl.

The following sections contain detailed responses to comments by the reviewers. Reviewers' remarks are in italics, while our responses are in regular font.

Response to Reviewer 1

This paper presents a SIR-like model of COVID dynamics in the UK that incorporates social structure and geographic heterogeneity. Overall, the model is well put together and the consideration the authors give to spatial heterogeneity is a valuable contribution.
The authors use this model to make a medley of different points including: (1) estimates of real-world values (such as the total number of people infected), (2) counterfactual estimates (how many lives the lockdown saved), (3) theoretical considerations (how many days it is possible to observe 0 cases while the epidemic is ongoing), (4) heterogeneity in model parameters across regions, and (5) effects of different intervention strategies. Of these 5 points, (5) is by far the most compelling. The authors do a thorough analysis of mitigation strategies and also quantify the social cost, which has seen little quantitative discussion in related work. [...] Points 1-4 should be either shortened/removed or considerably strengthened and/or turned into separate manuscripts.

We thank the reviewer for the careful reading of the manuscript and the detailed feedback. We have considerably shortened points (1), (2) and (4) as suggested to better emphasise (5). We have left (3) as it responds to other points raised by the reviewer (see below). Overall this has shortened the paper by 10 pages.

With regards to specific analyses (1-4) my criticism in brief is:

1. Estimates of real-world values should be subjected to extensive robustness analysis and predictive (forecasting) validation. I believe this is beyond the scope of this paper.

We agree that a robustness check on all parameters is not feasible in a single paper, but the paper does contain predictive validation, region by region, using March 15 - May 31 as the estimation period and June 1 - Aug 1 as the out-of-sample validation period. Robustness analysis is performed on several (though not all) parameters, including the asymptomatic ratio and the level of granularity (see the last section of this document).

2. Counterfactual estimates are generally difficult to support. Since the epidemic is not over we cannot purport to know how many lives were saved by the lockdown since it depends on the future epidemic trajectory.

We agree and we have toned down this aspect, and deleted what was previously Section 5 (Counterfactual analysis), also following your other remarks and those of Reviewer 2. We have now emphasised comparative analysis of policies rather than absolute forecasts.

3. The discussion about the possible number of days with 0 infections is unlikely to be real-world relevant (see discussion below) and doesn't leverage the UK-specific model. If the authors choose to pursue this it is better to spin this off into another paper that includes theoretical analysis similar to (https://personalpages.manchester.ac.uk/staff/thomas.house/blog/why-zero-is-so-hard.html)

A key aspect of the feedback policies is the fact that the state variables are not directly observed but 'sampled' with some reporting probability π < 1. The issue here is one of control with partial observation, whereas the analysis alluded to by the reviewer is related to the actual extinction probability of the infection, which is not what we discuss here.

4. The spatial heterogeneity claims need to be strengthened with statistical analysis.

Our goal is not to demonstrate specific patterns in spatial heterogeneity; we have modified the text to remove any assertion which may be interpreted as a 'significant difference' in behavior or compliance across regions.

Major comments about the model itself

1. The authors need a very clear table of all the parameters of the model and whether these are fixed or estimated (an extended Table 3), and this should come much earlier in the paper.

Point taken.
We have grouped all model parameters in Tables 4 and 5, indicating the sources (if any) or the Appendix where estimates are given.

2. I don't understand why the contact matrix from Mossong et al was binned into coarser age groups than the 5-year groups in the original data. This seems entirely unnecessary for the purpose of this model yet incurs the cost of considerably under-estimating heterogeneity and thus over-estimating epidemic size. I am in particular concerned about the binning of 20-60 year olds into one group. The number of contacts varies severalfold within that age range. The authors should re-run their analysis with the original bins. I expect that many of the key results regarding mitigation will be strengthened.

Following this remark we have re-done all the analysis with the original (16) 5-year age groups used by Mossong et al. Interestingly, the results are quite similar to our original model with 4 age groups. However, some quantitative differences may emerge when assessing the impact of targeted policies. Figure 1 shows an example of such differences. (Details of the models compared are given in Section 3 below.) The dynamics of infections, cases and fatalities are rather insensitive to the demographic granularity, but some differences in the impact of targeted policies are visible. In the revised version, we have followed the recommendation of the reviewer and retained the more granular model with 16 age groups.

3. Much of the results and the discussion is devoted to asymptomatic infections. The authors state at the beginning: "The probability p that an infected individual develops symptoms is an important parameter for epidemic dynamics." It is not clear to me why this is universally the case without some qualifying assumption. These assumptions are never stated. The importance of asymptomatic cases has to be attributed to:
- Asymptomatic individuals are less infectious
- Asymptomatic individuals recover more quickly
- Asymptomatic infections are less detectable.
If none of these hold then modeling asymptomatic infections is irrelevant. The authors should qualify which of these are being modeled, and where the alternative parameters come from. The overall infectiousness and recovery rates can be easily sourced from the literature, but how were the alternative asymptomatic parameters determined? These should be added with citations to the master parameter table.

In our model, asymptomatic infections are not detectable, so asymptomatic individuals do not self-isolate and maintain the same level of social contact as non-infected ones. This effect is represented by the coefficient κ < 1 in the force of infection, which represents a reduction of contacts for symptomatic individuals, an effect not present for asymptomatic individuals. This clearly affects epidemic dynamics. Independently of this effect, the presence of asymptomatic individuals affects compartmental dynamics as the sum over compartments is constant. The source for the asymptomatic ratios is the ONS study [5], which shows that a high proportion (possibly more than 70%) of COVID-19 carriers may be asymptomatic, so clearly this is a relevant feature to be included in the model. This study is based on a larger sample than previous estimates such as the Diamond Princess study, and the estimates are adjusted for the UK population distribution, which is not the case for previous studies. There is a wide range of estimates for symptomatic ratios (various estimates are discussed in Sec. 3.3), and we have carried out a corresponding robustness analysis systematically, computing all projections under 'high' and 'low' assumptions for the asymptomatic ratios.
As shown in Sections 5 and 6, the results do depend on the asymptomatic ratio, showing that this is an important parameter.

Comments regarding the analysis

1. Overall I think that analyses claiming to produce accurate estimates of real-world parameters need to show parameter/model robustness and also predictive value. At the very least, the authors should acknowledge that absolute numbers of infections or deaths computed by the model are not statements of fact. While I think it is reasonable to compare mitigation strategies within the context of this model, saying that X number of people will die under scenario Y is not appropriate. These numbers are only as accurate as the underlying assumptions about model parameters, which are uncertain. Furthermore there are many sources of unknown heterogeneity, and this will in general reduce the real-world epidemic size over what is predicted. The authors do not make an attempt to fit unmodeled heterogeneity (as is done in https://www.medrxiv.org/content/10.1101/2020.04.27.20081893v3) and I don't think this is necessary here, but it does imply that statements about epidemic size and number of deaths cannot be taken as facts about the real world. Overall, I find this part of the analysis the least compelling and it distracts from the much more interesting (IMO) analysis of mitigation strategies at the end.

We acknowledge that the main outputs of the model are not the absolute projections for fatalities or infections but the comparative analysis of the policies. We have removed such statements from the 'Main findings' section. However, a quantitative comparison cannot be done without stating at some point numerical outcomes for various policies, and this is the sole purpose of the numerical values reported in various tables in the paper.

2. A related or separate point is that the following statement, as it stands, is misleading and undermines the otherwise high standard of scientific rigor in this paper. "We estimate that, in absence of social distancing and confinement measures, the number of fatalities in England may have exceeded 216,000 by August 1, 2020, indicating that the lockdown has saved more than 174,000 lives." Besides the more global point (1) above, it is not possible to say that X number of lives have been saved since this depends on the future number of deaths. Counterfactual claims are generally very difficult to support and should in my opinion be avoided altogether.

We concede this point and have removed this statement from Section 1 and deleted the section on 'counterfactual analysis' in its entirety.

3. There are multiple issues with the figures throughout the paper.

- As a general rule the legend of the figure should be completely self-sufficient in order to understand the content (what are A, H, and F in Figure 19?). The reader should not have to look at the text. Ideally, the authors would also give one or two sentences describing the point of the figure. For many figures in the paper I don't know what the reader should take away from it.

We apologise for the lack of clarity in the figures. We have added legends and detailed captions to all figures.

- Many of the figures are missing legend labels (perhaps a technical error with PDF generation?)

Legend labels have now been added to all figures.

- There are too many figures.
I suggest focusing on at most 7 highly compelling figures and making the rest supplemental.

We have deleted 14 figures, reducing the total number of figures to 30.

4. The analysis presented in Figure 2 needs to be fleshed out more. As far as I can tell the method does not return a posterior distribution. Can the authors determine if these values are indeed statistically different? It would be nice to back up these results with some external data. One of the regions with the highest multiplier (small region NNW of London) doesn't seem to correspond to any known population center; how should we interpret this result?

d_r is estimated by indirect inference [3], which does yield confidence intervals, but we have not reported them here as it does not serve any purpose for our analysis. We have added Table 2, which lists the regions with the highest regional adjustment factors, as well as some of their characteristics (inward/outward mobility, population density). The regions with the highest multiplier are South Teesside, Croydon and Solihull. As shown in Table 2, regions with high d_r may correspond to regions with high commuting rates or high population density. But it is not the purpose of our study to focus on regional characteristics, which would require other types of socio-economic data, and we do not claim to explain these differences.

5. The analysis presented in Figure 5 and Table 4 is not convincing. The authors purport to show some variation in the regional multiplier. Even if it could be statistically shown that these are indeed different (see comment about Figure 2), the interpretation is not straightforward. Certainly, it is premature to call this "compliance", as it is not clear if these values are determined by relative changes in behavior rather than baseline variation in behavior/social norms. For example, for communities with relatively small rates of recreational socializing, the multiplicative effect of mitigations will also be smaller since there is some minimal amount of necessary socializing that must persist. Overall I found this section of the paper to be among the least compelling.

We have removed Figure 5 as well as any reference to 'compliance', as suggested by the reviewer. We do not claim the regional differences in the l_r values to be significant. In fact, they are all quite close to 10% (representing a 90% drop in social contacts during lockdown).

6. The discussion of how it is possible to have a long run of 0 cases while the epidemic is progressing (section 4.2) should be shortened or removed. It is obvious that this *can* happen for some parameters (such as a very low detection rate) but it also doesn't appear to be relevant to the real world. The authors bring up S. Korea as an example but cases never went to 0 there. I am not aware of a location where community-acquired cases went to 0 and then re-emerged from a community source. While there was some speculation about the New Zealand second wave, genomic analysis strongly favors the re-introduction hypothesis.

This section has been shortened and one figure removed. The parameters used in the examples are not extreme or unrealistic parameters: they are the ones estimated from the data, so we do not think this example is far-fetched. It results from a combination of
• partial observability of the total number of cases (in absence of widespread testing), and
• stochastic dynamics of the model, which lead to the possibility of random flare-ups.
A minimal simulation sketch of this censoring effect is given below.
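To make this concrete, the toy branching-process sketch below shows how long runs of zero reported cases can coexist with ongoing transmission when each true case is reported only with probability π. This is an illustration only: the process, parameter values and function name are assumptions, not the calibrated stochastic model from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def silent_resurgence_prob(pi=0.2, R=1.05, n_init=3,
                           horizon=365, silent_days=60, n_sims=2000):
    """Fraction of runs in which at least `silent_days` consecutive days
    with zero REPORTED cases are later followed by newly reported cases,
    even though true transmission never stopped in between.
    Toy daily Poisson branching process; all numbers are illustrative."""
    hits = 0
    for _ in range(n_sims):
        true_cases, silent_run = n_init, 0
        had_long_silence, resurged = False, False
        for _ in range(horizon):
            if true_cases == 0:
                break  # the epidemic is truly extinct
            reported = rng.binomial(true_cases, pi)  # under-reporting
            if reported == 0:
                silent_run += 1
                had_long_silence |= silent_run >= silent_days
            else:
                if had_long_silence:
                    resurged = True  # flare-up after a long silent run
                silent_run = 0
            true_cases = rng.poisson(R * true_cases)  # next day's cases
        hits += resurged
    return hits / n_sims

print(silent_resurgence_prob())
```

With near-critical transmission the chain can linger at a handful of undetected cases for weeks before flaring up. The 60% figure quoted in the response below comes from the paper's calibrated stochastic model, not from this sketch.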
Note that we are not referring to the situation where cases go to zero but to the situation where cases go undetected: even with a detection probability of 20%, which is quite high, the probability of observing a second peak after 60 days without reported cases is found to be 60%. This is not a small probability and corresponds to the fact that random flare-ups may originate from a small group of undetected cases.

7. Authors need to provide source code, input data and examples.

The source code and data have now been deposited in a public repository on GitHub: https://github.com/RenyuanXu/COVID-19. Moreover, we have implemented an easy-to-use online simulation app for running scenario simulations and comparative analysis of mitigation policies: http://covid19.kotlicki.pl, which readers can use to explore other scenarios/policies than those presented in the paper.

Minor comments

- In equation 4.2 the authors use what they refer to as f, the (average) infection fatality rate. However, this is strongly dependent on the age distribution of the infected individuals. How was this taken into account?

This is a population-weighted average fatality rate averaged across age groups using England demographics. The same value was used in Ferguson et al. [2].

- The analysis in Figure 9 seems to be considerably influenced by the 7-day periodicity (the correlation is locally maximal when the periodicity is aligned, giving local peaks at 1, 7 and 14). It would be better to correlate the window average.

Thanks for this remark. We have redone the averaging as suggested (see new Figure 7).

- Figure 3a is not explained at all.

We have deleted Fig. 3a.

- In Figure 3b the simulation and the data should span the same interval on the x-axis.

We have deleted Fig. 3b.

- What is the pink region in Figure 8?

We have deleted the pink region in Fig. 8 (now Fig. 6).

- What is on the x-axis in Figure 10?

This is now Figure 7. This figure displays the estimated reporting ratio as a function of time, estimated using a rolling window.

- Figure 16 is not referenced in the text. It is unclear what it shows. If these are data fits it would be better to show a per-region scatter plot against real data. The map display, while visually pleasing, is not informative.

Figure 16 has been deleted; in fact, the entire section on counterfactual analysis was deleted.

- I don't understand the purpose of Figure 17. Is there supposed to be a difference between (a) and (b)? It is not visible by eye. Also legend labels are missing.

Figure 17 has been deleted; in fact, the entire section on counterfactual analysis was deleted.

- I don't understand Figure 18. There are no labels on the legend. I am assuming the orange line is the fit. Why does the first panel show results of multiple simulations but only an average is shown for the second panel? Why does the simulation not go on for as long as the data?

Figure 18 has been deleted; in fact, the entire section on counterfactual analysis was deleted.

- I don't see any purpose to Figures 19-22 other than to show that the models are different. There are no systematic differences and the results are not compared to real data.

Figures 19-22 have been deleted as part of the section on counterfactual analysis.

- Small thing, but in 2.6 I would change "man×day" to "person×day" since that most accurately reflects the correct unit.

Many thanks: we have made this change.

- The authors assume that adaptive policies monitor case numbers. (As an aside, I would avoid using R to denote this threshold to avoid confusion with the reproductive number.)
However, it is also possible to monitor hospitalizations, which are less subject to reporting probability. This should be discussed.

We have modified the notation of the thresholds to B_on, B_off. Case numbers are in fact widely reported and used by public health authorities to monitor the epidemic. Data on hospitalizations are not available to us. We note that, if one assumes that a fraction π of infected are hospitalized, then we end up with a similar censoring problem as with reported versus total cases.

Response to Reviewer 2

I generally found the article to be well presented and insightful, with sensible modelling assumptions throughout. I believe the assessment of control measures will be of interest to others and have just a few suggestions for improvement prior to publication.

The authors spend a long time in the early parts of the paper (sections 3-6) discussing model fitting and presenting the benefits of a metapopulation approach. While much of this is certainly useful and informative, such models are not new and similar approaches have already been extensively applied to the Covid pandemic. What this paper adds to the discussion is its assessment of control measures, and I wonder whether it would be beneficial to move a lot of this early discussion to appendices or a supplement. The paper is not short and I think this would help give it a stronger message.

We thank the reviewer for these detailed remarks. We have shortened the early parts and, in particular, deleted the section on 'counterfactual scenario: no intervention' (previously Sec. 5), to focus on mitigation policies as suggested. This has shortened the paper by 10 pages.

As far as the model itself is concerned, I thought the greatest deficiency for its use in the context of epidemic control was the lack of any quarantining dynamic. Having symptomatic individuals (or even exposed households) reducing their contact may have a huge impact. Though I recognise no model is perfect, it may be worth acknowledging this for future inclusion.

In fact we do have a quarantine effect: this is the role of the coefficient κ < 1 in the force of infection, Eq. (2.2). We assume that social contacts for infected individuals are lower than for others by a factor κ < 1; this may be interpreted as quarantine, i.e. avoidance of social contact by a fraction of infectious individuals (equivalently, a reduction of their contact rate by the factor κ). In the simulations we use κ = 0.5.

In section 6.1 I was a little unclear on the length of the period of social distancing after lockdown. I assume that, no matter the level of compliance, if this period is long enough the social cost would be greater than that of the lockdown itself? Could this be clarified?

We have assumed social distancing after/between lockdowns. The level of reduction in social contacts is parameterized by the coefficient 0 ≤ m ≤ 1, with m = 0 corresponding to lockdown (an estimated ∼90% reduction in contact rates) and m = 1 to 'normal' contacts. The social contact rates are then scaled by m. The reviewer is correct to state that a long period of social distancing can entail a social cost (i.e. a reduction in social contacts) greater than a short lockdown. That is precisely why an efficiency analysis in terms of social cost vs. health outcome is useful (and its outcome far from obvious).

In section 7.1 there is a discussion of increasing the testing capacity. Is the effect of this identical to changing the triggering thresholds? Since the testing capacity increases over time, might there be some temporal effects on the triggering thresholds?
This may be worth mentioning.

The two issues are related but not identical. If one increases the test capacity, say by a factor of 2, then this will double the number of reported cases, so the triggering threshold should also be doubled in order to retain a similar policy. However, increasing the testing capacity also reduces the fraction of non-detected cases and so makes the policy more effective.

Finally, a different colouring (over a larger spectrum) would be beneficial for the map figures; in their current form they appear rather homogeneous.

We have used a broader range of colours, going from yellow to red to black.

Impact of demographic granularity

In response to the comment by Reviewer 1, we have re-calibrated our model (previously using 4 age groups) with a more detailed model with the 16 age groups used by Mossong et al. [4]. We have also looked at other variations, e.g. splitting the age group 20-59 into 2 subgroups, etc. The results are quite similar to our original model with 4 age groups. However, some quantitative differences may emerge when assessing the impact of targeted policies. Figure 1 shows an example of such differences. The dynamics of infections, cases and fatalities are rather insensitive to the demographic granularity, but some differences in the impact of targeted policies are visible. In the revised version, we have followed the recommendation of the reviewers and retained the more granular model with 16 age groups.

This section provides details on the different levels of granularity used, whose results are compared in Figure 1 (now added in the paper in Section 6.1). In each variant, we assume people aged between 20 and 60 travel for work; the force of infection λ_t(r, a), which measures the rate of exposure at location r for age group a, is given by

λ_t(r, a) = α Σ_{a'} σ^r_{a,a'}(t) [κ I_t(r, a') + A_t(r, a')] / N(r, a') + α Σ_{w ∈ W} σ^r_{a,w}(t) Σ_{r'=1}^{K} M_{r,r'}(t) [κ I_t(r', w) + A_t(r', w)] / N(r', w),

where W denotes the set of working-age groups. With four age groups ([0, 20), [20, 60), [60, 70) and 70+), the working-age set is group 2 (W = {2}); in the variant splitting the 20-59 group into two subgroups, it is groups 2-3 (W = {2, 3}); and with the sixteen 5-year age groups of Mossong et al., it is groups 5-12 (W = {5, ..., 12}).

We are grateful to the Associate Editor and reviewer for their constructive comments regarding the differences in infection rates for symptomatic vs asymptomatic carriers. We agree that this may be an important feature, so we have followed the suggestion of the reviewer and re-run the model with this feature, distinguishing
• the infection rate α_0 for asymptomatic carriers from
• the infection rate α_1 > α_0 for symptomatic carriers.
In terms of the model, introducing heterogeneous infection rates only affects the expression of the force of infection (Equation 2.2 in the paper). The rest is unaffected.

Estimation of symptomatic vs asymptomatic infection rates

Previously we had estimated a single infection rate parameter (which was shown to be consistent with values used in most other papers on COVID modeling). Now this value is interpreted as an average value across symptomatic/asymptomatic carriers.
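As an illustration of how the force of infection above can be evaluated in practice, here is a minimal vectorized sketch. The array shapes, function name, default values and 0-indexed commuter-group set are assumptions for illustration, not the authors' implementation (which is available at https://github.com/RenyuanXu/COVID-19):

```python
import numpy as np

def force_of_infection(I, A, N, sigma, M, alpha=0.06, kappa=0.5,
                       commuter_groups=tuple(range(4, 12))):
    """Sketch of lambda_t(r, a) for an age- and region-structured model.

    I, A, N : arrays of shape (K, G): symptomatic, asymptomatic and total
              population counts per region r and age group a.
    sigma   : contact matrices of shape (K, G, G); sigma[r, a, b] is the
              contact rate of age group a with age group b in region r.
    M       : mobility matrix of shape (K, K); M[r, rp] weights exposure
              of region r to commuters' prevalence in region rp.
    kappa   : contact reduction for symptomatic (quarantined) individuals.
    Returns lam of shape (K, G).
    """
    prev = (kappa * I + A) / N                      # infectious prevalence
    # local mixing term: sum over contact age groups b within each region
    lam = alpha * np.einsum('rab,rb->ra', sigma, prev)
    # commuting term: prevalence imported from other regions, entering
    # through contacts with each working-age group w
    for w in commuter_groups:                       # ages 20-60 (assumed)
        imported = M @ prev[:, w]                   # shape (K,)
        lam += alpha * sigma[:, :, w] * imported[:, None]
    return lam

# Toy usage with K = 3 regions and G = 16 age groups
K, G = 3, 16
rng = np.random.default_rng(0)
N = rng.integers(10_000, 50_000, size=(K, G)).astype(float)
I, A = 0.001 * N, 0.004 * N
sigma = rng.uniform(0.1, 1.0, size=(K, G, G))
M = rng.dirichlet(np.ones(K), size=K)               # rows sum to 1
print(force_of_infection(I, A, N, sigma, M).shape)  # (3, 16)
```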
To estimate the heterogeneous infection rates, we have based our approach on the recent study [1], which estimates that, when adjusted for age and gender, the incidence of COVID-19 among close contacts of a symptomatic index case is 3.85 times higher than among close contacts of an asymptomatic carrier, which means α_1 ≈ 3.85 α_0. We explain in Sec. 3.4 how we use this constraint together with previous estimates of the average infection rate to estimate α_0 and α_1 in each age class. The results are shown in Table 1 (age-dependent infection rates: symptomatic α_1 vs asymptomatic α_0).

Influence of different infection rates for symptomatic vs asymptomatic carriers

Using the estimated values of the infection rates α_0 < α_1, we have re-run the scenario simulations and compared the results with the previous simulations with a homogeneous infection rate. These differences are discussed in a new Section 5.3, 'Impact of parameter uncertainty'. Figures 18 and 19 illustrate the impact of having heterogeneous infection rates. Introducing different infection rates for symptomatic vs asymptomatic carriers reduces the estimated fatalities by 10-20% in the scenarios, but does not modify the conclusions regarding the comparison of different policies. For example, in Section 5 (Figure 18), we still observe school closures to be less effective than restrictions on non-work/school gatherings ('no pubs'). We conclude from our new simulations that the policy comparisons, which are the main focus of the paper, are not affected by this feature (although the numerical outcomes of the scenarios are of course affected by these and other parameters). We hope the revised version will be deemed suitable for publication in Open Science.
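For concreteness, the constraint α_1 ≈ 3.85 α_0 pins down both rates once an average rate is fixed. A worked sketch follows, assuming the calibrated average is a prevalence-weighted mean of the two rates with p the symptomatic fraction; the paper's Sec. 3.4 may differ in detail, and the numbers are illustrative, not those of Table 1.

```latex
% Sketch: recovering (alpha_0, alpha_1) from an average rate \bar{\alpha},
% assuming \bar{\alpha} = p\,\alpha_1 + (1-p)\,\alpha_0 (an assumption,
% not necessarily the paper's exact convention).
\begin{align*}
  \bar{\alpha} &= p\,\alpha_1 + (1-p)\,\alpha_0,
  \qquad \alpha_1 = 3.85\,\alpha_0 \\
  \implies \alpha_0 &= \frac{\bar{\alpha}}{1 + 2.85\,p},
  \qquad \alpha_1 = \frac{3.85\,\bar{\alpha}}{1 + 2.85\,p}.
\end{align*}
% Illustrative numbers: p = 0.3 and \bar{\alpha} = 0.06 give
% \alpha_0 \approx 0.032 and \alpha_1 \approx 0.125.
```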
Deciphering Potential Chemical Compounds of Gaseous Oxidized Mercury in Florida, USA

Abstract. The highest mercury (Hg) wet deposition in the United States of America (USA) occurs along the Gulf of Mexico, and in the southern and central Mississippi River Valley. Gaseous oxidized Hg (GOM) is thought to be a major contributor due to high water solubility and reactivity. Therefore, it is critical to understand concentrations, potential for wet and dry deposition, and the GOM compounds present in the air. Concentrations and dry-deposition fluxes of GOM were measured and calculated for Naval Air Station Pensacola Outlying Landing Field (OLF) in Florida using data collected by a Tekran® 2537/1130/1135, the University of Nevada Reno Reactive Mercury Active System (UNRRMAS) with cation-exchange and nylon membranes, and the Aerohead samplers that use cation-exchange membranes to determine dry deposition. Relationships with Tekran®-derived data must be interpreted with caution, since the GOM concentrations measured are biased low depending on the chemical compounds in air and interferences with water vapor and ozone. Criteria air pollutants were concurrently measured. This allowed for comparison and a better understanding of GOM. In addition to other methods previously applied at OLF, use of the UNRRMAS provided a platform for determination of the chemical compounds of GOM in the air. Results from nylon membranes with thermal desorption analyses indicated seven GOM compounds in this area, including HgBr2, HgCl2, HgO, Hg-nitrogen and sulfur compounds, and two unknown compounds. This indicates that the site is influenced by different gaseous-phase reactions and sources. Using back-trajectory analysis during a high-GOM event related to high CO, but average SO2, indicated air parcels moved from the free troposphere and across Arkansas, Mississippi, and Alabama at low elevation. In order to develop methods to measure GOM concentrations and chemistry, and model dry-deposition processes, the actual GOM compounds need to be known, as well as their corresponding physicochemical properties, such as Henry's Law constants.

Introduction

Mercury (Hg) has been classified as a persistent, bioaccumulative toxin (PBT) (UNEP, 2013), and deposition from the atmosphere is considered the dominant pathway by which Hg enters remote ecosystems (Lindberg et al., 2007). In some areas, scavenging by precipitation controls atmospheric Hg removal processes, such as in the southeastern United States of America (USA), where precipitation amounts are high (Prestbo and Gay, 2009). However, wet deposition concentrations are not necessarily correlated with precipitation amounts > 81 mm, and deposition has not decreased with emission reductions as coal combustion facilities in the region have implemented control technologies (Prestbo and Gay, 2009; MDN, 2014). For example, fluxes at Naval Air Station Pensacola Outlying Landing Field (OLF) in Florida were 17.1 µg m−2 in 2012 and 21.0 µg m−2 in 2014 (MDN, 2014). A contributing factor to wet deposition in the Gulf Coast area may be related to high atmospheric convection during thunderstorms and scavenging of gaseous oxidized Hg (GOM) from the free troposphere (Nair et al., 2013), as well as down-mixing of air with high GOM from the free troposphere (Gustin et al., 2012).
An additional concern is that the Tekran® system measurement currently used to quantify GOM does not equally quantify all GOM forms, and has interferences with water vapor and ozone (cf. Ambrose et al., 2013; Gustin et al., 2013; Huang et al., 2013; Lyman et al., 2010, 2016; McClure et al., 2014). Since GOM is considered an important form that can be rapidly removed from the atmosphere due to high water solubility (Lindberg et al., 2007), it is important to understand both atmospheric concentrations and chemistry (i.e., specific chemical compounds). Use of the University of Nevada Reno Reactive Mercury Active System (UNRRMAS), which collects GOM on nylon membranes in tandem with cation-exchange membranes, has indicated that there are different chemical compounds in the air and that concentrations are 2 to 13 times higher than previously thought at locations in the western USA (Huang et al., 2013; Gustin et al., 2016).

Mercury has been studied in Florida for many years, initially because of the high concentrations measured in fish and Florida panthers (Dvonch et al., 1999; Gustin et al., 2012; Marsik et al., 2007; Pancras et al., 2011; Peterson et al., 2012). Long-term GEM and GOM concentrations as measured by the Tekran® system have declined; however, PBM concentrations increased after 2009 (Edgerton, unpublished data), suggesting the atmospheric chemistry has changed. Peterson et al. (2012) and Gustin et al. (2012) suggested, based on detailed assessment of passive sampler and Tekran® system collected Hg data, criteria air pollutants, and meteorology, that at three locations in Florida (OLF, Davie, and Tampa) different GOM compounds were present, and these were generated by in situ oxidation associated with pollutants generated by mobile sources, indirect and direct inputs of Hg from local electricity generating plants, and direct input of Hg associated with long-range transport. At OLF, background deposition was equal to that associated with mobile sources, and a significant component was derived from long-range transport in the spring. Long-range transport has been reported for OLF in the spring (Weiss-Penzias et al., 2011; Gustin et al., 2012). Long-range transport of ozone is a very common event in the spring in the western United States (see special issues on ozone: Gertler and Bennett, 2015; Lefohn and Cooper, 2015).

In this work, GOM collected using the UNRRMAS and the Aerohead dry-deposition measurement method (Lyman et al., 2007, 2009) was analyzed along with Tekran® Hg and criteria air pollutant data to understand GOM chemistry and dry deposition at OLF, located ∼15 km NW of Pensacola, Florida. GOM dry-deposition fluxes were calculated using deposition velocities determined using a multi-resistance model with ambient air GOM concentrations from the Tekran® system (multiplied by a factor of 3 due to bias in the Tekran® system; cf. Huang and Gustin, 2015), and compared to those obtained using Aerohead data. The chemistry of GOM compounds was identified. Results were used to estimate dry-deposition velocities for the GOM compounds observed. The hypothesis for this work was that, since GOM compounds can vary spatially and temporally due to different compounds produced by different sources and processes, this will result in different dry-deposition velocities and dry-deposition fluxes.
Field site

The sampling site was located at OLF (30.550° N, 87.374° W, 44 m above sea level). The closest major Hg emission source is a coal-fired power plant (Plant Crist) northeast of the site (Fig. 1). This area has been used for atmospheric Hg research in previous studies (Caffrey et al., 2010; Lyman et al., 2009; Gustin et al., 2012; Peterson et al., 2012; Weiss-Penzias et al., 2011). OLF is a coastal site (∼25 km away from the Gulf of Mexico) influenced by sea breezes, especially during the summer (Gustin et al., 2012). Based on cluster analyses of data from 1 year at this location, ∼24% of the air is derived from the marine boundary layer during the day and 60% at night (Fig. 1).

Sampling methods

Aerohead samplers for determination of dry deposition were deployed bi-weekly from June 2012 to March 2014. UNRRMAS samples were taken bi-weekly from March 2013 to March 2014. Atmospheric Hg concentrations, including GEM, GOM, and PBM, were measured using a Tekran® system (model 2537/1130/1135, Tekran® Instrument Corp., Ontario, Canada) that was operated with 1 h sampling and 1 h desorption, with detection limits of 0.1 ng m−3, 1.5 pg m−3, and 1.5 pg m−3, respectively.

Reactive Hg (GOM + PBM) concentrations were measured using the UNRRMAS with three sets of two in-series 47 mm cation-exchange membranes (ICE450, Pall Corp., MI, USA). Three sets of nylon membranes (0.2 µm, Cole-Parmer, IL, USA) were also deployed to assess Hg compounds in the air (see Pierce and Gustin, 2017, for a schematic). Cation-exchange membranes have been demonstrated to quantitatively measure specific compounds of GOM in the laboratory; however, they may not measure all compounds (Gustin et al., 2015, 2016). These membranes have also been shown to retain compounds loaded for 3 weeks (Pierce and Gustin, 2017). Nylon membranes do not retain GOM compounds quantitatively, and retention during transport needs to be tested (Huang et al., 2013; Gustin et al., 2015, 2016). Nylon membrane retention is impacted by relative humidity, which might limit uptake of specific forms. Criteria air pollutants and meteorological data, including CO, SO2, O3, PM2.5, NO, NO2, NOy, temperature, relative humidity, wind speed, wind direction, pressure, solar radiation, and precipitation, were available at this site for the sampling period. See Peterson et al. (2012) for detailed information on collection of these measurements.

Aeroheads and membranes were prepared at UNR, packed in a thermally insulated cooler, and shipped back and forth between the laboratory and site. Samples were stored in a freezer (−22 °C) at UNR until analyzed. Cation-exchange membranes were digested and analyzed following EPA Method 1631E (Peterson et al., 2012), and nylon membranes were first thermally desorbed and then analyzed using EPA Method 1631E (Huang et al., 2013). Cation-exchange membrane blanks for the Aerohead and UNRRMAS were 0.40 ± 0.18 ng (n = 42) and 0.37 ± 0.26 ng (n = 77), respectively, and for nylon membranes used in the active system, blanks were 0.03 ± 0.03 ng (n = 69). Therefore, the method detection limit (MDL, 3σ) for a 2-week sampling time (336 h) was 0.13 ng m−2 h−1 for dry deposition. For the active membrane system, the Hg amounts on the back-up filters and blanks were not significantly different (cation-exchange membrane: 0.4 ± 0.3 vs. 0.4 ± 0.3 ng; nylon membrane: 0.03 ± 0.03 vs.
0.02 ± 0.02 ng); therefore, the back-up filters were included in the calculation of the bi-weekly blanks. The bi-weekly MDLs (336 h) for the active systems with cation-exchange and nylon membranes were 2-68 pg m−3 (mean: 24 pg m−3) and 0.01-14.6 pg m−3 (mean: 2.1 pg m−3), respectively. The bi-weekly MDL was calculated as 3 times the standard deviation of the bi-weekly blanks. The MDL was calculated for each period of sampling because it can vary with treatment of the membranes, the time samples are prepared for deployment, deployment at the field site, and handling once returned to the laboratory. The membranes may also vary by material lot. All samples were corrected by subtracting the blank for the corresponding 2-week period.

Data analyses

Hourly Tekran®, criteria air pollutant, and meteorological data were managed and validated by Atmospheric Research & Analysis, Inc. (see Peterson et al., 2012). These were then averaged into 2-week intervals to merge with the membrane measurements.

In previous studies, scaling factors similar to HNO3 (α = β = 10) were used to calculate the oxidized Hg dry-deposition velocity (Marsik et al., 2007; Castro et al., 2012); however, Lyman et al. (2007) used the effective Henry's Law constant and half-redox reactions in neutral solutions of HgCl2, and indicated HONO might better represent the chemical properties of oxidized Hg than HNO3. Huang and Gustin (2015a) indicated that, due to limited understanding of oxidized Hg chemical properties, no single value can be used to calculate oxidized Hg dry deposition, because α and β would change with different GOM compounds. Here, dry deposition was calculated using the multiple-resistance model of Lyman et al. (2007) with α = β = 2, 5, 7, and 10; an illustrative sketch of this type of calculation is given below.

Overall measurements

Similar to previous work at this location (Gustin et al., 2012), O3 was highest in the spring. CO concentrations were high in winter, due to a low boundary layer and biomass burning, and low in summer (Table 1). Observations from the three GOM sampling methods (Tekran®, and nylon and cation-exchange membranes) showed higher GOM concentrations in spring relative to other seasons (Table 1). Concentrations of GOM measured by cation-exchange membranes in the active system were significantly (p value < 0.05, paired t-test) higher than those measured by the Tekran® KCl-coated denuder and nylon membranes, both of which have been reported to be influenced by relative humidity (Huang and Gustin, 2015b; Gustin et al., 2015). Mean cation-exchange membrane concentrations were higher than Tekran®-derived GOM by 14, 48, 11, and 13 times in the spring, summer, fall, and winter, respectively.

Nylon membranes collected higher GOM concentrations than those measured by the Tekran® in spring 2013, when the humidity was low. Overall, air concentrations measured by the Tekran® system in this study were similar to those measured at OLF in 2010 (Peterson et al., 2012). Particulate-bound Hg had the same seasonal trend as GOM, but higher concentrations.

Understanding the oxidants present in air is important for understanding potential GOM compounds. Oxidants to consider include O3, halogenated compounds, and sulfur and nitrogen compounds (cf. Gustin et al., 2016). Since the active system is currently limited to a 2-week sampling period, its measurements are useful for understanding the specific compounds that might be present, and this in turn can be used to understand sources.
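The following minimal sketch illustrates the multiple-resistance calculation referenced under 'Data analyses'. It is a generic big-leaf simplification in the spirit of Zhang et al. (2002) and Lyman et al. (2007), not the exact scheme used in this study; the function name, the resistance formula, and all numerical values are illustrative assumptions.

```python
def deposition_velocity(ra, rb, r_cut_so2, r_cut_o3, r_g_so2, r_g_o3,
                        alpha=2.0, beta=2.0):
    """Generic big-leaf deposition velocity v_d = 1/(Ra + Rb + Rc).

    Ra (aerodynamic) and Rb (quasi-laminar) resistances are taken as
    inputs; the surface resistance Rc scales SO2- and O3-baseline
    cuticle (r_cut_*) and ground (r_g_*) resistances by the factors
    alpha and beta, in the spirit of Zhang et al. (2002).
    All resistances in s m-1; returns v_d in cm s-1.
    """
    inv_rc = (alpha / r_cut_so2 + beta / r_cut_o3
              + alpha / r_g_so2 + beta / r_g_o3)
    rc = 1.0 / inv_rc
    return 100.0 / (ra + rb + rc)  # convert m s-1 to cm s-1

# Illustrative values only (neutral conditions, short vegetation):
v_d = deposition_velocity(ra=40.0, rb=20.0, r_cut_so2=2000.0,
                          r_cut_o3=4000.0, r_g_so2=400.0, r_g_o3=800.0)
print(round(v_d, 2))  # ~0.58 cm s-1; larger alpha, beta give larger v_d
```

With these illustrative inputs, raising α = β from 2 to 10 roughly doubles v_d, which is the qualitative behaviour of the 0.78 vs. 1.59 cm s−1 averages reported later in the text.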
Potential GOM compounds

Standard desorption profiles for GOM compounds obtained by Huang et al. (2013) and Gustin et al. (2015) are compared to those obtained at OLF (Fig. 2). Compounds used as standards included HgBr2, HgCl2, HgN2O6·H2O, HgSO4, and HgO. HgCl2 and HgBr2 have been identified as being released from permeation tubes (Lyman et al., 2016); however, the exact N and S compounds are not known. During 10 periods the nylon membranes (collected in triplicate) collected a significant amount of GOM based on their bi-weekly detection limit (Fig. 2), and the desorption profiles varied. Although data are limited, we have observed similar thermal desorption compounds in other studies (i.e., Huang et al., 2013; Gustin et al., 2016). For example, in the marine boundary layer in Santa Cruz, California, based on the additional curves in Gustin et al. (2015), Hg-nitrogen and sulfur compounds were observed. At the Reno Atmospheric Mercury Intercomparison Experiment (RAMIX) site, Nevada, Huang et al. (2013) reported HgBr2/HgCl2 compounds; this is due to free troposphere inputs at this site (Gustin et al., 2013). At a highway-impacted site, Huang et al. (2013) reported similar patterns to those in Gustin et al. (2016) that included Hg-nitrogen and sulfur compounds, and unknown compounds that generated a high residual tail in the profile. This indicates that similar chemical forms are being collected, which is supported by work described below, and that the compounds are not being generated on the membranes. Lack of generation on membranes has also been shown to be the case in a limited study (Pierce and Gustin, 2017). In addition to our work, HgBr2 and HgCl2 were reported to occur in Montreal, Canada (Deeds et al., 2015).

Seven distinct patterns of release were observed from membranes collected at OLF during thermal desorption. One had a high residual tail that does not match our standard profiles; however, this was also observed in Nevada (Gustin et al., 2016). These occurred on 2 and 9 April and 21 May 2013. This suggests that in spring there is a compound that is unknown based on current standard profiles. Based on our methylmercury profile, generated using methylmercury added as a liquid to membranes and presented in Gustin et al. (2015), it is possible this could be some organic compound. A nitrogen-based compound was found on 21 May 2013 based on the desorption profile. A pattern occurred on 19 March and 19 November 2013, and this corresponded to HgBr2/HgCl2 with some residual tail that is again some compound not accounted for.

Patterns observed on 7 May and 27 August 2013 corresponded to a Hg-nitrogen-based compound with a residual tail. The fifth pattern, which occurred on 14 January 2014 and 24 September 2013, was associated with HgSO4, and the error bars are small. Data collected on 22 October 2013 were noisy and had subtle peaks that correspond with HgO, a nitrogen-based compound, and a high residual tail. It is interesting to note that the 19 November 2013 profile was similar to HgCl2.

Previous studies reported consistent desorption profiles from three sites in Nevada and California without significant point sources (Huang et al., 2013). Huang et al. (2013) presented desorption profiles from highway-, agriculture-, and marine boundary layer-impacted sites. Profiles from the marine boundary layer and agriculture-impacted sites did not show clear residual tails at 185 °C, but these were observed at the highway-impacted site. At OLF, a significant amount of GOM (15-30%) was released after 160 °C. This and previous work imply that we are missing one or more GOM compound(s) (Fig.
2) in our permeation profiles. Interestingly, a peak was found in the 9 April 2013 sample at the GEM release temperature; this is not due to GEM absorption, as demonstrated by Huang et al. (2013), and was also observed in Nevada (Gustin et al., 2016), suggesting an additional unidentified compound. This information indicates GOM compounds at OLF varied with time, and this variation is due to complicated Hg emission sources and chemistry at this location (cf. Gustin et al., 2012).

At OLF, GOM composition on the nylon membrane was more complicated than that collected at rural sites in the western USA (cf. Huang et al., 2013; Gustin et al., 2016); however, similar complexity was observed at a highway location in Reno, Nevada (Gustin et al., 2016). Desorption curves from the nylon filters collected at rural locations in Nevada were in the range of the standard GOM compounds that have been investigated (Huang et al., 2013; Gustin et al., 2016). Curves with multiple peaks in this study imply that there were at least seven GOM compounds collected on the nylon membranes.

Dry-deposition measurements

Dry deposition of GOM measured by Aerohead samplers ranged from 0 to 0.5 ng m−2 h−1, and 83% of GOM dry-deposition values were higher than the detection limit (0.13 ng m−2 h−1). Higher GOM dry deposition was observed in spring relative to winter (one-way ANOVA on ranks, p value < 0.01); GOM dry deposition was slightly lower in summer and fall (not statistically different) relative to the spring, due to high wet deposition and scavenging processes during these seasons. The pattern in GOM seasonal dry deposition was similar to that reported by Peterson et al. (2012). However, GOM dry-deposition rates were significantly higher in this study than the 2010 values (0.2 vs. 0.05 ng m−2 h−1). This is due to the correction of 0.2 ng m−2 h−1 applied in Peterson et al. (2012) to account for contamination of the Aerohead, which has been demonstrated to be unnecessary (Huang et al., 2014). Although the highest GOM dry deposition measured using the Aerohead sampler and the highest GOM concentrations measured using the UNRRMAS were observed in spring 2013, the value in March 2014 was relatively low. In March 2014, atmospheric conditions were more similar to winter than spring, with low temperatures and high CO concentrations. These results are different from those calculated using Tekran® measurements, which suggest low GOM concentrations and high deposition velocities; this is because the denuder measurements are biased low. Modeled GOM dry-deposition fluxes were calculated using GOM concentrations measured by the Tekran® system that were multiplied by a factor of 3 (cf. Huang et al., 2014). In general, measured Hg dry-deposition fluxes were similar to model simulations of GOM dry deposition with α = β = 2 during winter, spring, and fall (see below; Fig. 3). Measured Hg dry deposition was significantly higher than modeled results (both α = β = 2 and 10) in summer and early fall (Fig.
This indicates that there are GOM compounds in the summer that are poorly collected by the denuder, which can also help explain the higher wet deposition measured during this season (Prestbo and Gay, 2009). The highest deposition was measured during the spring, when the input from long-range transport is greatest (Gustin et al., 2012). Figure 3 shows the disparity that occurs by season and compares modeled and measured values. For example, in spring α = β = 10 significantly overestimates deposition, while in summer and early fall measured deposition is greater than modeled values.

Because of the low GOM concentrations and the influence of humidity on the nylon membrane measurements (Huang and Gustin, 2015b), GOM compounds were identified in only one summertime sample, as HgN₂O₆·H₂O. During this time, measured GOM dry deposition was ~6 times higher than both modeled results, and, considering the Tekran® correction factor of 3, the membrane-based HgN₂O₆·H₂O dry-deposition flux was ~18 times higher than the Tekran®-model-based value. Gustin et al. (2015) indicated that the HgN₂O₆·H₂O collection efficiency on cation-exchange membranes in charcoal-scrubbed air was ~12.6 times higher than on the Tekran® KCl-coated denuder.

However, in May 2013, two samples were dominated by a profile similar to the Hg-nitrogen-based compound, with lower measured-to-modeled ratios (2.1-6.0 with the Tekran® correction factor). This might be due to ambient air GOM chemistry being dominated by a compound with a different dry-deposition velocity, less interference on the denuder surface, or different parameters in the dry-deposition scheme. In May, GOM concentrations measured by the Tekran® were higher than in summer due to lower wet deposition and lower mean humidity (Table 1). Therefore, despite the fact that the GOM collection efficiencies of both the Tekran® and the nylon membranes are affected by environmental conditions, this demonstrates the presence of different compounds in the air. The dry-deposition scheme requires Henry's Law constants to determine the scaling factors for the specific resistances of different compounds (Lyman et al., 2007; Zhang et al., 2002). Lin et al. (2006) stated that the dry-deposition velocity of HgO is 2 times higher than that of HgCl₂ due to the different Henry's Law constants. The Henry's Law constants for HgCl₂, HgBr₂, and HgO presented in the previous literature (Schroeder and Munthe, 1998) have high uncertainty, as it is not clear how these calculations were done (S. Lyman, Utah State University, personal communication, 2015), and the constants for HgN₂O₆·H₂O and HgSO₄ are unknown.

Table 2. Modeled (multiple-resistance model) and measured (surrogate surfaces) GOM dry deposition (ng m⁻² h⁻¹). GOM concentrations used for the modeled results are from the Tekran® data, corrected by the compounds' corresponding ratios from Gustin et al. (2015, 2016). Model resistance for the unknown compound was calculated using the Tekran® data multiplied by 3. The tentative GOM compounds are identified from the nylon membrane results.

If the ratios (HgBr₂: 1.6, HgCl₂: 2.4, HgSO₄: 2.3, HgO: 3.7, and HgN₂O₆·H₂O: 12.6) of GOM concentrations measured by the Tekran® vs. cation-exchange membranes for the different permeated GOM compounds (Gustin et al., 2015; Huang et al., 2013) are used to correct the Tekran® GOM data in this study, modeled GOM dry deposition (Fig. 3) is not correlated with the measurements.
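A minimal sketch of how the compound-specific ratios quoted above could be applied to bi-weekly Tekran® GOM data, given a tentative compound identification from the membrane desorption profile; the sample dates and concentrations below are hypothetical placeholders.

# Minimal sketch: correct Tekran GOM concentrations with the compound-specific
# membrane-to-Tekran ratios quoted above. Sample data are hypothetical.

RATIO = {
    "HgBr2": 1.6,
    "HgCl2": 2.4,
    "HgSO4": 2.3,
    "HgO": 3.7,
    "HgN2O6.H2O": 12.6,
    "unknown": 3.0,   # default factor of 3 (cf. Huang et al., 2014)
}

def corrected_gom(tekran_ng_m3, tentative_compound):
    """Scale a Tekran GOM concentration by the ratio for the compound
    tentatively identified from the nylon-membrane desorption profile."""
    return tekran_ng_m3 * RATIO[tentative_compound]

samples = [("2013-03-09", 0.008, "HgBr2"),
           ("2013-11-19", 0.006, "HgCl2"),
           ("2013-07-16", 0.004, "HgN2O6.H2O")]
for date, conc, compound in samples:
    print(f"{date} {compound:12s} Tekran {conc:.3f} -> "
          f"corrected {corrected_gom(conc, compound):.3f} ng m^-3")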
For example, on 9 March and 19 November 2013 (Fig. 3), GOM was dominated by HgBr₂ and HgCl₂. Dry deposition of HgBr₂ from the Aerohead measurements was close to the modeled values with α = β = 10, whereas modeled and measured HgCl₂ dry deposition matched with α = β = 2. If we assume the model is correct, the average deposition velocity for α = β = 2 was 0.78 cm s⁻¹, and for α = β = 10 it was 1.59 cm s⁻¹. Three samples were identified as Hg-nitrogen-based compounds using the nylon membranes; however, the ratios of measured to modeled HgN₂O₆·H₂O dry deposition were inconsistent over time. In spring, all modeled HgN₂O₆·H₂O dry-deposition values were much higher than the measured values; in summer, however, measured and modeled HgN₂O₆·H₂O dry deposition were similar with α = β = 5 (Table 2). If one assumes that the dry-deposition measurements made by the surrogate surfaces are accurate, this demonstrates that different forms occur over time and that these will have different deposition velocities, as suggested by Peterson et al. (2012).

Elevated pollution event

In spring 2013, there was a period when concentrations of O₃, CO, and all measured Hg species were high (Fig. 4). Figure 5 shows that during this time, air masses traveled west to east across the continent. The air movement pattern is similar to that found by Gustin et al. (2012) for OLF Class 2 events, which had low SO₂ concentrations. During this 4-week period, air parcels traveling to OLF were in the free troposphere and descended to the surface (Fig. 5). Although there are coal-fired power plants in the upwind area within a 500 km range (Fig. 1), the low SO₂ concentrations indicate that the elevated CO, O₃, and GOM values were not from fossil fuel combustion. Gustin et al. (2012) also indicated that free-troposphere air impacted OLF. The first few endpoints of these trajectories indicate that air parcels entered North America at > 1000 m a.g.l.; therefore, some of the air measured during this time was transported from the free troposphere. Ozone concentrations were also similar to those measured in the free troposphere in Nevada at this time (Gustin et al., 2014). It is important to note that the back trajectories cover only 72 h, and the ones that subsided to surface levels in the midwestern USA were traveling fast. This is a common event in spring that represents free-troposphere and/or stratosphere transport into the western USA and Florida (Gertler and Bennett, 2015; Lefohn and Cooper, 2015; Weiss-Penzias et al., 2011; Gustin et al., 2012).

The chemical composition of this event suggests potential input from Asia, as previously suggested for three locations in Florida in spring by Gustin et al. (2012). During this time, based on the thermal desorption profiles, HgBr₂ was measured initially, and the profiles obtained subsequently showed a gradual increase in GOM release with increasing temperature and a high residual tail. This would suggest initial subsidence of air from the stratosphere and/or free troposphere (cf. Lyman and Jaffe, 2012) followed by a mixture of polluted air, as observed in the western USA (cf. VanCuren and Gustin, 2015).
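The gridded frequency distribution used to summarize the trajectories (Fig. 5) can be sketched as a simple two-dimensional binning of trajectory endpoints. In the minimal example below, the endpoint coordinates are random placeholders; in practice they would be parsed from the HYSPLIT trajectory output.

# Minimal sketch: gridded frequency distribution of 72 h back-trajectory
# endpoints, as summarized in Fig. 5. Coordinates are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_traj, n_hours = 40, 72
lons = 273.0 - rng.gamma(2.0, 8.0, size=(n_traj, n_hours))   # drift west of OLF
lats = 30.5 + rng.normal(2.0, 3.0, size=(n_traj, n_hours))

lon_edges = np.arange(230, 280, 1.0)   # 1-degree grid, degrees east
lat_edges = np.arange(20, 55, 1.0)
counts, _, _ = np.histogram2d(lons.ravel(), lats.ravel(),
                              bins=[lon_edges, lat_edges])

freq = counts / counts.sum()           # fraction of all endpoints per cell
i, j = np.unravel_index(freq.argmax(), freq.shape)
print(f"densest cell: lon {lon_edges[i]:.0f}-{lon_edges[i+1]:.0f}, "
      f"lat {lat_edges[j]:.0f}-{lat_edges[j+1]:.0f}, "
      f"fraction {freq[i, j]:.3f}")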
Conclusions

The chemical forms of GOM in the atmosphere at OLF varied by season, as suggested by Gustin et al. (2012). Seven potentially different GOM compounds were identified at OLF using nylon membranes with thermal desorption analysis, including HgBr₂, HgCl₂, HgO, Hg-nitrogen and Hg-sulfur compounds, and two unknown compounds. Given the long sampling time, detailed assessment of specific sources is difficult, but the presence of different compounds indicates multiple sources and different GOM chemistry. Comparing modeled and measured Hg dry-deposition fluxes also demonstrates that there are different forms in air, and this will affect dry-deposition velocities. In order to measure GOM accurately, we need to know which compounds exist in the atmosphere.

Data availability

Data are available upon request from the first author, Jiaoyan Huang (huangj1311@gmail.com).

Figure 2. Desorption profiles from nylon membranes with standard materials in laboratory investigations (top) and field measurements. Each whisker is 1 standard deviation and is shown only at the desorption peak. Note that the Hg-nitrogen compound in the permeation tube was HgN₂O₆·H₂O. The y axis indicates the percentage released at each temperature.

Figure 3. Measured and modeled GOM dry-deposition fluxes. Tekran® data (correction factor of 3) were used with multiple-resistance models (α = β = 2 and 10). Tentative GOM compounds were determined using the results from nylon membrane desorption.

Figure 4. Temporal variation of GOM concentrations (mean ± standard deviation, bi-weekly averages); the outlined rectangle indicates a polluted event with high Hg, CO, and ozone concentrations. Data are missing for 3 weeks because they were not collected. Tekran® data are presented when > 75 % of the data were available, and membrane data are shown when above the method detection limit.

Figure 5. Results of the gridded frequency distribution (top panel); lighter colors indicate fewer endpoints in a grid cell. Altitude of 72 h trajectories (bottom panel) during the polluted event (12 March-2 April 2013); lighter dots in the left panel represent lower altitudes.

Table 1. Overall seasonal averages of criteria air pollutants; GEM, PBM, and GOM (measured using three different methods) concentrations; GOM dry deposition (DD); and meteorological data at OLF.
Imputation-Based Fine-Mapping Suggests That Most QTL in an Outbred Chicken Advanced Intercross Body Weight Line Are Due to Multiple, Linked Loci

The Virginia chicken lines have been divergently selected for juvenile body weight for more than 50 generations. Today, the high- and low-weight lines show a >12-fold difference for the selected trait, 56-d body weight. These lines provide unique opportunities to study the genetic architecture of long-term, single-trait selection. Previously, several quantitative trait loci (QTL) contributing to weight differences between the lines were mapped in an F2 cross between them, and these were later replicated and fine-mapped in a nine-generation advanced intercross of them. Here, we explore the possibility of further increasing the fine-mapping resolution of these QTL via a pedigree-based imputation strategy that aims to better capture the genetic diversity in the divergently selected, but outbred, founder lines. The founders of the intercross were high-density genotyped, and pedigree-based imputation was then used to assign genotypes throughout the pedigree. Imputation increased the marker density 20-fold in the selected QTL, providing 6911 markers for the subsequent analysis. Both single-marker association and multi-marker backward-elimination analyses were used to explore regions associated with 56-d body weight. The approach revealed several statistically independent associations that were robust to population structure, and it increased the mapping resolution. Further, most QTL were found to contain multiple independent associations to markers that were not fixed in the founder populations, implying a complex underlying architecture due to the combined effects of multiple, linked loci, perhaps located on independent haplotypes that still segregate in the selected lines.

As these traits are related to metabolism, feeding behavior, and growth, they also provide a good model for translational studies to decipher the genetic architecture of traits of interest in human medicine, including obesity, eating disorders, and diabetes. Kemper et al. (2012) recently reviewed the literature on the genetic basis of body size and highlighted how complex the genetic architectures of body size are across species, with contributions by many loci with large, intermediate, and small individual effects. Also within species, the genetic basis of variation in body size among strains of mice (Valdar et al. 2006), breeds of cattle (Saatchi et al. 2014), pigs (Yoo et al. 2014), and chickens (Van Goor et al. 2015) is often polygenic and due to polymorphisms with modest individual effects. Studies of experimental crosses from artificially selected populations with extreme body sizes in the mouse (Bevova et al. 2006; Parker et al. 2011) and chicken (Sheng et al. 2015) using, for example, chromosome substitution strains (Bevova et al. 2006) and advanced intercross lines (AILs) (Darvasi and Soller 1995; Besnier et al. 2011; Parker et al. 2011) have revealed that the responses to selection in these populations have resulted from selection on highly complex and polygenic genetic architectures. The Virginia lines are experimental populations established in 1957 to study the genetic effects of long-term (>50 generations), divergent, single-trait selection for high (HWS) or low (LWS) 56-d body weight in chickens (Dunnington and Siegel 1996; Márquez et al. 2010; Dunnington et al. 2013).
The lines originated from the same base population, composed by crossing seven partially inbred White Plymouth Rock chicken lines, and today display more than a 12-fold difference in body weight at 56 d of age (Márquez et al. 2010; Dunnington et al. 2013). In addition to the direct effects of selection on body weight, the selected lines also display correlated selection responses for a range of metabolic and behavioral traits, including disrupted appetite, obesity, and altered antibody response (Dunnington et al. 2013).

The Virginia HWS and LWS lines have been used extensively for studying the genetic architecture of body weight and other metabolic traits. These studies have uncovered a number of loci with minor direct effects on body weight, metabolic traits, and body-stature traits by quantitative trait loci (QTL) mapping in an F2 intercross (Jacobsson et al. 2005; Park et al. 2006; Wahlberg et al. 2009). Also, a network of epistatic loci has been found to make a significant contribution to long-term selection response through the release of selection-induced additive variation (Carlborg et al. 2006; Le Rouzic et al. 2007; Le Rouzic and Carlborg 2008). Explorations of the genome-wide footprint of selection by selective-sweep mapping suggest that perhaps >100 loci throughout the genome have contributed to the selection response (Johansson et al. 2010; Pettersson et al. 2013), and many of these contribute to 56-d body weight (Sheng et al. 2015).

To replicate and fine-map the body weight QTL inferred in the F2 intercross, we developed a nine-generation AIL that was genotyped and phenotyped for body weight at 56 d of age (BW56). This large AIL originated from the same founders as the F2 intercross, but was selectively genotyped at a higher resolution (1 marker/cM) in nine QTL. In this population, most of the original minor and epistatic (Pettersson et al. 2011) QTL were replicated and fine-mapped. These earlier studies analyzed the data using a haplotype-based linkage-mapping approach, either in a variance-component-based model framework to infer single-locus effects, or in a fixed-effect model framework assuming fixed alternative alleles in the two founder lines for detecting epistasis (Pettersson et al. 2011). The variance-component model was used in the replication study to avoid the assumption of allelic fixation in the founder lines. By implementing it in a Flexible Intercross Analysis modeling framework (Rönnegård et al. 2008), it was expected to improve power when the parental lines carry alleles with correlated effects (e.g., multiple alleles with similar effects). Although the initial studies mapped QTL under the assumption of fixation, or an effect correlation, of divergent alleles in the parental lines, the results at the same time implied that multiple alleles might be segregating in several of the mapped regions. To this end, the first QTL replication study in the AIL population found large within-founder-line heterogeneity in the allelic effects. Later, the selective-sweep studies, which utilized data from multiple generations of divergently selected and relaxed lines, identified ongoing selection and multiple sweeps in many QTL (Johansson et al. 2010; Pettersson et al. 2013), as well as extensive allelic purging (Pettersson et al. 2013). This allelic heterogeneity challenges attempts to dissect the architecture of the selected trait via, e.g., QTL introgression (Ek et al. 2012).
Alternative approaches are therefore needed to uncover multi-locus, multi-allelic genetic architectures in QTL and their contributions to the long-term response to directional selection.

In this study, we explore an imputation-based association-mapping strategy for further dissection of previously mapped and replicated QTL (Pettersson et al. 2011). For this, we made use of available high-density (60K SNP-chip) genotypes for founders (Johansson et al. 2010; Pettersson et al. 2013) and intermediate-density SNP genotypes in several QTL in the entire nine-generation AIL pedigree. By increasing the marker density in the QTL throughout the AIL by imputation, we aimed to better capture the effects of segregating haplotypes within and between the divergently selected founder populations than was possible with the previously used markers. This aim can be achieved because the original markers genotyped in the AIL were selected to identify high- and low-line derived alleles, and not alleles that segregate within or across the founder lines. By testing for association between imputed markers and body weight, the fine-mapping analyses were less constrained by the original selection of markers and facilitated a more thorough exploration of the genetic architectures of the nine evaluated QTL. A single-marker association analysis was first used to identify regions with candidate associations. These were then simultaneously analyzed using a backward-elimination approach with bootstrapping to identify statistically independent signals that were robust to the effects of markers elsewhere in the genome and to the pedigree structure in the population. In regions where the signals were robust to the pedigree structure, the results from the single-marker association analysis were used to fine-map the region. Our imputation-based approach replicated most QTL and also improved the resolution of the fine-mapping analyses by using not only the recombination events in the AIL, but also the historical recombinations in the pedigree. We found that several of the original QTL are likely due to the combined effects of multiple linked loci, several of which are segregating in the founder lines of the AIL.

Animals

The Virginia chicken lines are part of an ongoing selection experiment to study the genetics of long-term, single-trait selection (Márquez et al. 2010; Dunnington et al. 2013). It was initiated in 1957 from a base population generated by intercrossing seven partially inbred lines of White Plymouth Rock chickens. From the offspring of the partially inbred lines resulting from the intercrossing, the birds with the highest and lowest 56-d body weights (with some restrictions), respectively, were selected to produce the high- and low-weight selected lines (HWS and LWS) (Márquez et al. 2010; Dunnington et al. 2013). Since then, the lines have undergone divergent selection for increased and decreased body weights, with one new generation hatched in March of every year. An AIL was founded by reciprocal crosses of 29 HWS and 30 LWS founder birds from generation 40. The mean, sex-averaged 56-d body weights for HWS and LWS at this generation were 1522 g and 181 g, respectively. Repeated intercrossing of birds was used to develop a nine-generation AIL consisting of generations F0-F8. In each generation, 90 birds were bred by paired mating, genotyped, and weighed at 56 d of age (BW56). In total, the AIL population consisted of 1536 F0-F8 individuals with complete records on pedigree and genotypes (see Genotyping), and 1348 F2-F8 individuals with juvenile body weight (BW56) records.

Genotyping

The complete AIL pedigree (1536 birds) had earlier been genotyped in nine selected QTL for 304 SNP markers that passed quality control, as described in Besnier et al. (2011). Further, 40 of the founders of the pedigree (20 HWS and 20 LWS) had also been genotyped earlier using a whole-genome 60K SNP-chip (Johansson et al. 2010; Pettersson et al. 2013). Of the markers from the SNP-chip that were informative and passed quality control in that study, 6607 are located in the nine QTL regions targeted here. When merging the information from the 60K SNP-chip with the 304 markers genotyped earlier, 55 markers in 40 founders had been genotyped using both methods. Out of these 55 markers, 28 markers with genotype inconsistencies between the genotyping technologies were removed during quality control. In total, our analyses were based on 6888 markers, where 40 of the 59 AIL founders had genotypes for all markers, and the remaining individuals in the pedigree had genotypes for 281 markers. Table 1 shows how these markers are distributed across the nine QTL regions.

Table 1. Marker distribution across the targeted QTL regions (positions in galGal3).

GGA  QTL      Start (bp)    End (bp)      Length (bp)  Backbone  Chip  Total  Markers/Mb
1    Growth1  169,634,954   181,087,961   11,453,008   26        504   530    46
2    Growth2  47,929,675    65,460,002    17,530,328   33        667   700    40
2    Growth3  124,333,151   133,581,122   9,247,972    19        395   414    45
3    Growth4  24,029,841    68,029,533    43,999,693   57        1885  1942   44
4    Growth6  1,354,213     13,511,203    12,156,991   23        514   537    44
4    Growth7  85,459,943    88,832,107    3,372,165    14        141   155    46
5    Growth8  33 ...
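A minimal sketch of the cross-platform concordance check described above, in which doubly genotyped markers are removed if any founder's calls disagree between the two technologies; marker names and genotypes are hypothetical placeholders.

# Minimal sketch of the cross-platform quality-control step: markers typed
# with both technologies are dropped if any founder's calls disagree.
# Marker names and genotypes below are hypothetical placeholders.

def discordant_markers(chip_calls, backbone_calls):
    """Each argument maps marker -> {individual: genotype string}.
    Returns markers with at least one conflicting, non-missing call."""
    bad = set()
    for marker in chip_calls.keys() & backbone_calls.keys():
        for ind, g_chip in chip_calls[marker].items():
            g_back = backbone_calls[marker].get(ind)
            if g_back is not None and g_chip != g_back:
                bad.add(marker)
                break
    return bad

chip = {"rs100": {"F0_01": "AG", "F0_02": "GG"},
        "rs200": {"F0_01": "CC", "F0_02": "CT"}}
backbone = {"rs100": {"F0_01": "AG", "F0_02": "GG"},
            "rs200": {"F0_01": "CT", "F0_02": "CT"}}   # F0_01 disagrees

print(discordant_markers(chip, backbone))   # {'rs200'} would be removed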
Phasing and imputation of markers

All genotyped markers in the QTL (Table 1) were first ordered according to their physical location in the chicken genome assembly of May 2006 (galGal3). In the ordered marker set, the SNP-chip markers were evenly distributed in the intervals between the sparser set of markers genotyped across the entire AIL. Using the software ChromoPhase (Daetwyler et al. 2011), we phased and imputed genotypes for the complete set of 6888 markers across the entire AIL pedigree. ChromoPhase first phases large segments of chromosomes, in our case the QTL regions. It then imputes the missing genotypes in the AIL individuals genotyped with the sparse set of markers, using the genotype information available in the high-density genotyped founders together with the pedigree information. It thus predicts both phased haplotypes across the nine studied QTL and genotypes at markers that were genotyped in only a subset of the founder individuals in the pedigree.

A two-step fine-mapping approach accounting for population structure

Earlier studies have shown that the genetic architecture of body weight is highly polygenic in the Virginia lines (e.g., Siegel 1962a,b; Jacobsson et al. 2005; Wahlberg et al. 2009; Johansson et al. 2010; Besnier et al. 2011; Pettersson et al. 2011, 2013; Sheng et al. 2015). We therefore implemented a forward-selection/backward-elimination procedure, with a termination criterion suitable for a polygenic trait, in a bootstrap-based framework to correct for population structure in the AIL (Valdar et al. 2009; Sheng et al. 2015). As not all markers with genotypes could be included in a backward-elimination analysis, due to the limited sample size, we first used a forward-selection-based single-marker association analysis to identify a smaller set of statistically suggestive, independent signals within each QTL region. The backward-elimination analysis (Valdar et al. 2009; Sheng et al. 2015) was then used to identify associations robust to possible influences of genetic dependencies (linkage or LD) between markers within the QTL or of population structure in the AIL (Peirce et al. 2008; Cheng et al. 2010).
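The sketch below illustrates only the basic idea behind pedigree-based imputation, not the actual ChromoPhase algorithm: when an offspring's sparse backbone alleles match a phased parental haplotype across a segment, the dense founder genotypes within that segment can be carried down. Positions and alleles are hypothetical placeholders.

# Illustrative sketch of the idea behind pedigree-based imputation (this is
# NOT the ChromoPhase algorithm). Marker positions and alleles are
# hypothetical placeholders.

# One phased parental haplotype: position -> allele (dense, from the 60K chip).
parent_hap = {100: "A", 150: "C", 200: "G", 250: "T", 300: "A"}
backbone_positions = [100, 300]          # sparse markers typed in the whole AIL

def impute_segment(offspring_backbone, parent_hap, backbone_positions):
    """Copy dense alleles onto an offspring haplotype when its backbone
    alleles match the parental haplotype at the segment boundaries."""
    if all(offspring_backbone[p] == parent_hap[p] for p in backbone_positions):
        return dict(parent_hap)          # inherit the whole dense segment
    return {p: offspring_backbone[p] for p in backbone_positions}  # no match

offspring = {100: "A", 300: "A"}         # matches the parental haplotype
print(impute_segment(offspring, parent_hap, backbone_positions))
# -> {100: 'A', 150: 'C', 200: 'G', 250: 'T', 300: 'A'}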
Single-marker association analyses: The qtscore function in the GenABEL package (Aulchenko et al. 2007) was used to test for association between body weight at 56 d of age and individual, genotyped or imputed, genetic markers within the targeted QTL. The allelic effect of each marker, b_genotype, was estimated using a regression model (Model 1),

y = μ + Xβ + Z·b_genotype + e   (Model 1)

where the genotype at each marker was coded in Z as 0 if homozygous for the major allele, 1 if heterozygous, and 2 if homozygous for the minor allele. Sex and generation were added as categorical covariates, with two levels for sex and seven levels for generation, defined for each individual in X, with β containing the corresponding covariate effects. The phenotype, body weight at 56 d of age, is given in the numerical variable y. The residual e was assumed to be iid and normally distributed around 0 with variance σ². μ is the intercept, which in this model represents the mean body weight at 56 d of age for F2 females. The associations for the individual markers from this model were used for comparisons to results from earlier linkage-mapping analyses that fine-mapped the QTL in this pedigree without accounting for the possible effects of pedigree structure (Model A in Besnier et al. 2011). Further, they were also used to evaluate the resolution of regions with associations robust to the pedigree structure in the population (described in detail in Results).

Next, a forward-selection analysis was performed by scanning across all markers within each QTL using Model 1. If any of the markers were nominally significant (P < 0.05) in the scan, the marker with the strongest association was added as a covariate in the model. This procedure was repeated until no additional significant markers were detected. The markers from this analysis with an allele frequency > 0.10 in the population were subjected to the full backward-elimination analysis described in the next section.
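A minimal sketch of the single-marker scan (Model 1) and the forward-selection loop described above, written with statsmodels rather than the GenABEL qtscore call used in the study; the data frame and column names are hypothetical placeholders.

# Minimal sketch of the single-marker scan (Model 1) and forward selection.
# Data frames and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

def marker_pvalue(df, marker, covariate_markers=()):
    """P-value for one marker's additive effect (0/1/2 coding), adjusting
    for sex, generation, and any previously selected markers."""
    rhs = ["C(sex)", "C(generation)", marker, *covariate_markers]
    fit = smf.ols("bw56 ~ " + " + ".join(rhs), data=df).fit()
    return fit.pvalues[marker]

def forward_select(df, markers, alpha=0.05):
    """Repeatedly add the most significant marker until none pass alpha."""
    selected, remaining = [], list(markers)
    while remaining:
        pvals = {m: marker_pvalue(df, m, selected) for m in remaining}
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
        remaining.remove(best)
    return selected

# df = pd.read_csv("ail_qtl_region.csv")   # hypothetical input file
# print(forward_select(df, [c for c in df.columns if c.startswith("rs")]))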
A multi-locus association analysis to identify regions with associations robust to the pedigree structure in the population: In short, we used a bootstrap-based backward-elimination model selection framework (Sheng et al. 2015) across the markers selected by forward selection in the QTL. An adaptive model selection criterion controlling the false discovery rate (Abramovich et al. 2006; Gavrilov et al. 2009) was used during backward elimination in a standard linear model framework, starting with a full model including the fixed effects of sex and generation and the additive effects of all markers (Model 2),

y = μ + Xβ + Σ_i Z_i·b_marker,i + e   (Model 2)

where phenotype, sex, and generation were coded as described for Model 1, and where e is again assumed to be iid and normally distributed around 0 with variance σ². The intercept, μ, represents the mean body weight at 56 d of age for female individuals from the F2 generation. In Model 2, genotypes were coded based on the line origin of the alleles at each locus. Genotypes of individuals homozygous for the major allele in the AIL founders from the high-weight selection line were coded as 1 at that locus. If an individual was heterozygous, its genotype was coded as 0. Genotypes of individuals homozygous for the allele corresponding to the major allele in AIL founders from the low-weight selection line were coded as -1. By coding genotypes in a -1, 0, and 1 manner, the estimates of the marginal allele-substitution effects, b_marker, from fitting Model 2 will be negative if the allele that is at the highest frequency in the high-weight line decreases weight, or if the allele at the highest frequency in the low-weight line increases weight. Convergence was based on a 20% false discovery rate (FDR) level. The analysis was performed using bootstrapping with 1000 resamples. Markers with a Resample Model Inclusion Probability (RMIP) > 0.46, as suggested for an AIL at generation F18 (Valdar et al. 2009), were included in the final model. The FDR in the final model was confirmed using the original FDR procedure described in Benjamini and Hochberg (1995), as implemented in the p.adjust function in the R stats package (R Development Core Team 2015).

The additive genetic effect for each locus was estimated using the multi-locus genetic model described above (Model 2). The contribution of a set of n associated markers to the founder line difference was calculated as Σ_{i=1}^{n} 2a_i·(p_i(HWS) - p_i(LWS)), where a_i is the allele-substitution effect for marker i, and p_i(HWS) and p_i(LWS) are the frequencies of the major AIL allele at marker i in the HWS and LWS founders, respectively.

Data availability

Genotype, phenotype, and pedigree data are included in the supplemental files. Supplemental Material, File S1, contains detailed descriptions of all supplemental data files. File S2 contains the genotypes, File S3 the pedigree, and File S4 the phenotypes.
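A minimal sketch of the bootstrap-based backward elimination, the RMIP computation, and the founder-line-difference formula described above. It is simplified relative to the study: a fixed per-step p-value threshold stands in for the adaptive FDR criterion of Gavrilov et al. (2009), and the input arrays are assumed to be prepared elsewhere.

# Minimal, simplified sketch of bootstrap backward elimination with RMIP and
# of the founder-line-difference contribution. A plain p-value threshold
# stands in for the adaptive FDR stopping rule used in the study.
import numpy as np
import statsmodels.api as sm

def backward_eliminate(X, y, names, alpha=0.20):
    """Drop the least significant marker until all remaining pass alpha."""
    keep = list(range(X.shape[1]))
    while keep:
        fit = sm.OLS(y, sm.add_constant(X[:, keep])).fit()
        p = fit.pvalues[1:]                     # skip the intercept
        worst = int(np.argmax(p))
        if p[worst] <= alpha:
            break
        keep.pop(worst)
    return [names[i] for i in keep]

def rmip(X, y, names, n_boot=1000, seed=1):
    """Fraction of bootstrap resamples in which each marker is retained."""
    rng = np.random.default_rng(seed)
    counts = {m: 0 for m in names}
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        for m in backward_eliminate(X[idx], y[idx], names):
            counts[m] += 1
    return {m: c / n_boot for m, c in counts.items()}

def line_difference(effects, p_hws, p_lws):
    """Sum over markers of 2 * a_i * (p_i(HWS) - p_i(LWS))."""
    return sum(2 * a * (ph - pl) for a, ph, pl in zip(effects, p_hws, p_lws))

# Markers with RMIP > 0.46 would enter the final model, e.g.:
# final = [m for m, v in rmip(X, y, names).items() if v > 0.46]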
RESULTS AND DISCUSSION

We compared the results of the imputation-based association analyses with the previously reported results from the linkage-based analysis of the same nine QTL in Besnier et al. (2011). Figure 1 shows the statistical support for association and linkage to BW56 across the QTL. The significances for all the genotyped and imputed markers from the single-marker association analyses are provided together with the results from Model A in Besnier et al. (2011), which were also obtained without correction for population structure. Figure 1 also highlights those regions that contain associations robust to the pedigree structure in the bootstrap-based forward-selection/backward-elimination analyses (Figure 1 and Table 2). Overall, the results from these three analyses overlap well. Together, they show that most regions with strong associations in the single-marker analysis were robust to the pedigree structure, and that the association analysis approach using imputed SNP genotypes suggests that several QTL were likely due to multiple linked loci. In the sections below, these results are described and discussed in more detail.

Four statistically independent associated markers in the GGA7 QTL Growth9

The QTL Growth9 on GGA7 (Gallus gallus autosome 7) (Figure 1; 10.9-35.5 Mb) was the only QTL that reached genome-wide significance in the first F2 intercross between the HWS and LWS lines (Jacobsson et al. 2005). It was later identified as a central QTL in an epistatic network explaining a large part of the difference in weight between the HWS and LWS lines (Carlborg et al. 2006). In the earlier fine-mapping analysis, the linkage signal covered most of the QTL region (from 15 to 35 Mb), but subsequent analyses showed that two independent loci were segregating in the region. The signal in the imputation-based association analysis performed here is more focused, with a highly significant signal in a 2.8 Mb region between 23.7 and 26.4 Mb. This region overlaps with the strongest signal in the linkage scan and is tagged by a single imputed marker (rs16596357i; Table 2) in the multi-locus analysis accounting for population structure. The major allele in the HWS line (P = 0.67) increases weight by 18.9 (SE 5.6) g, and it still segregates at an intermediate frequency (P = 0.50) in the LWS line. Previously, Ahsan et al. (2013) explored potential candidate mutations in the QTL and found two regulatory SNPs near the peak at 21 Mb (21.6 and 22.7 Mb) and a synonymous-coding SNP in a CpG island in an exon of the insulin-like growth factor binding protein 2 (IGFBP2) gene in the middle of the major association peak at 24.8 Mb.

In addition to the strong association around 24 Mb, the association analysis also highlights two additional regions (centered around 18 and 29 Mb). A single imputed marker (rs14611566i; Table 2) is retained in the first region in the multi-locus analysis accounting for population structure. This marker has an estimated allele-substitution effect of -20.1 (SE 5.1) g, but as it segregates at equal, intermediate frequencies in the HWS and LWS lineages (P = 0.50), it did not contribute to the founder line difference. Two linked imputed markers are kept in the third region (rs10727581i and rs317586448i, 2.6 Mb apart; Table 2). Here, the first marker is nearly fixed for one allele in the LWS line (P = 0.93) but segregates at an intermediate frequency in the HWS line (P = 0.61). At the second marker, the major allele in the HWS (P = 0.74) segregates at an intermediate frequency in the LWS (P = 0.41). Due to the close linkage between the associated markers, it is difficult to interpret their individual effects and to disentangle whether the detected associations are due to the LD pattern of multiple closely linked loci or to a single locus with multiple segregating alleles. The peak in the F2 QTL overlaps the major 23.7-26.4 Mb association peak detected in this analysis (Wahlberg et al. 2009). Due to the small allele-frequency differences between the founder lines at the associated markers in the three regions, their total contribution to the founder line difference is small (8 g), amounting to only about 10% of the original estimated F2 QTL effect of 86 g (Wahlberg et al. 2009).
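As a worked check of the contribution formula from the Methods, the two Growth9 markers whose effects and founder allele frequencies are quoted above give the following; the two remaining linked markers are omitted because their effect sizes are given only in Table 2.

# Worked check of the founder-line-difference contribution for the two
# Growth9 markers whose estimates are quoted above.

def contribution(a, p_hws, p_lws):
    return 2 * a * (p_hws - p_lws)

print(contribution(18.9, 0.67, 0.50))   # rs16596357i  -> ~6.4 g
print(contribution(-20.1, 0.50, 0.50))  # rs14611566i  ->  0.0 g (equal freqs)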
Two statistically independent associated markers in the GGA1 QTL Growth1

The strongest association in the study by Besnier et al. (2011) was found on GGA1 in the QTL Growth1 (Figure 1; 169.6-181.1 Mb). Here, the second strongest association was detected in that QTL. The imputation-based association analysis highlights two significant associations separated by a region of very low association, and both associations remained in the multi-locus analysis accounting for population structure (the imputed marker rs13968052i and the genotyped marker rs14916997, at 170.6 and 173.7 Mb, respectively). The strongest of these association peaks was located near the peak detected using the earlier linkage-based analysis, and several of the significantly associated markers were located in this region (173.6-175.3 Mb). A candidate gene for growth, the asparagine-linked glycosylation 11 homolog gene (ALG11), is located at 174.6 Mb and has a strong mutation in its regulatory region (Ahsan et al. 2013). The second association was to a group of significant markers in a narrower region upstream of the main linkage peak (170.3-171.7 Mb). The association analysis thus suggests that the original 10.6 Mb QTL region, which has its peak between markers located at 175.2-177.7 Mb, is due to the effects of two separate loci located in these confined 1.5 and 1.8 Mb regions. As the two associated markers are closely linked in this population, it is difficult to interpret their individual effects, but their total contribution to the founder line difference (37 g) is about 75% of that of the original Growth1 QTL as estimated in the F2 analysis (49 g; Wahlberg et al. 2009).

Four statistically independent associated markers in the GGA2 QTL Growth2 and Growth3

GGA2 contains two QTL, Growth2 (Figure 1; 47.9-65.5 Mb) and Growth3 (Figure 1; 124.3-133.6 Mb). The multi-locus analyses identified three significantly associated markers in Growth2, where the first two are clustered at 56.7 and 57.2 Mb (the genotyped marker rs14185295 and the imputed marker rs14185836i, respectively), with the last, genotyped marker (rs14196021) located at 65.5 Mb. The distance between the markers, and the region of low association between them in the single-marker analysis (Figure 1), suggest that two linked loci contribute to the Growth2 QTL. In the earlier linkage-based analysis, the strongest signal in Growth2 was located between these markers, at 60.6 Mb. The QTL peak in the original F2 analysis (Jacobsson et al. 2005) is difficult to assess, as the nearest marker (MCW130) is not mapped to the chicken genome and no significant signal was found using a denser marker map by Wahlberg et al. (2009). As the first two markers in the QTL are tightly linked, it is difficult to interpret the individual estimates of their effects; however, the third marker, located 8 Mb upstream of them, has a small independent effect. The estimated contribution of these loci to the founder line difference is small (14 g), which amounts to about 30% of the original contribution of the Growth2 QTL in the F2 analyses (Jacobsson et al. 2005). In Growth3, a single association was detected to a genotyped marker (rs16120360) in the multi-locus analysis, and this peak was inside the original F2 QTL (101.6-131.9 Mb; Jacobsson et al. 2005).
Although the linkage between the markers again makes it difficult to obtain stable estimates for the effects of individual markers in the two associated loci, their estimated contribution to the founder line difference (16 g) amounts to about one third of that estimated effect in the F 2 (Jacobsson et al. 2005). Six statistically independent associated markers in the GGA3 QTL Growth4 In the QTL Growth4 on GGA3 (Figure 1; 24.0-68.1 Mb), both the association and linkage analyses identify a broad region of association from 24 to 41 Mb. Although the statistical support curve in the linkage analysis contains multiple peaks, that analysis was unable to fine-map the region into multiple, independent signals. Here, the multi-locus analysis suggests that perhaps up to four independent regions contribute to this QTL, with one associated genotyped marker at 26.2 Mb, three imputed markers from 33.7-39.1 Mb, one imputed marker at 47.7 n Table 2 Estimated additive effects and standard error for experiment-wide independent association signals, between body weight at 56 d of age and genotype, identified in a bootstrap-based approach implemented in a backward-elimination model selection framework across the markers in the genotyped QTL For a marker with a positive estimated additive effect, the effect on weight is caused by the allele with its origin in the line associated with the sign of the effect, i.e., an allele with its origin in high-line is associated with an increase in body weight and an allele with its origin in low-line is associated with a decrease in body weight. In cases where a weight-increasing allele has its origin in the low-line or a weight-decreasing allele has its origin in the high-line the sign of the estimated additive effect will be negative. a Gallus Gallus Autosome. b QTL name as in Jacobsson et al. (2005). c Base pair position according to Chicken genome assembly (galGal3) of May 2006. d SNP name as in NCBI dbSNP where imputed markers are labeled with i after the marker name. e Difference in allele-frequency between the HWS and LWS founder lines. f Additive effect 6 SE calculated in a model including all loci in the table. g Significance of the estimated additive genetic effect in a model including all loci in the table. Mb, and one at 57.6 Mb. The single-locus association analysis highlights two particularly strong and distinct association-peaks located approximately between 24-27 and 33-37 Mb, respectively. A candidate mutation in Growth4 was found near the second association region at 33.6 Mb inside the regulatory region of Cysteine rich transmembrane BMP regulator 1 (CRIM1) (Ahsan et al. 2013). The associated region around 55-57 Mb displayed very low significance in the previous linkage analysis. The outmost markers (26.2 and 57.6 Mb) have allelesubstitution effects of 19.8 (SE 5.4) and 18.2 (SE 4.1) g, respectively, and are rather diverged between the lines (50% difference between the lines). For the other two clusters of markers, it is difficult to obtain stable estimates of their individual effects. Their estimated joint contribution to the founder line difference (35 g) is about 65% of that in the original F 2 analysis (Jacobsson et al. 2005). Four statistically independent associated markers in the GGA20 QTL Growth12 The earlier linkage analysis replicated the QTL Growth12 on GGA20 (Figure 1; 7.1-13.9 Mb), with the strongest associated marker at 10.7 Mb, and the signal covered most of the region (8-13.9 Mb). 
Four statistically independent associated markers in the GGA20 QTL Growth12

The earlier linkage analysis replicated the QTL Growth12 on GGA20 (Figure 1; 7.1-13.9 Mb), with the strongest associated marker at 10.7 Mb and a signal covering most of the region (8-13.9 Mb). Four markers were significant in the multi-locus analysis; three imputed markers were located in the main single-marker association peak covering the region from 9 to 11 Mb, while the fourth associated imputed marker was located about 4 Mb upstream (13.4 Mb). Again, it is difficult to interpret the individual effects of the tightly linked markers; however, their estimated contribution to the line difference (22 g) is about 75% of that estimated in the F2 analysis (Jacobsson et al. 2005).

Five statistically independent associated markers in the GGA4 QTL Growth6 and Growth7

In both Growth6 (Figure 1; 1.3-13.6 Mb) and Growth7 (Figure 1; 85.4-88.9 Mb) on GGA4, several markers were significant in the multi-marker analysis. The single-marker analysis illustrates that these markers tag association peaks located very close to the main peaks in the earlier linkage-based analysis, suggesting that both analyses identified the same underlying loci. The association analysis identified two genotyped markers in a region in Growth7 with strong association around 86.5-88.5 Mb. It is difficult to interpret their individual effects due to the close linkage, but their estimated contribution to the founder line difference was small (13 g), amounting to about a fifth of that estimated in the F2 population (66 g; Jacobsson et al. 2005). In Growth6, the multi-marker analysis detected associations to two imputed and one genotyped marker, representing one locus at 2.3 Mb and a second locus at 10.9-13.5 Mb. Here, the tight linkage in the second locus also makes interpretation of the individual effects difficult, whereas the allele frequencies are not that differentiated at the first locus. Their estimated contribution to the line difference is therefore small (17 g), amounting to about a fifth of the 92 g estimated in the original F2 analysis (Jacobsson et al. 2005).

General comments

Here, we report the results from using an imputation-based association-mapping strategy to fine-map QTL in a nine-generation, outbred AIL. By combining high-density genotyping of the AIL founders with imputation throughout the rest of the pedigree, utilizing a sparser genotyped marker backbone, we increased the marker density 20-fold in the studied regions. The subsequent association analysis had a power for QTL replication comparable to that of the previously used linkage-based strategy. In addition, the new analyses also detected multiple association peaks in several of the QTL and narrowed the associated regions considerably compared to the regions detected previously. Together, these results suggest that this imputation-based association-mapping approach is a promising strategy for improving the resolution of fine-mapping studies in outbred pedigrees where high-density marker genotypes are not available for all studied individuals.

When interpreting the full results from the multi-locus backward-elimination analysis (Table 2), it should be noted that the results are reported at a 20% false discovery rate. Although a significant proportion of the markers could thus be false-positive associations, as illustrated by the individual P-values for the additive effects of the associated markers, most of the peaks on the chromosomes also contain markers with more significant individual associations.
We used this threshold because earlier mapping and replication studies had confirmed that the QTL contain at least one small-effect locus contributing to 56-d weight, and because the major aim of this study was to provide an overall view of the most likely genetic architecture of the fine-mapped QTL, rather than high-confidence estimates for the individual regions.

In both Growth1 and Growth4, two strong, distinct association signals were identified. Also in the QTL Growth8 and Growth9, the new analysis identified strong association peaks covering many markers. In these regions, the strongest linkage signals identified in the previous fine-mapping analysis overlap with the strongest signals in the current analyses. However, the association analysis also separates the signals into multiple peaks and highlights narrower regions. Hence, it provides more useful input for further analyses aimed at identifying candidate genes underlying the QTL. In most cases, the associated regions are restricted to distinct 2-3 Mb regions, which, as indicated by the findings of Ahsan et al. (2013), is useful for restricting the bioinformatics analyses to only the most promising candidate genes for further functional studies. In Growth6, Growth7, and Growth12, the association signals were not as significant as in the other QTL. Despite this, the multi-locus analyses suggest that the linkage signals in the earlier analyses were due to distinct loci with independent effects, mapped here into narrower association peaks. Overall, the locations of the association signals in this study overlapped well with the top signals in the earlier linkage analyses. However, in two of the QTL (Growth2 and Growth3), the association peaks are shifted when comparing the results of the two studies. Further work is needed to explore whether this reflects separate loci with distinct genetic architectures that could only be detected with the respective methods, or whether they reflect a signal from the same underlying causal locus.

Here, we estimated the additive genetic effects of the fine-mapped regions using data from the F2-F8 generations of the AIL. To evaluate whether they were in general agreement with estimates obtained for the same regions in earlier studies, we compared them to the estimates obtained from our first large F2 population (Jacobsson et al. 2005; Wahlberg et al. 2009). The QTL effects were generally lower in the F2-F8 data than in the F2. Although this may be interpreted as the F2 estimates being inflated, several other factors should also be considered. First, the 56-d body weights were considerably lower in the F8 generation because younger dams were used to generate these birds. In the analyses, a fixed effect of generation was used to account for the mean weight differences between generations. However, it did not account for the likely scenario that the QTL effects were smaller in the F8 birds due to their lower body weight. As about 30% of the birds in the pedigree are from this generation, this would bias the overall effects downward. Standardizing the phenotypes from different generations to the same mean and variance is one way to possibly account for this, but a caveat of that approach is the introduction of an upward bias in the effects if the QTL effects in the F8 are, in fact, not that much smaller. We therefore chose to report the more conservative estimates based on analyzing the non-standardized phenotypes. Second, eight of the nine QTL contain fine-mapped regions with associations to several tightly linked markers.
If these markers are located on the same haplotypes, it is not possible to disentangle their effects in this pedigree, as too few recombination events have accumulated in the F2-F8 generations of the AIL, and due to such collinearities the estimates for the individual markers reported here would not properly describe the contributions of these haplotypes to the line difference. In several of the regions with multiple associated markers, the estimates of the additive effects were also negative for at least one of the markers. Although this could be interpreted as transgression being common in the population, we find it more likely that these estimates result from the collinearities among the closely linked markers. Further analyses utilizing, for example, later AIL generations, markers that specifically tag the haplotype structure of the founder lines, and methods that can account for multi-allelic genetic architectures will be needed to disentangle the genetic architectures of these loci and quantify their contributions to the founder line difference. Third, the F2 QTL estimates were obtained using a line-cross analysis in which it is assumed that the founder lines are fixed for alternative QTL alleles (Jacobsson et al. 2005; Wahlberg et al. 2009). In the current association analysis, it is instead assumed that the alternative alleles at the tested markers tag nearby functional alleles. As none of the associated markers were fixed for alternative alleles in the founder lines (Table 2 and Table S1), the current fine-mapping analysis suggests that one, or both, founder lines segregate for multiple functional alleles in the QTL.

To compare the estimates from the line-cross analysis in the F2 with those from the association analyses in the AIL F2-F8 generations, they need to be compared using a common reference. Here, we did this by estimating how much the associated markers in each QTL would be expected to contribute to the founder line difference under the assumption that they act completely additively. Under this assumption, their contribution equals twice the sum of the allele-substitution effects of the markers in a QTL, weighted by their respective allele-frequency differences between the founder lines. That is, if the markers are fixed for alternative alleles in the founder lines, they contribute two allele-substitution effects to the founder line difference, whereas they contribute nothing if the allele is present at equal frequencies in both founder lines. This estimate is conservative because, for example, dominance leads to an underestimation of the contribution of a locus: in the presence of a dominant allele, one line does not need to be fixed for the allele to contribute most of its effect, because the effect is also displayed in heterozygotes for that allele. When comparing the estimates this way, the combined effects of the associated markers in each of the QTL contribute from 10 to 75% of the effects estimated in the F2 by Jacobsson et al. (2005). In total, the QTL replicated here contributed 171 g to the founder line difference, compared to 416 g in the F2. As discussed above, further analyses in other populations, with more informative genetic markers and other statistical methods, are required to explore this further. Our analyses suggest that there is extensive within-line segregation in the QTL regions.
One possible explanation for the slow fixation at these loci could be that the beneficial alleles at the linked fine-mapped loci were located on different haplotypes at the onset of selection. Due to the low selection pressure on each QTL region resulting from the highly polygenic architecture of the selected trait, the close linkage between the loci contributing to the QTL, and the small effective breeding population, the probability that beneficial recombinant haplotypes are selected and increase in frequency in the population should be low. An alternative explanation could be that the effects of the linked loci depend on the genetic background (epistasis) or on dominance, which might have affected the selection pressure on the individual contributing loci. As we did not explore the contributions of dominance or epistasis in this study, further work would be necessary to evaluate their contributions to the low fixation in the QTL.

A key to successful imputation of the high-density marker set throughout the AIL pedigree is that the haplotypes across these markers are correctly estimated in the founders. Several properties of the Virginia lines improve haplotype estimation from high-density genotypes. First, as the number of generations since the lines diverged is relatively small (40 generations), most new haplotypes will have resulted from recombination of the original haplotypes rather than from new mutations. Second, the strong artificial selection imposed on the populations since they were founded is likely to have further reduced haplotype diversity across the genome. This is likely the reason that many selective sweeps across long haplotypes have been found to be fixed, or nearly fixed, across the genome within and between the lineages (Johansson et al. 2010; Pettersson et al. 2013). This is reflected in a large average LD-block size (> 50 kb) across the genome (Marklund and Carlborg 2010). Given the density of the 60K SNP-chip genotyping used here, several markers will be present on each such LD block and will hence improve the efficiency of haplotype estimation. Additional genotyping will, however, be necessary in subsequent generations to experimentally confirm the associations to imputed markers reported here. Genotype data are available for all individuals in the AIL pedigree. The dense marker backbone (1 marker/cM) from the first genotyping of the AIL allows the relatively long haplotypes that are inherited as intact segments from parents to offspring to be efficiently phased, imputed, and traced throughout the pedigree for later association analyses.

The highly polygenic genetic architecture in this population is consistent with what has been revealed in other fine-mapping analyses in deep intercrosses (Parker et al. 2011) and chromosome substitution strains (Bevova et al. 2006) involving intensively selected mouse populations. Recent work on a mouse population that has evolved an extreme body size in nature has also uncovered a highly polygenic architecture of adaptation (Gray et al. 2015), illustrating that complex genetic architectures are likely to be involved in responses to both natural and experimental selection. Further, our detection of multiple associations to nearby markers in our AIL is also consistent with reports from other AIL-based fine-mapping studies in chickens from outbred base populations (Van Goor et al. 2015) and from association studies within and across cattle breeds (Saatchi et al. 2014).
Subsequent studies will help to elucidate whether the genetic architecture underlying associations detected to linked markers in this and other outbred populations is primarily due to the segregation of multiple haplotypes in the outbred founder populations and breeds, or a reflection of several tightly linked functional polymorphisms.

Here, the association analysis was performed using a linear model including fixed effects of genotype, sex, and AIL generation. Sex and generation were included because both of these environmental factors had significant effects on BW56. Implementing the model selection by backward elimination in a bootstrap-based framework is a way to account for possible effects of population structure in the AIL that might otherwise increase the risk of reporting false positives. However, since the association signals in most cases overlap well with the final marker set resulting from the testing of experiment-wide significant associations, we do not find this to be a cause of great concern in this experiment.

Conclusions

In conclusion, this study shows that the proposed imputation-based association-mapping strategy, with further model selection by backward elimination in a bootstrap-based framework, is useful for identifying independent association signals within and across the nine evaluated QTL. The association peaks were narrower than those obtained in the earlier linkage analysis, often highlighting regions down to 2-3 Mb in length and allowing the identification of multiple association signals in several QTL. This suggests that the association-based strategy has higher resolution, and provides improved power to disentangle the effects of multiple linked loci inside QTL, compared to linkage-based fine-mapping. Combining traditional linkage-based approaches for analyzing outbred advanced intercross populations with imputation-based association-mapping approaches might thus be an important and cost-effective way to improve the efficiency of post-association bioinformatics analyses and functional explorations aiming to identify candidate mutations. A previous candidate-gene study based on the nine QTL fine-mapped here has already reported some interesting mutations in growth-related genes (Ahsan et al. 2013) overlapping with the association signals reported here. Further bioinformatics investigation of the regions fine-mapped here could potentially reveal new important genes and mutations affecting body weight in these chicken lines, and provide new candidate genes for studying the genetic architecture of metabolic traits in other species, including humans.
On the ontological assumptions of the medical model of psychiatry: philosophical considerations and pragmatic tasks.

A common theme in the contemporary medical model of psychiatry is that pathophysiological processes are centrally involved in the explanation, evaluation, and treatment of mental illnesses. Implied in this perspective is that clinical descriptors of these pathophysiological processes are sufficient to distinguish underlying etiologies. Psychiatric classification requires differentiation between what counts as normality (i.e., order) and what counts as abnormality (i.e., disorder). The distinction(s) between normality and pathology entail assumptions that are often deeply presupposed, manifesting themselves in statements about what mental disorders are. In this paper, we explicate that realism, naturalism, reductionism, and essentialism are core ontological assumptions of the medical model of psychiatry. We argue that while naturalism, realism, and reductionism can be reconciled with advances in contemporary neuroscience, essentialism - as defined to date - may be conceptually problematic, and we pose an eidetic construct of bio-psychosocial order and disorder based upon complex systems' dynamics. However, we also caution against the overuse of any theory, and claim that practical distinctions are important to the establishment of clinical thresholds. We opine that as we move ahead toward both a new edition of the Diagnostic and Statistical Manual and a proposed Decade of the Mind, the task at hand is to re-visit nosologic and ontologic assumptions pursuant to a re-formulation of diagnostic criteria and practice.

Introduction

Psychiatry is uniquely problematic because debates over what mental disorders are have presented substantial challenges to medical praxis and ethics. In many ways, the question of what constitutes a mental disorder is related to uncertainties about the nature of mental experience, and the underlying relationship(s) of body, brain and mind. Traditionally, medicine has been successful in establishing the etiology of diseases and disorders, and in developing focal therapies based upon such mechanistic conceptualizations. The acts of medicine (i.e., diagnosis, therapeutics, and prognosis) depend upon the ability to distinguish between what is "normal" and what is pathologic, and the evolution and practice of psychiatry has attempted to adopt and utilize the medical model in this regard. Yet, as neuroscience probes ever deeper into the workings of the brain, it becomes evident that the "mind" remains somewhat enigmatic, and thus any attempt to link mental events to biology must confront what Chalmers has referred to as the "hard problem" of consciousness [1]. Given the continued ambiguity of the brain-mind relationship, unresolved questions remain: 1) how can, and perhaps should, psychiatry proceed to formulate a viable system of characterizing mental normality and abnormality, and 2) how might such a formulation affect the scope and tenor of psychiatric practice? As several papers in this journal have shown, such questions are not esoteric or merely academic.
Rather, in light of 1) ongoing progress in genetics and neuroscience; 2) the development and tentative articulation of a forthcoming Decade of the Mind; and 3) proposed healthcare reforms that are based to a large extent upon diagnostic classifications, these questions reveal genuine challenges, and form the groundwork upon which a new diagnostic schema (if not a new Diagnostic and Statistical Manual), and a definition of the psychiatric profession and its practice, might be constructed.

Problems in Psychiatric Diagnosis

Horwitz asserts that "because [diagnostic psychiatry] uses symptoms to classify disorders, it also categorizes an enormous diversity of human emotions, conduct, and relationships as distinct pathological entities" [2]. At first blush, such an approach seems logical, because precise diagnostic classifications can presumably distinguish between particular disease states and offer reliable information about etiology, prognosis, and treatment. In The Myth of Mental Illness, Szasz disputed psychiatry's claims of medical legitimacy. Szasz was concerned about the validity of psychiatric concepts, and his critique raised questions about the evaluative nature of the psychiatric enterprise. To Szasz, psychiatry utilized terms (such as delusions, compulsions, and obsessions) that lacked the descriptive objectivity of other domains of medicine. Szasz did not deny that neuroanatomical lesions could result in dysfunctional behaviors; however, such abnormality is, strictly speaking, a brain disease. Labeling various forms of behavior as pathological "... rests on a serious, albeit simple, error: ... mistaking or confusing what is real with what is imitation; literal meaning with metaphorical meaning; medicine with morals" [3]. If psychiatry lacked terms that could definitively individuate normality from pathology, how could psychiatrists issue seemingly objective diagnoses and prognoses while relying on a predominantly subjective (and elastic) epistemology?

This conceptual tension in psychiatry mirrors larger debates about objectivity and normativity in the philosophy of science. In The Structure of Scientific Revolutions, Thomas Kuhn argued that science does not operate within an Archimedean framework, but instead is sensitive to the normative practices of social communities [4]. Scientists (and clinicians) undergo training and develop expertise within localized academic institutions. As a consequence, intellectual traditions tend to bind scientists and clinicians within a coherent community of practitioners. Kuhn noted that members of a particular academic community tend to hold similar constructs and values about what constitutes a good theory, and that these values are largely assumed, unquestioned, and maintained as valid within the group. For Kuhn, at least, the collective nature of scientific theory-building suggested that communities' values matter in the content of scientific discourse and theorization (and, we might add, clinical practice). Postmodern criticisms of science generally impugn this relativistic bent, and pose the question: if science evolves within a cultural frame (just like other ideologies), then in what sense is it immune from the normative practices of society [5]? The crucial issue is not whether the unique status of science (and by extension, clinical medicine) hinges on cultural biases, but whether its epistemology is better than other ideologies at obtaining knowledge about the natural world.
All ideologies manifest hegemonic assumptions about the nature of reality and being. However, unlike other ideologies, science also values a self-correcting process through which increasingly refined and robust characterizations of the natural world can be made over time. If new observations become difficult to reconcile with standing hegemonic beliefs, then those initial assumptions are usually abandoned. Thus, scientific epistemology allows for large-scale reorganization of ontological assumptions, or what Kuhn called "paradigm shifts" [4].

In applying this framework to the medical model of psychiatry, we see a reliance upon four main ontological assumptions. These are 1) Realism: the claim that mental properties (such as desires, beliefs, and thoughts) are real phenomena and not merely artifacts of socio-cultural norms; 2) Naturalism: the concept that disturbances in neural structures are causally implicated in the formation and persistence of mental disorders; 3) Reductionism: the view that, at some level, disturbances in neural structures are necessary to account for mental disorders; and 4) Essentialism: the assertion that mental disorders have underlying "essences" that allow distinction of one type from another. Are each and all of these assumptions warranted and necessary in order to arrive at a valid concept of mental disorder? We assert that naturalism, realism, and reductionism are reconcilable with advances in contemporary neuroscience, but that essentialism has proven to be, and may still be, somewhat more problematic vis-à-vis the medical model of psychiatry, at least to date. Let us examine each of these assumptions in turn.

Realism

The realist position asserts that terms used in scientific theories map onto actual properties in the external world, even if the relevant phenomena are not necessarily observable. So, for example, sodium-gated ion channels and serotonin receptors do, in fact, exist. Their existence is not predicated upon our ability to perceive them through our senses. Another important aspect of realism is that properties referred to by scientific theories are independent of our linguistic practices or socio-cultural norms; hence, the amino acid glycine will always have a hydrogen atom as its side chain. This description holds true regardless of human circumstance. Realism entails that a mental realm does not exist separately from the physical, and so an acceptance of realism necessitates a rejection of dualism. Simply put, there is no ontologically separate mental world, independent of its physical instantiation in the brain. The idea of an overriding mind, metaphysically independent of the brain, becomes untenable when we realize that lesions to various regions of the brain have profound consequences for subsequent subjective experience. How would the mental realm causally interact with an aphasic's brain, given the loss of linguistic capabilities due to an insult to the superior temporal gyrus or Broca's area? Similarly, how are we to account for the gradual loss of cognitive function in patients with Alzheimer's disease? To experience disease is to be in a certain experiential state. To use a rather overplayed computational metaphor, to have such an experience requires that one have the requisite "hardware" (brain) and "software" (mind). A rejection of dualism would logically mean that all mental disorders are (in some way) biologically based.
This tenet claims that every mental process, pathological or otherwise, arises in and from the brain [6]. It is important to note that nothing has been claimed about how neural structures causally produce mental states (naturalism), or whether mental states are best understood through their more basic, physical components (reductionism).

Realism has been a rather controversial assumption in the philosophy of psychiatry. An objection to the realist case is that there is no reason to claim that mental properties, such as beliefs, doubts, desires, and fears, actually exist in the natural world. Moreover, as a matter of fact, such mental properties do depend on the normative constraints of local communities. According to Cash, "...people's intentions, beliefs, thoughts and decisions are different in kind, not just in scale, from causal mechanisms in the brain. The nature of this 'difference in kind' can be revealed by considering the nature of the public criteria we use to ascribe intentional states to one another" [7]. The veridicality of intentional states often depends upon the requisite conditions; intentional states can mean or be about something. The property of aboutness cannot be mapped onto reality in any law-like way.

One can sidestep this criticism by noting that realism is best approached as an epistemological constraint. It is not the case that the tentative plausibility of a certain theoretical term commits us to finding its "real world" equivalent. The validity of theoretical terms, that is, their ability to appropriately map onto real-world properties, is completely contingent on the congruency of the associated theory with other established scientific principles. Critics of realism often conflate the object of scientific knowledge with the process of knowledge construction. Fundamentally, science is an interpretative process; it is something people do. Given that science is a project of collaboration, it is empirically impure, relying on built-in explanations that become embedded in the process of theory development. This does not mean that science is merely a by-product of cultural practices. Roy Bhaskar articulates the problem in this way: "[M]en in their social activity produce knowledge which is a social product much like any other, which is no more independent of its production and the men who produce it than motor cars, armchairs and books... and which is no less subject to change than any other commodity. This is one side of 'knowledge'. The other is that knowledge is 'of' things which are not produced by men at all: the specific gravity of mercury, the process of electrolysis, the mechanism of light propagation. None of these 'objects of knowledge' depend upon human activity. If men ceased to exist sound would continue to travel and heavy bodies fall to earth in exactly the same way, though ex hypothesi there would be no one to know it" [8]. Knowledge, in the form of theories and explanations, is interpretational and should be regarded as a changeable social product. This does not mean that the object of any such knowledge is always dependent upon socio-cultural constructions. Science describes entities of nature, but "proof" comes through our success in interpreting, interacting with, manipulating (and often, controlling) them.

Naturalism

Naturalistic theories of mind generally assume that mental properties, such as thoughts or beliefs, are derived from neurobiological structures in a causally relevant way.
In order to legitimize the naturalistic characterization of a mental disorder, the observed clinical expressions of behavior should have causal roots in biology. This is not to claim that all mental behavior should be understood only through biology, but rather that we - as dynamic organisms within complex environments - will undoubtedly be influenced by a variety of interacting variables, including biology. A pressing question for naturalistic theories is how, exactly, neurobiological disorders can be causally linked to certain behavioral outcomes. The steps implicated in the causal chains from the biochemical to the behavioral level(s) are vast and endless, and as Hume noted, we cannot "see" causation [9]. In science, we observe event regularities, and if such regularities occur with sufficient frequency, we tentatively accept these observations as truly causal. Such observations are affirmed through the use of statistical theories, which provide a mathematical measure for the probability of an event occurring solely by chance. While the development of statistical methods has refined the scientific process, the act of establishing causal relationships in the world long predates the development of statistics, or even mathematics. Such reasoning is possible because human beings have the capacity to reason inductively and infer logical relationships from data in, and obtained from, the environment. Children as young as three years old can make appropriate judgments about novel stimuli and causally link processes they have only observed in operation [10]. These types of observations have prompted many philosophers (since Hume) to posit that causality can, at best, be understood as event regularities. We cannot determine by reasoning alone which of the observed (or potentially unobserved) effects actually cause the phenomena in question. To arrive at such conclusions, however, is to be led astray by words. As Ross states, "... to the extent that we have culturally universal intuitions about causation, this is a fact about our ethology and cognitive dispositions, rather than a fact about the general structure of the world" [11]. In other words, naturalistic intuitions are not evidence of their content.

Reductionism

Over the last few decades, neuroscience has elucidated a biological basis for several mental disorders. These developments have fuelled the quest to explain mental properties by reducing them to an interaction of their putative substrates. Given that interactions of neurobiological structures are causally implicated in aberrant behavior, a logical paradigm would grant underlying genetic and biochemical entities explanatory primacy. Subjective experience and cultural influences can play a role in psychiatric disorders, but the "true" explanatory locus would rest in pathological structures and functions. Many of these overly reductionist tendencies can be assuaged by revisiting some of Dennett's work, which attempts to clarify the relations and predictions of mentalistic behavior through the use of three levels of explanatory abstraction [12]. The first is the Physical Stance, in which behavior could be predicted, in principle, from physical laws governing the interactions of material components. The second is the Design Stance, which predicts behavior not from an understanding of the physical constitution of the mind, but through an understanding of the mind's purpose, function, and design.
The final level of abstraction is the Intentional Stance, which requires neither an understanding of the physical constitution of the mind nor any design principles, but instead predicts behavior by considering what moves a rational agent would make in a given circumstance. The brain and its potential representations are a primary focus of neuroscience, and neuroscientific information sustains both an evolving philosophy of mind and the profession and practice of psychiatry. But it is important to recall that neuroscience, as a science, remains a process, and insofar as people are working on the common project of explanation, the objects of knowledge need to be interpreted. Normativity cannot be expunged from science, nor should it be. We make sense of the world and explain it with our theories, and it is inevitable that practical considerations will play an important role in theory choice. This means that reductionism need not be the raison d'être for the naturalistic project, but neither should it imply that reductionism is not possible, in principle. It is important to note that defining mental content in this way becomes a practical consideration. Accordingly, behavior can be interpreted using a level of abstraction that depends upon the needs of the investigator (and/or clinician).

Essentialism

A more controversial ontological assumption of the medical model of psychiatry is essentialism. This is the claim that psychiatric disorders, as defined by clinical nosology, map onto reality in a discrete way, and that these disorders possess essential properties without which they would not be what they are. We argue that this assumption is highly questionable, that as currently conceived it is anachronistic at best and inconsistent with scientific thinking at worst, and that it is therefore in need of re-examination and revision.

Science routinely organizes its body of knowledge into categories. How we sort things into categories largely depends on what measures we value. That is, we classify objects for a particular reason or to serve a specific function; to these ends, classification schemes cannot be arbitrary or random assortments. As Sadler notes, "...this non-arbitrariness is essential to a classification because it provides the basis for users with common purposes to talk about the same things. For us to discuss 'major depression' productively, we have to agree, in large part, about what major depression is, and in what practical context such a notion arises" [13]. An important concern for classification is the concept of validity. The validity of a category is related to the degree that it fits within a consonant body of explanatory theories. So, to group lungfish and cows in a similar category would require that there are genuine motivations for doing so. If one were an evolutionary biologist, such a grouping would align with what is known about macro-evolutionary processes. If one were a fisherman, the validity of such a pairing would seem impractical.

A criticism of the construct of essentialism is found in the later work of Ludwig Wittgenstein. Summarizing the Wittgensteinian view, Garth Hallett writes: Suppose I show someone various multi-coloured pictures, and say: "The colour you see in all these is called 'yellow ochre'"... Then he can look at, point to, the common thing.
But "compare this case: I show him samples of different shades of blue and say: "The colour that is common to all these is what I call "blue"."' Now what can be looked at or pointed to save the varied hues of blue? And don't say, "There must be something common, or they would not, be called 'blue,"' "but look and see whether there is anything in common at all" [14]. The crucial argument here is that the property of "blue" is reliant, to some extent, upon practical considerations and constraints. Yet, a form essentialism persists in psychiatry. This is clearly articulated by Robins and Guze who claim that, "...the finding of an increased prevalence of the same disorder among the close relatives of the original patients strongly indicates that one is dealing with a valid entity" [15]. In this framework, genetic and biochemical factors are attributed as primary causes, and the role of psychiatry is to locate these pathological qualities within the physical brain. While experience does play a role in one's mental health, this model is decidedly oriented toward brain function. In this way, genetic and biochemical causes are seen as exerting their influences uni-directionally and any/all manifest symptoms are the consequence of unique and individuated etiologies. The medical model of psychiatry views the current classifications as representing discrete organic disease states as opposed to heterogeneous symptom clusters. Validation of these symptom clusters often occurs via post-hoc quantitative and statistical analyses (such as hierarchical cluster analysis or pattern recognition paradigms) of the clinical data to ascertain which combinations of symptoms tend to group together. The problem with creating these types of discrete definitions for many contemporary psychiatric conditions is that "...no amount of clustering can get around the fact that several variables used in such models may have little or no biological plausibility" [16]. Without clear biological mechanisms, it is unclear whether symptom clusters represent different ways of labeling the same affliction, socio-cultural influences, or other biological confounds. Peter Zachar and Nick Haslam have presented a strong case that psychiatric categories do not uniformly individuate to underlying essences, but are defined, to a large part, by practical considerations [17][18][19][20][21][22][23][24]. In many ways, this recalls the Szaszian argument for mental illness as "myth" -here literally used to denote a practical, explanatory narrative. We do not refute, or even doubt that practical considerations are important to define the threshold(s) at which a particular set of signs and symptoms may be deemed clinically relevant. But, if we are to regard essentialism as critical to the medical model of psychiatry, and adopt practice standards in accordance, then the task at hand is to establish how and what essential criteria are pertinent to any construct of normality and order (versus abnormality and disorder), as relates to brain function, mental processes and expressions of cognition, emotion and behavior (within a social milieu). Toward this end, we have posited that one such "essential" element of normality is non-linear adaptive properties within and between particular brain networks; thus progressive linearity would be aberrant and could manifest effects from the cellular to the cognitive-behavioral (and even socio-cultural) levels [25]. In this way, mental disorders would occur as a spectrum of possible effects. 
We maintain that particular genotypic factors predispose endo- and exophenotypes that are differentially expressed through interaction(s) with internal and external environmental influences throughout the lifespan, thereby grounding neuropsychiatric syndromes in underlying biological factors [25,26]. This acknowledges causal determinants of psychiatric disorders (at least at formal and material levels), and, while accepting a form of token physicalism (i.e., that particular mental events occur as a result of some physical function(s) or dysfunction(s)), allows for appreciation of both emergence and the bio-psychosocial influence of environments. As well, the spectrum disorder concept satisfies the criteria that define the medical model (i.e., realism, naturalism, reductionism, essentialism). In this light, a spectrum disorder can be considered to 1) involve neural substrates (i.e., realism); 2) represent a disturbance in the natural function of the substrate(s) or system (i.e., naturalism); 3) be a perturbation or disruption of some underlying and/or contributory component(s) of the bio-psychosocial organism (i.e., reductionism, in this case as token physicalism); and 4) manifest a particular "eidos" that defines its aberrant qualities - in this case, the progressive loss of non-linear adaptability and the resultant effects on neural function, cognition, emotion and behavior (i.e., essentialism).

Conclusion

Psychiatry has increasingly adopted a categorical approach in delineating mental disorders. This has been beneficial insofar as the defined categories reflect clear and well-understood biological mechanisms. For certain psychiatric conditions, such as schizophrenia, bipolar disorder, and other psychoses that involve clear dysfunctions of mechanisms that regulate perception, cognition, and communication, a categorical approach may be reasonable [2]. Human beings, however, have a range of behaviors whose normality or pathology is constrained within certain socio-cultural niches. Various phobias, compulsions, obsessions, and emotions cannot easily be explained by a singular biological mechanism. As well, manifestations of the same condition may be the result of heterogeneous mechanisms working in concert.

Essentialism is evidently important to the medical model, and as such persists in contemporary psychiatry. One of the central tenets of essentialism is the existence of natural kinds. According to Zachar, a natural kind is "...an entity that is regular (nonrandom) and internally consistent from one instance to the next" [24]. That is, once the property that captures the essence of a specific natural kind is known, that property can identify any other prototypical instantiation of that kind with accuracy. But if a category cannot be identified with respect to its essential properties, then such a category is not, in the strict definitional sense, a natural kind, but an artificial category. Rom Harré argues that the philosophy of science is such that the idea of a 'natural kind' is a fancy, and that a 'natural kind' is a concept which can only be understood within the double framework of practice and theory [27]. The validity of a category is contingent upon how well it integrates within a diverse, multidimensional system of fact(s) and explanation(s).
While the theoretical context of the kind determines, via appropriate hierarchical explanations, what properties constitute an entity's essence, it is the practical context that distinguishes accidental properties from essential ones and, we opine, perhaps more importantly, what extent of properties will be deemed relevant to regard and guide action(s). To be sure, physiological systems function and interact nonlinearly over a wide range of spatial and temporal scales. As Goldberger notes, "...the combination of nonlinearity and non-stationarity, more the rule than the exception in the output of physiologic systems, poses a major challenge to conventional bio-statistical assessments and standard reductionist modeling stratagems" [28]. Biological systems (including the embodied brain-mind) display complex network properties, and behavioral processes are often best characterized as non-linear interactions between physiological systems and the environment [29]. The extent to which the activity of the system as a whole reflects the response(s) of its component networks will vary based upon the condition of the system and its sensitivity, and the relative attractors and constraints that exist; each and all of these may be differentially expressed in certain individuals at various points throughout the lifespan. Moreover, there is evidence to suggest that the activity and response-parameters of constituent parts and networks (i.e., "bottom-up" effects) may be responsive to, and affected by, the activity of the entire system as a whole, inclusive of the psycho-social factors in which it is nested (i.e., "top-down" effects) [30]. Therefore, it remains an open question whether there are essential parameters that characterize these nonlinear dynamical patterns. We believe that the aforementioned refined eidetic conceptualization shows some promise, and in this way might provide a "missing link" between the medical model and psychiatry.

Further research in neuropsychiatry will need to reassess the role of spatial and temporal scales in diseased organisms. Mental disorders, like all other dysfunctions, are processes that unfold through time. It is important to heed Ghaemi's advice and recall that etiology is not a binary issue, but instead involves elements of degree [31]. In light of this, we posit that one of the benefits of the spectrum concept is that it allows categorization of mental disorders according to the extent and type(s) of relatedness conferred by 1) common genetic risk and predisposing factors, 2) dysfunction of shared substrates and networks, and 3) benefit from types of treatments that have identifiable effects/actions. An understanding of mental normality and pathology necessitates an approach that embeds it in the complex spatial and temporal processes of life. Yet, we must be cautious: despite the attractiveness and popularity of complexity science, it is important to ground any such account in well-established fact(s), and to appreciate the limits of what is known and unknown. As Jaspers noted, "every concrete event - whether of a physical or psychic nature - is open to causal explanation in principle, and psychic processes too may be subjected to such explanation. There is no limit to the discovery of causes and with every psychic event we always look for cause and effect" [32], but he also adds that "...reality is seen through the spectacles of one theory or another.
We have therefore to make a continual effort to discount theoretical prejudices... and to train ourselves to pure appreciation of facts... every advance in factual knowledge means an advance in method..." [33]. At some point, the distinction between what is normal and abnormal, ordered and disordered, will need to be made, and any such distinction must be practical in the sense of its viability to sustain the good of patient-centered clinical care. Therefore, it may be that the task (for the Decade of the Mind project, the development of the DSM-V, and for psychiatry, if not medicine, writ large) is to clarify how syndromes are related (within various spectrum disorders), and to adapt or create a classification scheme, nomenclature (and thus ontology) that communicates the meaning and value of taxonomy and diagnosis. Whether an attempt to elucidate the "natural basis" of mental function and dysfunction will serve such practical ends remains to be seen, and thus this goal remains a work in progress.
Natural resource governance in lower Omo, Ethiopia – negotiation processes instead of property rights and rules?

Research on common-pool resources in the last 30 years has hinged on concepts such as rules and property rights for understanding how access to and use of natural resources is managed by communities and other actors. However, a small body of literature on mobile pastoralism maintains that resource governance might not always be based on resource-related rules, but instead on negotiation and general norms of reciprocity. Situations conventionally labelled as 'open access' might therefore not always be as unregulated and unmanaged as they seem. Here, we examine what the absence of rules for resource access and use means in practice, and how resource users adapt such a governance system to increasing scarcity of pasture land. We conducted interviews, group discussions and participatory mapping exercises in two neighbouring areas, Hamar and Bashada, in the lower Omo area in southern Ethiopia. Both groups are culturally closely related to each other, but showed important differences in their ability and willingness to change their institutions to adapt to resource scarcity. In both Hamar and Bashada, access to grazing was generally non-exclusive. Instead, we found a complex mosaic of ways in which access to grazing was practiced and sanctioned, characterised largely by negotiations and interplay between individual actors rather than by firm rules. Both groups were confronted with increasingly erratic rainfalls and insufficient availability of pasture. Strikingly, while the Bashada had recently established a strictly enforced set-aside area to provide grazing for the end of the dry season, the Hamar rejected such ideas and sought grazing in protected areas, which eventually led to conflict between herders and authorities. Reasons for these diverging strategies might be connected to subtle differences in the degree to which decision-making is individualised and social coordination accepted. These seem to have important implications for community adaptability to changing environmental and societal conditions.

Introduction

Ever since Ostrom's seminal publication "Governing the Commons" in 1990, a large part of scholarly and applied enquiry into resource use by local communities, especially in developing countries, has adopted her and her colleagues' perspective on the governance of common-pool resources. This perspective is manifested, for example, in the Institutional Analysis and Development (IAD) framework and its successor, the Socio-Ecological Systems (SES) framework (Ostrom 2007), and in the design principles for successful community-based resource management (Ostrom 1990, 2009), and has led to an extremely fruitful shift of the academic gaze towards the role of local communities, with probably immeasurable impacts on real-world practices in relation to the empowerment of local actors, too.

Two key elements of this perspective are the notions of (i) property rights and (ii) rules, crucial concepts in both analytical frameworks and design principles. Property rights refer to rights of access, withdrawal, management, exclusion and alienation (Schlager and Ostrom 1992) in relation to a resource, or more precisely, a good or service. In addition, three different levels of action can be distinguished, each of them usually guided by sets of rules: operational rules, collective choice rules and constitutional rules.
These shape and constrain resource appropriation and use, decision making about operational rules, and modalities of decision making overall, respectively (Schlager and Ostrom 1992). Rules operationalise rights, and rights can thus, in fact, be conceptualised as the product of rules (Schlager and Ostrom 1992). And while resource access and withdrawal (e.g. harvest) are situated at the operational level, management, exclusion and alienation require action at the collective-choice level, that is, decisions that might alter the operational level (Schlager and Ostrom 1992).

Operational rules and the specification of related rights to resource access and use tend to be the centrepiece of common-pool resource governance. These rules and rights tend to make clear connections between resources and people, stipulating who is allowed to use (e.g. harvest, collect or abstract) how much of a specific resource (e.g. grass, firewood, water) in a given temporal, spatial and social context (see examples in e.g. Cox et al. 2010; Wakjira et al. 2013). Where such resource- or ecosystem-related operational rules for resource use exist but the overall use pattern is unsustainable, they are usually seen as a starting point for developing more sustainable resource management approaches (Scoones 1999; Wakjira et al. 2013). Where such rules are absent, or exist but are not enforced, an 'open access' situation is conventionally seen to prevail that exposes a resource to overexploitation and degradation (Ostrom 2009). And while the largest part of research on common-pool resources focuses on common property governance arrangements, for which clear, shared and effectively enforced rules are crucial (Ostrom 2009), empirical analyses of open access tend not to go beyond the fatalistic diagnosis of a tragedy of the commons (Hardin 1968) or the absence of effective restrictions on resource use (McGinnis 2011, 179).

However, a notable exception is the literature on transhumance and the management of rangelands by mobile pastoralists, especially related to western and northern Africa (e.g. Niamir-Fuller 1999; Moritz 2016). In this body of literature, there is a strong recognition that grazing lands can be operated as open access, i.e. without clear use rights and resource-related rules, which does not necessarily imply that use is unregulated and unmanaged: access might not be based on rigid, spatially defined rules but be obtained through "social networks and norms of reciprocity that are characterised by flexibility, porosity, and malleability" (Moritz et al. 2013, 352; see also Mehta et al. 1999; Turner 1999; Galvin 2009).

Here, we draw on this literature to explore what a (seeming) absence of resource-related operational rules for resource access and use might mean for our understanding of resource governance in a situation that, as described by Moritz et al. (2013, 355), is based on a certain "ethos of open access", but where resource scarcity, emerging in recent years (Terefe et al. 2010; Gil-Romera et al. 2011), renders current practices unviable and seems to call for adaptation.

The lower Omo case: motivation for this study

We stumbled across this question during our fieldwork on illegal hunting and human-nature relationships in the lower Omo valley in Ethiopia (Tadie and Fischer 2013).
While investigating what hunting meant to members of four ethnic groups - the Kara, Hamar, Bashada and Arbore - we found that the hunting of large game had extremely strong social meaning, associated with complex and intricate practices that almost exclusively referred to interactions between people. By contrast, interactions between people and wildlife appeared rather simple, with only very few customs and norms that structured access to and practices related to wild animals. This also seemed to be the case for relations with other non-human elements of the natural environment, for example, grazing land or trees for beekeeping, but our data were too limited to allow us to understand how resource access and use in areas other than hunting was practiced. However, especially among the Hamar, the seeming absence of social norms in relation to natural resources was striking and starkly contrasted with the ubiquitous nature of references to taboos, customs and other informal institutions in the literature on local communities' natural resource use, especially in developing countries (Colding and Folke 2001; Ashenafi and Leader-Williams 2005; Jones et al. 2008). We suspected that this absence of explicit and shared norms related to natural resource use among the Hamar might be connected to their rejection of authority and the individualisation of knowledge and decision-making (Lydall and Strecker 1979a), and result in feelings of helplessness, lack of agency and despair in the face of environmental degradation and increasing scarcity of resources (Tadie and Fischer 2013).

In 2012, we went back to lower Omo to explore these issues in more detail. In particular, we wanted to know how decisions about the use of natural resources, especially pasture for livestock, were being made in a setting where clear resource-related rules that could govern access to and use of grazing land and determine norms for people-nature interactions seemed to be lacking. For contrast, we focused on two culturally very closely related groups, the Hamar and Bashada, as these had the same cultural roots but, as we will see, different ways of dealing with resource scarcity.

The Hamar and Bashada of lower Omo

The lower Omo valley, situated in the very south of the Southern Nations, Nationalities and People's Regional State in Ethiopia, is known for its ethnic and linguistic diversity (Strecker 1976a), and is home to groups such as the Hamar, Mursi, Arbore, Nyangatom and Dassenech. The groups we focus on here, the Hamar and Bashada, much like the nearby Mursi (Gil-Romera et al. 2011), are more closely related to pastoralist than agriculturalist cultures, although nowadays cultivation of sorghum and maize is widespread (Lydall and Strecker 1979a; Wolde Gossa 1999), and governmental food relief programmes supplement many livelihoods. Both Hamar and Bashada keep cattle and goats, living in small, usually dispersed settlements for most of the year with their families, but, like the Mursi (Gil-Romera et al. 2011), herders will move their livestock, especially their cattle, during the dry season (roughly December to February) to grazing areas further away once pastures around the hamlets have been depleted (see also Strecker 1976b). As such dry season grazing areas are often located at a substantial distance from the settlement, these trips, undertaken largely by male members of the families, can take days or weeks.
Livestock husbandry and pastoralism permeate a large part of Hamar and Bashada culture, from the shaping of social relationships through the exchange of animals (see examples below) to everyday practices and the content of stories, traditions and rituals, such as the boys' initiation rite, which involves leaping over a row of cattle (Lydall and Strecker 1979b; Epple 1995). In 2012, people in the villages were still largely independent from larger market economies, with AK-47s, men's clothing and necklaces from plastic beads being more or less the only industrially produced artefacts present in the villages. However, the area is rapidly changing, due to cultural tourism, land degradation and, most recently, the development of large-scale agricultural plantations (see e.g. Turton 2011).

Detailed studies of the communities living in the region began to appear in the late 1960s, mainly by European anthropologists. Since then, scholars from both Ethiopia and abroad have conducted anthropological studies among many of the ethnic groups in the area (Turton 1973; Carr 1977; Almagor 1978; Gebre 1993; Wolde Gossa 1999; Elfmann 2005). For our focus on the Hamar and Bashada, the work by ethnographers Lydall and Strecker (1979a,b) and Epple (1995, 2010) is of highest relevance. In both groups, the coordination of social life seems to be characterised by decentralised collective mechanisms, with slightly higher expectations in relation to collectivism among the Bashada than the Hamar. For example, Epple (2010) describes how a Hamar woman living among the Bashada was criticised for being too independent, not asking for help (in the form of work parties) when it seemed appropriate to the rest of the village (namely, for the construction of her house). Governance of social life in both groups is portrayed as relatively egalitarian, with constant negotiation and debating processes between elders and other community members until a conflict resolution is accepted. From an individual's perspective, this requires a balancing act between one's own needs and views (which generally seem to be very strongly developed) and the realisation that "all people depend on each other, not only in economic terms, but also socially, to solve conflicts, to re-establish social balance and peace" (ibid., 242). For the Hamar in the 1970s, Lydall and Strecker (1979a, 197-198) describe this as a general striving for "free individual choice in the application of general principles. […] everyone works towards a maximization of choices in any particular social situation" in a context "where hardly any social relationship can be axiomatically trusted" (Strecker 1976a, 591).

Here, we explore how, within these contexts of social coordination, natural resource access is practiced and governed, focusing on the role of operational rules in livestock grazing, as this is culturally the most important livelihood activity of both Hamar and Bashada.

Methods

Our analysis builds on data collected through interviews, group discussions and field visits in 2010 and 2011 (Tadie and Fischer 2013), but focuses on a set of interviews, group discussions and participatory mapping exercises conducted in May and November 2012. Overall, 15 interviews (two of these with two interviewees each, i.e.
including 17 interviewees overall) were conducted: eight in three of the four villages comprised by the Bashada area, and seven in Gembella kebele (ward or peasant association, the smallest administrative unit in Ethiopia, usually consisting of several villages) in Hamar District. Interviewees were selected for diversity of viewpoint and included both young men and elders, among them, for example, the spiritual leader (bitta) of the Bashada area, a Hamar employee of the district administration, and the Hamar chairman of the Gembella kebele (the chairman of the kebele of the Bashada area participated in the mapping exercise described below). Interview guidelines were flexible and covered practices of resource use and access, focusing on livestock grazing, beekeeping and farming; the resolution of conflicts in relation to resource use and the role of elders; interethnic relationships and the history of different ethnic groups and their interactions; and perceptions of recent changes in resource availability and governance. Interviews tended to start with broader questions on the history of settlement in the area and livelihood activities, and then developed questions on access to grazing, trees and land used for crop cultivation. More specific questions then probed practices related to the negotiation of access among neighbours, among people from different hamlets or with different roles within the same ethnic group, and among people from different ethnic groups.

The interviews were complemented by two group discussions that involved the participatory drawing of a resource use map (Figure 1), one in Gembella, Hamar (20 participants) and one with participants from two Bashada villages (n=9). Each group was asked to draw a map of the kebele, locating places of settlement, farming and grazing, and movements throughout the year (Figure 1). The drawing of these maps served as prompts for participants to speak about their resource use activities, emerging conflicts, and ways to address these. As livestock herding and beekeeping are male tasks, usually carried out by young men, and our research interest lay in the actual practices of access taking (although we recognise that gender roles can work in unexpected ways; Lowassa et al. 2012), all participants were male, again ranging in age from youngsters to elders. All conversations were held in the Hamar language, the mother tongue of both Hamar and Bashada, with the help of translators, audio-recorded, and then transcribed verbatim into English. Data were coded in NVivo in a grounded manner, starting from the original, broad research interest (namely, to better understand how access to natural resources was governed among the Hamar and Bashada; see above), and refining the analysis in an iterative process.

Results

Our presentation of the results starts with an analysis of the ways in which access to grazing was organised among both the Hamar and the Bashada, and then identifies differences in grazing management between the two groups, notably the use of a set-aside area. Finally, it points at recent changes in understandings of property and access.
Access to grazing: the role of rules and negotiations

Access to grazing was generally non-exclusive among both groups, Hamar and Bashada; as such, there were no designated user rights for pasture. Where there was a settlement close to a grazing site of interest to another livestock holder in search of pasture, there seemed to be an expectation that the local residents would be approached by the newcomer, even if there was no livestock present on the pasture at the time of arrival (and the land was therefore not currently in use), as there was a clear potential that these areas could be used at any moment. This expectation seemed to decrease with distance from a settlement. Where access was denied, livestock holders might graze animals without agreement, with a clear risk of physical conflict, especially where groups did not share ancestry or spiritual leaders. However, mechanisms existed that would reduce this risk, and it seemed that usually, exclusion from a grazing area was not an option:

No, there won't be a 'no' answer. When he first arrives, he won't say "I came here for grass", he would say "we came here as a guest". Then it would be said "a guest has come" and then a cow hide would be laid out for him, then he would sit down and coffee would be boiled for him, then it would be said "give milk for these guests", milk would be given, "give them food", food would be given, then after they've eaten, they would be asked "you guests, where are you going?". Then, if it is said "we came to you", then other elders from the village would be called, and then they would speak it out. Then we would listen […], in a calm condition, we would talk, then it would be said "ok you settle here [for the moment]"; it would be said like this and they would be given a place. People won't suddenly come and ask, standing on the road. [By., Bashada]

This flexible approach to resource use was facilitated by social institutions that helped develop relations between individuals who did not necessarily share family bonds (see Tadie and Fischer 2013 for long-lasting bonds between non-family members created through hunting), and by a fluid and complex understanding of livestock ownership that was neither communal nor private. Two types of relationships derived from joint ownership of livestock were described in our interviews (see also Lydall and Strecker 1979b). Beltamo meant that the owner gave a cow or goat to another person, who would take care of it and be allowed to use its milk, blood, offspring and also meat. The recipient would then consider the previous owner a friend. The reverse phenomenon was called siti: a (usually poorer) person would ask to 'borrow' livestock from the (usually better-off) owner, so that they could use the milk, blood or offspring. In both cases, the giver would periodically visit their stock and might demand the original animal, some of the offspring, or other gifts.

When we rear cattle, they [Hamar] would also take our calves and rear them there and they would use for themselves; we also take their cattle and herd them and use them as well; that is why we say "my father's cattle" whenever we see cattle. There is exchanging cattle through bel [tamo].
[G., Bashada]

While there was generally a very clear understanding of which cattle belonged to which group, which made intended deceit (mixing one's cattle with others in order to obtain access to a grazing place without explicit consent) very difficult, arrangements such as beltamo and siti facilitated negotiations over access to grazing.

There was a widespread perception that the availability of grass had been decreasing in the last 25 years and that the land was degrading (especially in Gembella (Hamar), large areas did not have any grass cover anymore, shrubs were encroaching, and soils lay bare), and that the pressure on the remaining pasture was growing. Some interviewees explained that this was due to increasing cattle numbers in the area, and backed this up with detailed accounts of historical migration movements into the area and increasing cattle ownership (see also Tadie and Fischer 2013). Within this general, shared understanding of grazing availability and access, the resource use maps from Gembella (Hamar) and Bashada showed two different approaches to the governance of grazing. In both places, livestock owners had fenced off areas (derr) around their homesteads to be able to keep the animals (especially kids and calves, cows and goats for milking, or sick animals) in confined places and away from farmland. However, this approach seemed more widespread in Bashada than in Gembella (Hamar).

Grazing management among the Bashada: Coordination and a set-aside area

In terms of use of the wider landscape, participants in Bashada described how they used specific places for grazing, farming, settlement and other activities, for example, as salt licks and for beekeeping. Their grazing grounds were situated in the northwestern and western part of their territory, bordering Mago National Park, while their settlement areas were established in the central, eastern and south central part (Figure 1). Most agricultural plots were situated in the fertile areas around the settlements and adjacent to dried-out river beds (Figure 1). The scrubland and small woodlands in the southern and eastern parts of Bashada were mainly used for beekeeping, for occasional grazing by small numbers of cattle, and as saltlicks. Cattle herding was carried out as an informally communal activity (dashed arrows in Figure 1A show direction of movement): those livestock owners who could send several youth to take care of the livestock would send their herds first, followed by those with fewer or no herders, so that their livestock could be watched by others.

There are always cattle that follow the first cattle to be released. They can't go ahead of the first one; they would follow them. They would stay 100 or 200 metres away; they would make a row and graze in one direction. [K., Bashada]

This approach was seen as beneficial in terms of defence against both predators and human enemies, such as the Mursi, although it was widely recognised that cattle raiding and related conflicts with the Dassenech, Mursi and Nyangatom had substantially decreased.[1] It also allowed a more efficient use of the available grass, avoiding unnecessary trampling. Interestingly, a few years ago, the people of Bashada had started to set aside a part of their traditional grazing land as a reserve for dry season grazing. This measure had been suggested by the district administration, and had been negotiated and agreed upon after ten days of community deliberations.
It appeared to be strictly enforced: no-one was allowed to enter the set-aside area during its closed season, not even for beekeeping, as there were concerns that carelessness during the harvest of the honey could cause bushfires. Elders would jointly decide on the start of the open season, in which all Bashada livestock owners, independent of wealth, livestock numbers or status, were allowed to move their animals there for grazing. As soon as the rains started, livestock would have to leave the area again to allow the vegetation to recover for the next season.

Grazing management in Gembella (Hamar): less coordination, no set-aside but use of a national park

This was somewhat different in Gembella (Hamar) (Figure 1B). Agricultural plots were, again, situated along the sandy river beds, and joint herding was seen as beneficial to protect oneself from cattle raids, but participants described how people sent their livestock all over the kebele, simply dependent on the availability of grass and water, even to areas traditionally known for conflicts with the Dassenech (red X, Figure 1B).

… yesterday, my goats went there, now they went this way. They would go as they wish to go. Today this direction, tomorrow that direction. [M., Hamar]

Towards the end of the dry season, this regularly also included the area designated as Mago National Park, which, albeit illegal, tended to be tolerated by park staff. In February 2011, at the end of a particularly hard dry season, herders left the national park well before the rain started, leaving their cattle behind with the aim to retrieve them later, after the message spread that two herders had shot two of the very few remaining giraffes, and that the district government would therefore now strictly enforce the non-grazing policy in the park and expel all livestock herders.

[1] It could be argued that decreasing resources would increase the potential for conflicts. Indeed, our finding here of reduced conflict seems to contradict Gebre's (2012) observation that conflicts between Dassenech and Hamar, in particular those with fatal outcomes due to the use of firearms, had increased in recent years. However, as Gebre (2012) points out, causal effects of resource scarcity on the incidence of conflicts are difficult to identify, and there are numerous other socio-cultural, political (e.g. the role of governmental mediation efforts, or of law enforcement across country borders) and technological factors (e.g. the availability of semi-automatic rifles) that interact with the availability of resources in triggering or preventing conflictive action.

Although the people of Gembella kebele had received advice similar to that given to the Bashada villages, they had not established a set-aside area, and were therefore reliant on other grazing land when their own resources were exhausted towards the end of the dry season or in times of drought. Grazing was generally scarce in Gembella in comparison to the area of the Bashada villages or Kara, which was seen as much more fertile and less prone to shrub encroachment. Our participants from Gembella described how they, in the event of a shortage of water and pasture for livestock, would search for places with available resources, and then negotiate access with the people present in that area. Usually, this was done by vividly describing the difficulties that the livestock were facing, and by picturing the possible consequences for the livestock and their owner if they were denied access.
Our Bashada participants illustrated how these negotiations worked:

First the guest would come to the house and would see the host's cattle condition. Since he is pastoralist he would observe the cattle when they come back from grazing and whether they are full or not. Then he would say "your cattle are in good condition and their hair shines. But since there is no pasture in our area, our cattle are not well, their hair became shaggy and they lost weight". Then when the host asks "don't you have a pasture for the cattle?" then he would answer, "There is no grass in our area and all the cattle are failing and are about to die because of hunger". Then again he would say "how come your cattle are so well, their hair is so shiny, do they graze in a good place?" The host would say "they are using the demarcated area for the dry season". Then the guest would ask "please save about two of my cattle". He won't tell exactly the number of his cattle, he would mention only two or three. Then the host would answer "for the time being there is grass, no problem, just bring them". Then after the guest returned to his house, he would send ten, fifteen or twenty of them but he mentioned only two or three when he first asked. Once they are mixed with his cattle, then the host would take them to his pen. Then after some time the guest would come again to see the conditions of his cattle. [P., Bashada]

Overall, thus, traditionally, clear resource-related, operational rules for grazing seemed to be absent in both Bashada and Gembella (Hamar), and access to pasture was largely negotiation-based, with the Bashada giving more importance to the collective coordination of grazing activities than the Hamar. The recently established set-aside area in Bashada, which did require the collective agreement of very clear resource-related rules, seemed to work well, whereas a similar initiative in Gembella had not been supported by the community. This was seen to be due to fundamental objections to property-related rules:

M: You can't protect it […]. The only choice is to be together and starve together. Here, there is no culture to protect this.

Recent changes: understandings of property

However, some Bashada interviewees also described how the idea of private property, slowly being introduced through agricultural extension and governmental education measures, started to undermine the social balance in their community. For example, acting on recent government advice, people had started to fatten up selected bulls in their derr for sale on the market. This speaker criticised the increasing privatisation of land and other resources:

Making an area closure in a faraway place is good. But this thing near me which we're being advised to do "making a closure using fences here near my house" is difficult. It would create conflict among us; it would get us into quarrels with my neighbours. This education which is being given [from the government] to make a grazing closure in the village in between the people may get us into conflict.

Discussion

We set out to explore the role of property rights and operational rules in relation to resource use among the Hamar and Bashada of the lower Omo valley, thereby unpacking what the presence, absence or simply the character of such rights and rules might mean for open access vis-à-vis common property regimes.
We found a complex mosaic of ways in which access to grazing was practised and sanctioned, characterised largely by negotiations and interplay between individual actors rather than by firm resource-related rules that just needed to be enforced. This was in contrast to the arrangements described for other traditionally pastoralist communities in the wider region and elsewhere in the world, which often take the existence of indigenous resource-related rules as a given, even where these are challenged by governmental interventions or the increasing heterogeneity of users (Berger 2003; Axelby 2007; Gilbert 2013; Conte and Tilt 2014; Yembilah and Grant 2014). However, it was in line with, and provided further and more detailed insights into, a line of thought in the transhumance literature (mainly from western and northern Africa) that maintains that open access regimes might not be governed through clear resource-related institutions, whether formal or informal, but through intricate mechanisms based on "procedural rules that allow resource access to be malleable to political negotiation/bargaining" (Turner 1999, 643-645; see also Niamir-Fuller and Turner 1999) and an "ethos of open access" (Moritz et al. 2013, 355). These studies show that open access cannot simply be equated with resource degradation and a tragedy of the commons (Moritz et al. 2013), as these more procedural and flexible arrangements might fit well with both the social and ecological conditions (Cole et al. 2014; Moritz et al. 2015).

The prominent role of human-human relations in grazing management, as opposed to rules that govern human-nature interactions, mirrored our previous findings in relation to hunting (Tadie and Fischer 2013) that emphasised that human-nature relations cannot be understood in isolation from human-human relations. The present study expands on the nature of these social relations as processes rather than resource-related rules as usually understood in the common property literature. Thereby, it adds nuance to the anthropological perspective on property as relations between people (as opposed to relations between people and things; Hann 1998). Some of our observations, for example, the common practice of asking other resource users who might have a claim on the grazing area for their agreement (although it is understood that access will almost never be denied), might evoke expressions of territoriality and social boundary defense as described by Myers (1982) and Cashdan et al. (1983) for several hunter-gatherer societies. However, a closer look at the Hamar and Bashada cases suggests important differences. For example, there was neither an explicit expectation of reciprocity of access nor the perception that the people local to the grazing land had a right to be asked, thus implying some form of ownership (as described by Myers 1982 for the Pintupi Aborigines); rather, the negotiations carried out before letting cattle graze close to someone else's livestock seemed to be a way to avoid potential conflicts. There also did not seem to be a clear distinction between in- and out-groups as highlighted by Myers (1982): negotiations took place within and between ethnic groups, with gradual differences rather than a clear-cut distinction. These negotiations also depended on locality, with no clear 'territories' defined but instead a spectrum from grazing land close to a settlement to areas further away.
Lastly, negotiations did not seem to be used to plan and allocate grazing areas in any systematic way as proposed by Cashdan et al. (1983) for foraging bushmen; again, in our case, the process of agreeing access largely seemed to have a social function, namely to avoid conflict.

In addition, the differences we found between resource access arrangements in the Bashada and Gembella villages seemed to reflect larger differences in social arrangements between the two groups. In Bashada, decision making tended to be more strongly influenced by groups of elders and the spiritual leader than in Gembella. There, adult men as the owners of the livestock, or their older sons as the most experienced herders, would individually take decisions on where to take the livestock for feeding and watering, with little or no interference being expected from other users. Adult men appeared neither willing to accept the authority of individual leaders nor to co-operate in collective action. In property rights terms, no-one, not even a collective, was seen to have the right to exclude others from the use of grazing land. As a consequence, there were also no clearly specified rights at the operational level, and access and withdrawal (i.e. grazing) had to be negotiated on an ad-hoc basis. While our interviews touched on such arrangements only in relation to natural resource governance, our findings align with those by Epple (2010) and Lydall and Strecker (1979a) on the organisation of social life more generally (see also Girke 2011).

Juxtaposing the governance of grazing access among the Bashada vis-à-vis the Hamar of Gembella allowed us to move beyond a mere diagnosis of an absence of resource-related rules as the reason for open access and, ultimately, resource degradation. In fact, we do not make claims about causal relationships between the presence of such rules and the current state of the land (see Turner 1999 and Gilbert 2013 for a critical discussion of the assumed links between pastoralism and "overgrazing"); indeed, rules might only be meaningful where some degree of (potential) resource scarcity is perceived (see examples in Scoones 1999). Our interviewees seemed to share the view that scarcity of pasture land had only recently become an issue, possibly due to increased human and livestock populations, which might have reduced resilience in the face of variations in rainfall. This view appeared to concur with that of the Mursi (as identified by Gil-Romera et al. 2011), who suggest that overgrazing and subsequent woody encroachment caused by growing livestock numbers has been one of the main reasons for the decreasing availability of pasture. Interestingly, a reduction in rangeland because of the designation of protected areas (such as Murulle Controlled Hunting Area or Mago National Park; Turton 1987) was not mentioned as a reason for a lack of grazing, possibly because at least Mago, but to some degree also Murulle, were de facto used, in spite of their formal designation (see also Turton 2011). In the face of this scarcity, our two study groups reacted very differently.
While the Hamar of Gembella saw the independence of individuals' decision making (Lydall and Strecker 1979a; Girke 2011) as paramount and rejected governmental suggestions for set-aside areas, the Bashada, whilst only slightly less egalitarian and individualistic than the Hamar (Epple 2010), appropriated and adapted these suggestions to their own needs, but at the same time discarded and criticised governmental advances that would have undermined their understanding of common property. Our analysis highlights that cultural acceptance of rights and rules in resource governance should not be considered as a given, and that the level of such acceptance can vary even between neighbouring and culturally very closely related groups.

Where arrangements for resource governance are based on negotiation rather than on rules, such negotiation might follow, of course, its own set of (procedural) rules, which, as illustrated here, might be guided more by social than by environmental considerations. We argue here that this can have two types of implications. First, it can hamper adaptation to environmental change, for example, to changes in resource availability. With the effects of climate change and decreasing availability of rangelands, adaptability will be crucial, but our understanding of adaptation of community-based governance of natural resources over time is still quite rudimentary (Wakjira et al. 2013). Our findings seemed to suggest that there are differences in adaptability even between two otherwise very similar cultural groups: in both groups, grazing was governed largely by process-based and social rather than resource-related rules. But while the Bashada had accommodated some resource-related rules (e.g. concerning the set-aside area) into their governance, the Hamar did not, and explicitly objected to these; a finding that seems hugely intriguing from a cultural psychology perspective (Moritz 2008). In practical terms, the use of scenarios in participatory workshops and group discussions that elicit reactions to and evaluations of different ecological and social scenarios together with governance options could be a first step to explore these differences. In more conceptual-analytical terms, future studies could also investigate how rules that guide access and use of resources are embedded in collective choice and constitutional arrangements. Epple (2010) and Girke (2011) describe such higher level governance structures among the Bashada and Hamar for social life in general, but little is known about this in relation to resource use. Such work could shed light on the ways in which such rules can realistically be altered.

Second, where communication with other actors is involved, implicit assumptions about the acceptability of property rights and resource use rules might render interaction between resource users (such as the Hamar) and external actors (such as agricultural extension advisers) ineffective. There needs to be, therefore, an even greater awareness and appreciation of the role of negotiation-based governance systems among both practitioners and academics. More generally, and looking ahead, our analysis illustrates the potential usefulness of a more psychological and sociological perspective on governance: future work could investigate the factors and dynamics underpinning communities' acceptance and interpretations of governance arrangements.
Places such as Hamar and Bashada would provide excellent cases for a comparative exploration of the role of everyday activities and the resulting socialisation (Moritz 2008) in producing and reproducing a local governance culture.
Replicating bacterium-vectored vaccine expressing SARS-CoV-2 Membrane and Nucleocapsid proteins protects against severe COVID-19 disease in hamsters

An inexpensive, readily manufactured COVID-19 vaccine that protects against severe disease is needed to combat the pandemic. We have employed the LVS ΔcapB vector platform, previously used successfully to generate potent vaccines against the Select Agents of tularemia, anthrax, plague, and melioidosis, to generate a COVID-19 vaccine. The LVS ΔcapB vector, a replicating intracellular bacterium, is a highly attenuated derivative of a tularemia vaccine (LVS) previously administered to millions of people. We generated vaccines expressing SARS-CoV-2 structural proteins and evaluated them for efficacy in the golden Syrian hamster, which develops severe COVID-19 disease. Hamsters immunized intradermally or intranasally with a vaccine co-expressing the Membrane (M) and Nucleocapsid (N) proteins, then challenged 5 weeks later with a high dose of SARS-CoV-2, were protected against severe weight loss and lung pathology and had reduced viral loads in the oropharynx and lungs. Protection by the vaccine, which induces murine N-specific interferon-gamma secreting T cells, was highly correlated with pre-challenge serum anti-N TH1-biased IgG. This potent vaccine against severe COVID-19 should be safe and easily manufactured, stored, and distributed, and given the high homology between MN proteins of SARS-CoV and SARS-CoV-2, has potential as a universal vaccine against the SARS subset of pandemic-causing β-coronaviruses.

The ongoing pandemic of COVID-19, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has caused over 50 million cases and 1.2 million deaths as of this writing 1. A safe and potent vaccine that protects against severe COVID-19 disease is urgently needed to contain the pandemic. Ideally, such a vaccine would be safe, inexpensive, rapidly manufactured, and easily stored and distributed, so as to be available quickly to the entire world population. Previously, our laboratory developed a versatile plug-and-play Single Vector Platform Vaccine against Select Agents and Emerging Pathogens wherein a single live multi-deletional attenuated Francisella tularensis subsp. holarctica vector, LVS ΔcapB, is used to express recombinant immunoprotective antigens of target pathogens 2,3. The LVS ΔcapB vector was derived via mutagenesis from Live Vaccine Strain (LVS), a vaccine against tularemia originally developed in the Soviet Union via serial passage and subsequently further developed and tested in humans in the USA 4,5. As with wild-type F. tularensis, LVS is ingested by host macrophages via looping phagocytosis, enters a phagosome, escapes the phagosome via a Type VI Secretion System, and multiplies in the cytoplasm 6-8. While much more attenuated than LVS, the LVS ΔcapB vector retains its parent's capacity to invade and multiply in macrophages 9. Using this platform technology, we have developed exceptionally safe and potent vaccines that protect against lethal respiratory challenge with the Tier 1 Select Agents of four diseases: tularemia, anthrax, plague, and melioidosis 2,3. These vaccines induce balanced humoral (antibody/neutralizing antibody in the case of anthrax toxin) and cell-mediated immune responses (polyfunctional CD4+ and CD8+ T cells) against key immunoprotective antigens of target pathogens 3. We have now used this platform to develop a COVID-19 vaccine.
SARS-CoV-2 has four structural proteins: the Spike (S) glycoprotein and the Membrane (M), Envelope (E), and Nucleocapsid (N) proteins. Virtually all COVID-19 vaccines in development have focused on the S protein, which mediates virus entry into host cells via the Angiotensin Converting Enzyme 2 (ACE2) receptor 10,11. These vaccines have been tested for efficacy most prominently in the rhesus macaque model of COVID-19. However, this is primarily a model of asymptomatic infection or mild disease, as animals typically do not develop either fever or weight loss; hence, vaccine efficacy in the rhesus macaque is quantitated primarily in terms of the vaccine's impact on viral load rather than on clinical symptoms. In contrast, the golden Syrian hamster develops severe COVID-19 disease, akin to that of hospitalized humans 12, including substantial weight loss and quantifiable lung pathology. Herein, we have employed the LVS ΔcapB vector platform to construct six COVID-19 vaccines expressing one or more of all four structural proteins of SARS-CoV-2 (S, SΔTM, S1, S2, S2E, and MN) and tested the vaccines for efficacy, administered intradermally (ID) or intranasally (IN), against a high dose SARS-CoV-2 respiratory challenge in hamsters. We show that the vaccine expressing the MN proteins, but not the vaccines expressing the S protein or its subunits in various configurations, is highly protective against severe COVID-19 disease including weight loss and lung pathology, and that protection is highly correlated with serum anti-N antibody levels.

Construction and verification of rLVS ΔcapB/SCoV2 vaccine candidates

We constructed six recombinant LVS ΔcapB vaccines (rLVS ΔcapB/SCoV2) expressing single, subunit or fusion proteins of the four SARS-CoV-2 structural proteins S 13, E, M, and N (Fig. 1A). The S protein is synthesized as a single-chain inactive precursor of 1,273 residues with a signal peptide (residues 1-15) and processed by a furin-like host proteinase into the S1 subunit, which binds to the host receptor ACE2 10, and the S2 subunit, which mediates the fusion of the viral and host cell membranes. S1 contains the host receptor binding domain (RBD) and S2 contains a transmembrane domain (TM) (Fig. 1B, top panel). We constructed rLVS ΔcapB/SCoV2 expressing S (stabilized) and, so as to express lower molecular weight constructs, SΔTM, S1, S2, and the fusion protein of S2 and E (S2E), and additionally, a vaccine expressing the fusion protein of M and N (MN) (Fig. 1B, bottom panels). A 3xFLAG tag was placed at the N-terminus of the S, SΔTM, S1, and MN proteins. The antigen expression cassette of the SARS-CoV-2 proteins was placed downstream of a strong F. tularensis promoter (Pbfr) and a Shine-Dalgarno sequence (Fig. 1B).

All six rLVS ΔcapB/SCoV2 vaccine candidates, abbreviated as S, SΔTM, S1, S2, S2E, and MN, expressed the recombinant proteins in bacterial lysates. As shown in Fig. 1C, three protein bands (a minor 75 kDa, a major 46 kDa, and a minor 30 kDa band) were detected in lysates of 4 individual clones of the MN vaccine candidate (Fig. 1C, lanes 3-6), but not in the lysate of the vaccine vector (lane 2), by Western blotting using guinea pig polyclonal antibody to SARS-CoV, which also detected the N and S proteins of SARS-CoV (lanes 7 and 8, respectively). The 75-, 46-, and 30-kDa protein bands represent the full-length MN protein, the N protein, and degradation products of the MN protein, respectively.
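As a quick plausibility check on these band sizes, the expected masses can be estimated from chain length alone. The sketch below uses the canonical SARS-CoV-2 M (222 residues) and N (419 residues) lengths and a typical average residue mass of ~110 Da; the ~22-residue 3xFLAG tag length and the assumption that the 46 kDa band is untagged N are ours, not construct details taken from the paper.

```python
# Back-of-the-envelope molecular-weight estimates for the vaccine's MN antigen.
# Residue counts are the canonical SARS-CoV-2 lengths; the ~22-residue 3xFLAG
# tag and the 110 Da/residue average are illustrative assumptions.
AVG_RESIDUE_DA = 110
M_LEN, N_LEN, FLAG3_LEN = 222, 419, 22

mn_fusion_kda = (M_LEN + N_LEN + FLAG3_LEN) * AVG_RESIDUE_DA / 1000.0
n_alone_kda = N_LEN * AVG_RESIDUE_DA / 1000.0

print(f"predicted 3xFLAG-MN fusion: ~{mn_fusion_kda:.0f} kDa")  # ~73 kDa vs. observed ~75 kDa
print(f"predicted N alone:          ~{n_alone_kda:.0f} kDa")    # ~46 kDa vs. observed ~46 kDa
```

Both estimates land close to the observed 75 and 46 kDa bands, which is consistent with the authors' band assignments.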
The S, SΔTM, S1, S2, and S2E proteins were also expressed by the rLVS ΔcapB/SCoV2 vaccines, as evidenced by Western blotting analysis using a monoclonal antibody to FLAG to detect S, SΔTM, and S1 (each with an N-terminal FLAG tag) and polyclonal antibody to SARS-CoV to detect the non-tagged S2 protein (Fig. S1, A-D). Of note, SΔTM and S1 (Fig. S1B) were expressed more abundantly than the full-length S protein (Fig. S1A), possibly as a result of the removal of the TM domain and the reduced size of the protein.

All animals lost weight during the first 2 days after challenge; however, hamsters immunized with the MN vaccine, alone or in combination with the SΔTM or S1 vaccine, began to recover from the weight loss starting on Day 3, whereas sham-immunized animals continued to lose weight until euthanized on Day 7, by which time they had lost a mean of 8% of their total body weight. Hamsters immunized with the vector control continued to lose weight until Day 5 and then exhibited a small partial recovery, possibly reflecting a small beneficial non-specific immunologic effect, as has been hypothesized for BCG and other vaccines. In contrast to hamsters immunized with the MN vaccine, hamsters immunized with the S, SΔTM, S1, S2, or S2E vaccines, administered ID or IN, were not protected against severe weight loss (Fig. 2B).

To evaluate viral replication in the lungs, we assayed cranial and caudal lungs for viral load on Day 3 post-challenge, when viral load peaks in unvaccinated animals. Hamsters immunized ID with the MN vaccine, alone or in combination with the SΔTM or S1, showed significantly reduced viral loads in their cranial and caudal lungs compared with sham- or vector-immunized animals (Fig. 4B, left panel). Hamsters immunized ID with the MN vaccines as a group showed a mean reduction of 0.8±0.1 log compared with Sham (P < 0.0001). In contrast, hamsters immunized ID with the S (S, SΔTM, S1, S2, S2E) protein vaccines did not show reduced viral loads in their cranial and caudal lungs (data not shown). Similar results were observed in hamsters immunized IN (Fig. 4B, right panel).

MN-expressing vaccines induce antibody to N protein with a TH1 bias

To assess antibody responses to SARS-CoV-2 proteins expressed by the vaccine, we analyzed antibodies to the RBD of the S protein and to the N protein (Fig. 5). As expected, sera from sham- and vector-immunized hamsters lacked antibody to either antigen (Fig. 5A-C). In contrast, sera from hamsters immunized once with the MN vaccine, alone or in combination with the SΔTM or S1 vaccine, showed high levels of N-specific IgG, whether immunized ID or IN, at 3 weeks post-immunization (Fig. 5A); titers increased somewhat at Week 8, 5 weeks after the second immunization at Week 3 (Fig. 5B), and displayed a TH1-type bias, with IgG2 dominating the response (Fig. 5C). Differences in serum anti-N IgG titers between hamsters immunized with the MN vaccine, alone or in combination with S protein vaccines, and sham- or vector-immunized hamsters were highly significant at both Week 3 and Week 8 (P < 0.0001) (Fig. 5D). Surprisingly, hamsters immunized with S protein vaccines did not show anti-RBD antibody at Week 3 (Fig. 5A), nor SARS-CoV-2 neutralizing antibody at Week 8 (data not shown). In mice immunized at Weeks 0 and 3 with second generation vaccines expressing MN in combination with S1 or SΔTM, serum obtained at Week 4 showed anti-RBD antibody as well as anti-N antibody (Fig. S2). Anti-N IgG antibody displayed a TH1-type bias both in hamsters (Fig. 5C), where IgG2 dominated the IgG response, and in mice, where IgG2a dominated the IgG response (Fig. S2).
This TH1 bias was also reflected by murine splenocyte secretion of IFN-γ in response to S and N peptides (Fig. S3).

Serum anti-N antibody correlates with protection in hamsters

We assessed the correlation between serum anti-N IgG antibody just before challenge at Week 8 and lung (cranial + caudal) histopathological scores at Day 7 post-challenge by linear regression analysis. Anti-N antibody was highly and inversely correlated with histopathology score (R² = 0.9903, P < 0.0001) (Fig. 5E). This antibody, which does not neutralize SARS-CoV-2 (data not shown), likely is not itself protective but instead correlates with a protective T cell response such as that shown in Fig. S3.

Discussion

We show that a replicating LVS ΔcapB-vectored COVID-19 vaccine, rLVS ΔcapB/SCoV2 MN, which expresses the SARS-CoV-2 M and N proteins, protects against COVID-19 disease in the demanding golden Syrian hamster model. The vaccine significantly protects against weight loss and severe lung pathology, the two major clinical endpoints measured, and significantly reduces viral titers in the oropharynx and lungs. The vaccine was protective after either ID or IN administration. Surprisingly, of the six vaccines expressing one or more of the four SARS-CoV-2 structural proteins, only the vaccine expressing the MN proteins was protective.

Such a vaccine has the potential to provide cross-protective immunity against the SARS subgroup of β-coronaviruses, including potential future pandemic strains. While the S protein shows only 76% sequence identity between SARS-CoV and SARS-CoV-2, the M and N proteins each show 90% identity 14. In an analysis of T-cell epitopes in humans recovered from COVID-19, the M and N antigens together accounted for 33% of the total CD4+ T cell response (21% and 11% for M and N, respectively) and 34% of the total CD8+ T cell response (12% and 22% for M and N, respectively), an amount exceeding the 27% and 26% CD4 and CD8 T cell responses, respectively, to the S protein 15. Hence, the MN vaccine has potential for universal protection against this group of especially severe pandemic strains.

We evaluated our vaccines in the hamster model of SARS-CoV-2 infection because of its high similarity to serious human COVID-19 disease, which likely reflects, at least in part, the high genetic similarity of the hamster and human ACE2 receptor-S protein interface. A modelling of binding affinities showed that the hamster ACE2 has the highest binding affinity to SARS-CoV-2 S of all species studied, with the exception of the human and rhesus macaque.

In our previous studies of vaccines utilizing the LVS ΔcapB vector platform, three immunization doses consistently yielded superior efficacy to two doses. Here, given the urgency for a COVID-19 vaccine and the desire to simplify the logistics of vaccine administration, we opted to test only two immunizations, while still maintaining a reasonably long immunization-challenge interval (5 weeks after the second immunization). Future studies will examine if three doses are superior to two and the longevity of immunoprotection. These last three advantages are particularly important with respect to making a COVID-19 vaccine available rapidly and cheaply to the entire world's population. Safety is always a major consideration in vaccine development, especially so in the case of replicating vaccines.
In our vaccine's favor, its much less attenuated parent (LVS) was already considered safe enough to justify extensive testing in humans, including recently, and it has demonstrated safety and immunogenicity 5,23-29. LVS has two major attenuating deletions and several minor ones 30. As many as 60 million Russians were reportedly vaccinated against tularemia with the original LVS strain 31, and over 5,000 laboratory workers in the United States have been vaccinated with the modern version of LVS by scarification 5. Our further attenuation of LVS by introduction of the capB mutation reduced its virulence in mice by the IN route by >10,000-fold 9. Hence, rLVS ΔcapB/SCoV2 MN and other LVS ΔcapB-vectored vaccines are anticipated to be exceedingly safe.

Correlates of protective immunity to COVID-19 are not well understood. Almost all of the vaccines in development are centered on generating immunity to the S protein, especially neutralizing antibody to this protein. However, neutralizing antibody alone may not be sufficient for full protection; vaccines generating strong neutralizing antibody responses against SARS-CoV were not necessarily highly protective, especially in ferrets, which exhibit SARS disease more akin to that in humans 32,33. T-cell responses may be as or more important. T cell responses were demonstrated to be required to protect against clinical disease in SARS-CoV-challenged mice, and adoptive transfer of SARS-CoV-specific CD4 or CD8 T cells into immunodeficient mice infected with SARS-CoV led to rapid viral clearance and disease amelioration 34.

Our S protein vaccines were ineffective, likely due to suboptimal S protein immunogenicity, reflected by the rapid decline of antibody titer in mice and the negligible antibody neutralization titers in hamsters just before challenge (data not shown). Possibly, enhanced or alternative expression of the S protein, for example display on the bacterial surface in addition to secretion, would improve immunogenicity, as reported for the S protein of SARS-CoV 35. This would allow immune responses to the S protein to contribute to the already substantial protective efficacy provided by immune responses to the M and N proteins.

Our replicating bacterial vaccine expressing the M and N proteins has demonstrated safety and efficacy in an animal model of severe COVID-19 disease. If its safety and efficacy are reproduced in humans, the vaccine has potential to protect people from serious illness and death. Considering the ease with which our vaccine can be manufactured, stored, and distributed, it has the potential to play a major role in curbing the COVID-19 pandemic, thereby saving thousands of lives and more rapidly restoring the world's battered economy.
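The cross-protection argument in the Discussion rests on pairwise sequence identity figures (about 76% for S, 90% for M and N between SARS-CoV and SARS-CoV-2). For readers who want to reproduce such numbers, here is a minimal sketch of column-wise percent identity over a pre-aligned pair of sequences; a real comparison would first align the full-length proteins with a standard alignment tool, and the fragments below are made-up placeholders, not actual viral sequences.

```python
# Column-wise percent identity between two pre-aligned sequences ('-' = gap).
def percent_identity(a: str, b: str) -> float:
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    # Score every column except gap-gap columns; count exact matches.
    columns = [(x, y) for x, y in zip(a, b) if (x, y) != ("-", "-")]
    matches = sum(x == y for x, y in columns)
    return 100.0 * matches / len(columns)

# Hypothetical aligned fragments, for illustration only:
frag_a = "ACDEFGHIKLMNPQRSTVWY"
frag_b = "ACDEFGHIKLMNPQRSTVWF"
print(f"{percent_identity(frag_a, frag_b):.1f}% identity")  # 95.0%
```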
Amphiregulin and PTEN evoke a multimodal mechanism of acquired resistance to PI3K inhibition.

Phosphoinositide 3-kinase (PI3K) signaling pathway alterations occur broadly in cancer, and PI3K is a promising therapeutic target. Here, we investigated acquired resistance to GDC-0941, a PI3K inhibitor in clinical trials. Colorectal cancer (CRC) cells made to be resistant to GDC-0941 were discovered to secrete amphiregulin, which resulted in increased EGFR/MAPK signaling. Moreover, prolonged PI3K pathway inhibition in cultured cells over a period of months led to a secondary loss of PTEN in 40% of the CRC lines with acquired resistance to PI3K inhibition. In the absence of PI3K inhibitor, these PTEN-null PI3K inhibitor-resistant clones had elevated PI3K pathway signaling and decreased sensitivity to MAPK pathway inhibitors. Importantly, PTEN loss was not able to induce resistance to PI3K inhibitors in the absence of amphiregulin, indicating a multimodal mechanism of acquired resistance. The combination of PI3K and MAPK pathway inhibitors overcame acquired resistance in vitro and in vivo.

INTRODUCTION

The phosphatidylinositol 3'-kinase (PI3K) signaling pathway can be activated by a variety of extracellular signals and is involved in cellular processes such as survival, proliferation, migration and protein synthesis [1]. Aberrant activation of this pathway has been widely implicated in cancers. Two major hot spot mutations in the PI3K catalytic subunit have been reported, one in the helical domain (E545K) and the other in the kinase domain (H1047R). Both mutations are transforming and result in increased pathway signaling [2-4]. The tumor suppressor protein phosphatase and tensin homologue (PTEN) acts to inhibit PI3K pathway signaling and is commonly mutated, deleted or epigenetically repressed in human cancers [5,6]. Due to the dysregulation of the PI3K pathway in many cancers, there are increasing efforts in the development of PI3K pathway inhibitors as potential therapeutics, with early reports of efficacy [7]. Although PI3K inhibitors offer an additional line of treatment, as with other targeted therapies, acquired resistance is likely to arise.

To investigate resistance to PI3K inhibitors, it is important to examine mechanisms that are upstream of PI3K signaling. The PI3K pathway can be activated by mutations or overexpression of upstream signaling molecules in the ErbB family of receptor tyrosine kinases, such as EGFR/ErbB1, HER2/ErbB2, and HER3/ErbB3 [8-11]. EGFR ligands bind and activate the EGF receptor and include EGF, amphiregulin (AREG), βcellulin (BTC), epiregulin (EPR), transforming growth factor α (TGFα), heparin-binding EGF-like growth factor (HB-EGF), and epigen [12]. The activation of EGFR is prevalent in cancer signaling and not only activates PI3K by recruiting the regulatory subunit, p85 [13], but also induces activation of the mitogen-activated protein kinase (MAPK) pathway via either Grb2 or Shc adaptor proteins [14]. EGFR signaling has been implicated as a mechanism of resistance to several targeted cancer therapies, such as crizotinib [15], trastuzumab [16,17], and vemurafenib [18]. Not only has dysregulation of EGFR conferred drug resistance, but stimulation by EGF ligands has been shown to subvert inhibition by targeted inhibitors as well [19]. Despite the amount of activity in the development of PI3K inhibitors, less is known about acquired resistance to these inhibitors.
Studies in engineered mouse models expressing an activating H1047R mutation in PIK3CA have implicated up-regulation of c-Myc in PI3K inhibitor resistance [20]. In these studies, MET-amplified tumors remained dependent on endogenous PI3K, while c-Myc-amplified tumors became pathway independent. Additional studies using engineered cancer cells have also identified increases in c-Myc as well as eIF4E and Notch1 as potential mechanisms of resistance [21,22]. GDC-0941 is an orally bioavailable inhibitor of Class I PI3K that is in clinical development for several solid tumor indications [23-25]. In these studies we investigate mechanisms of resistance to GDC-0941 in the SW48 CRC line that is wild-type for PI3Kα or harbors an oncogenic H1047R PI3Kα mutation. Parental SW48 and SW48 H1047R cells are able to overcome growth suppression by GDC-0941 upon the addition of EGFR ligands. In addition, SW48 cell lines that have acquired resistance to GDC-0941 initiate secretion of the EGFR ligand AREG, which allows the cells to continue to grow and survive in the presence of GDC-0941. We also found that resistant cells lose PTEN after long-term culture, thereby increasing PI3K pathway signaling. These results may provide guidance on potential clinical treatment regimens.

EGFR ligands confer resistance to GDC-0941 in SW48 isogenic cells

A CRC cell line, SW48, and a version of this cell line with a knock-in H1047R PI3Kα mutation at one of the endogenous loci were used to investigate cellular changes associated with oncogenic PI3K. The introduction of the H1047R mutation into the SW48 cell line resulted in increased cell growth and increased PI3K pathway signaling as measured by pAKT T308, pAKT S473, pPRAS40 T246, pp70S6 T389, and pS6 S235/236 (Supplemental Figure 1A and [26]). We found the parental and PI3K mutant cell lines were sensitive to the PI3K inhibitor GDC-0941 (Supplemental Figure 1B). To investigate the potential role of soluble ligands in resistance to GDC-0941, we utilized a screen of commercially available factors to identify candidates that rescue GDC-0941-induced growth inhibition. For the screen, SW48 and SW48 H1047R cells were dosed with a 90% maximum inhibitory concentration (IC90) of GDC-0941 (1 uM) as well as 50 ng/ml of one of 418 soluble ligands for 72 hours (Supplemental Table 1). We found 11 factors (3% of total) were able to rescue GDC-0941-induced growth inhibition by greater than 25%. Of the 11 factors, 8 belonged to the epidermal growth factor receptor (EGFR) ligand family: AREG, βcellulin, EGF, Epigen, Epiregulin, HB-EGF, HRG-β1, and TGFα (Figure 1A). To confirm the ability of EGFR ligands to overcome GDC-0941 growth inhibition, the EGFR ligands AREG, EGF and TGFα were tested for their effect on GDC-0941 cellular potency. In the SW48 line, AREG, EGF, and TGFα were able to decrease GDC-0941 sensitivity 2.5-fold, 4.2-fold, and 7.5-fold, respectively (Figure 1B). The effects were comparable in SW48 H1047R cells, where GDC-0941 inhibition was reduced 2-fold by AREG, 3.3-fold by EGF, and 5.2-fold by TGFα. To find the underlying mechanism by which EGFR ligands decrease GDC-0941 sensitivity, we investigated downstream signaling in the PI3K and MAPK pathways. All three ligands tested (AREG, EGF, and TGFα) were shown to increase pERK1/2 T202/Y204 in both cell lines irrespective of the presence of GDC-0941, suggesting the cells could use MAPK pathway activation under conditions of ligand stimulation (Figure 1C).
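The fold-shifts in potency reported above are the kind of numbers obtained by fitting viability data to a sigmoidal dose-response model and comparing fitted EC50 values between conditions. Below is a minimal sketch using a four-parameter logistic curve and synthetic data; this illustrates the standard calculation, not the authors' actual analysis pipeline, and all numbers are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ec50, hill):
    """Four-parameter logistic: high viability at low dose, 'bottom' at high dose."""
    return bottom + (top - bottom) / (1.0 + (dose / ec50) ** hill)

rng = np.random.default_rng(0)
doses = np.logspace(-3, 1, 9)  # inhibitor concentrations (uM), synthetic titration

# Synthetic viability (fraction of untreated control), with and without an EGFR ligand.
viab_alone  = four_pl(doses, 0.10, 1.0, 0.15, 1.2) + rng.normal(0, 0.02, doses.size)
viab_ligand = four_pl(doses, 0.35, 1.0, 0.60, 1.2) + rng.normal(0, 0.02, doses.size)

p0 = (0.1, 1.0, 0.2, 1.0)  # initial guesses: bottom, top, ec50, hill
popt_alone,  _ = curve_fit(four_pl, doses, viab_alone,  p0=p0)
popt_ligand, _ = curve_fit(four_pl, doses, viab_ligand, p0=p0)

print(f"EC50 alone:  {popt_alone[2]:.2f} uM")
print(f"EC50 ligand: {popt_ligand[2]:.2f} uM")
print(f"fold-shift:  {popt_ligand[2] / popt_alone[2]:.1f}x")
```

A raised bottom plateau in the ligand condition, as sketched here, is one way a "rescue" of growth inhibition shows up alongside the rightward EC50 shift.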
To confirm this effect was not specific to the SW48 line, an additional 6 colorectal cancer cell lines were screened for the ability of EGF to negatively impact the cellular potency of GDC-0941. Of the 7 total lines (including SW48), 5 showed decreased sensitivity to GDC-0941 in the presence of EGF that was greater than 1.7-fold (Supplemental Table 2). We investigated PI3K and MAPK pathway signaling in these cell lines. In the two lines where GDC-0941 potency was minimally influenced by EGF (HCT116 and SW620), stimulation with EGFR ligands (AREG, EGF, and TGFα) did not affect levels of pERK1/2 T202/Y204 in the presence of GDC-0941 (Supplemental Table 2, Figure 1D and Supplemental Figures 2A and 2B). It is important to note that of the 6 additional cell lines, only SW620 did not have detectable levels of EGFR by western blot (Supplemental Figure 2C). This suggested that these cells could not use MAPK pathway activation under EGFR ligand stimulation conditions. The four cell lines where GDC-0941 potency was most affected by the presence of EGF were shown to increase pERK1/2 T202/Y204 irrespective of the presence of GDC-0941, suggesting the cells could activate the MAPK pathway under conditions of EGFR ligand stimulation. The results in these four cell lines are similar to those observed with SW48 cells.

GDC-0941 resistant cells exhibit increased PI3K pathway signaling when GDC-0941 is removed

In addition to investigating the role of EGFR ligands in acute or innate PI3K inhibitor resistance, we sought to examine factors involved in acquired GDC-0941 resistance. Pools of SW48 or SW48 H1047R cells were treated at increasing doses of GDC-0941 over 6 months. At the end of dose escalation, the cells were able to grow at a GDC-0941 concentration 10-fold higher (1.5 uM) than the initial EC50 dose (0.15 uM). In a viability assay, the SW48 resistant pool was 10-fold less sensitive than the parental SW48 cell line to GDC-0941, while the SW48 H1047R resistant pool was 13-fold less sensitive than parental SW48 H1047R cells (Supplemental Figure 1B). Clones from GDC-0941 resistant pools were generated by plating single cells of each line in the presence of 1.5 uM GDC-0941 and isolating proliferating clones. Two resistant clones of each cell type were assessed and characterized for SW48 (clones 2F and 2G) and SW48 H1047R (clones 10A and 10B). SW48 resistant clones 2F and 2G had a 120- and 36-fold decrease in GDC-0941 sensitivity, respectively, compared to the SW48 parental line. SW48 H1047R resistant clones 10A and 10B had a 22- and 12-fold decreased sensitivity to GDC-0941, respectively, compared to the SW48 H1047R parental line (Figure 2A). To confirm that the observed resistance was not specific to GDC-0941, the potency of another PI3K inhibitor, GDC-0980 [27], was evaluated in resistant clones. We found the resistant clones were also not sensitive to GDC-0980 (Supplemental Table 3). Once resistant pools and clones were confirmed to retain resistance to GDC-0941, signaling downstream of PI3K was assessed by reverse phase protein array (RPPA) and western blot. Both resistant pools and all four resistant clones were shown to have increased levels of pAKT at both the T308 and S473 phosphorylation sites, which was strongly enhanced when clones were released from the 1.5 uM dose of GDC-0941 (Figure 2B, Supplemental Figure 3).
Further exploration of PI3K pathway members revealed that all resistant cell lines had lost the tumor suppressor PTEN, which helps to explain the observed pAKT increase. The PTEN protein loss observed by western blot was confirmed by a decrease in PTEN expression by microarray (Supplemental Figure 4). To assess how these cells grow after PTEN loss, cells were imaged every 4 hours for 140 hours. In the SW48 line, the parental line did not grow in the presence of GDC-0941, while the resistant clone grew at the same rate as the parental line in the presence or absence of GDC-0941 (Figure 2C). Comparably, the SW48 H1047R parental line was growth arrested in the presence of GDC-0941, while the SW48 H1047R resistant clone grew in the presence of GDC-0941 (Figure 2C). While the loss of PTEN and the corresponding increase of phosphorylated AKT may play a role in the resistant clones becoming insensitive to GDC-0941, we also found muted GDC-0941 responses in resistant clones with respect to both pERK1/2 T202/Y204 and pS6 S235/236 when compared to sensitive clones, suggesting alternate pathways may be playing a role in the resistance (Figure 2B). ERK inhibition has been observed with other PI3K inhibitors, but the mechanism is unknown [28]. To confirm that loss of PTEN was not exclusive to SW48 cell lines, four additional CRC lines, DLD-1, HCT-116, LS 180, and SW620, were made resistant to GDC-0941 and cloned as described for SW48 cells (Supplemental Figure 5A). All clones were assessed for PTEN protein expression (Figure 2D and Supplemental Figure 5B). We discovered that 13 out of 21 LS 180 resistant clones lacked detectable PTEN protein. GDC-0941 resistant clones from the DLD-1, HCT-116, and SW620 lines expressed normal PTEN protein levels.

GDC-0941 resistant clones secrete AREG that activates the MAPK pathway in the presence or absence of GDC-0941

Since EGFR ligands were able to overcome GDC-0941 sensitivity in both SW48 and SW48 H1047R cell lines (Figure 1B), resistant clones from these lines were assayed for a possible increase in EGFR ligand secretion. Levels of AREG, βcellulin, HRG-β1, EGF, HB-EGF, and TGFα were all assayed in resistant clone media, and only AREG was detected. AREG levels in cell supernatants increased over time and were unchanged with PI3K inhibition in resistant clones, while no increase was observed in parental lines (Figure 3A). We previously discovered that AREG was able to activate the MAPK pathway in SW48 and SW48 H1047R cells in the presence of GDC-0941 (Figure 1D). Treatment of the cells with erlotinib under these conditions, however, was able to block signaling to pERK1/2 T202/Y204 (Figure 3B). MAPK signaling was assessed in parental and resistant clones in normal cell culture conditions, and we found that pERK1/2 T202/Y204 increased over time, likely due to entry of cells into the cell cycle after plating (Figure 3C). A dose of 1.5 uM of GDC-0941 prevented the pERK1/2 T202/Y204 increase in parental lines, while it had a muted effect in resistant clones, but erlotinib was able to block the increase in all lines (Figure 3C). Resistant clones were assayed for GDC-0941 sensitivity with and without the presence of erlotinib (Figure 3D and Supplemental Figure 7A). Sensitivity to GDC-0941 was increased by the addition of erlotinib 3.2-fold in SW48-R clone 2F and 3.6-fold in SW48 H1047R-R clone 10A, suggesting that EGFR inhibition may aid in overcoming resistance (Figure 3D).
Similar results were obtained for cetuximab, a monoclonal antibody that blocks EGFR and is approved for metastatic colorectal cancer (Supplemental Figure 7D).

PTEN loss alone does not cause PI3K inhibitor resistance

Changes in the production of AREG and in PTEN expression were observed in clones made resistant to GDC-0941. To determine how these changes contributed to GDC-0941 resistance, we assessed AREG production and PTEN loss independently. We have already shown that AREG and other EGFR ligands subvert growth inhibition by GDC-0941 in SW48 and SW48 H1047R parental lines (Figure 1B). To investigate the consequence of PTEN loss in SW48 and SW48 H1047R lines, PTEN levels were reduced by siRNA in parental lines and tested for sensitivity to GDC-0941 (Figure 4A). In both cell lines, PTEN knockdown failed to change GDC-0941 sensitivity when compared to the same cell lines transfected with a non-targeting control siRNA (Figure 4A). Levels of PTEN knockdown were assessed by western blot 24 hrs after transfection, at the time when cells were dosed with GDC-0941 (Supplemental Figure 6A), and at 72 hrs, the same time viability assays were performed (Figure 4A). To confirm the siRNA data, a matched set of isogenic SW48 cell lines that included parental cells and a clone altered to have deletion of PTEN were assessed for sensitivity to GDC-0941 (Figure 4B). Complete PTEN loss in SW48 lines did not change the response to GDC-0941, which is in agreement with our findings with PTEN siRNA knockdown (Figure 4B). While resistant pools were being generated, they were frozen and stored at various times (2, 3, and 6 months). These cells were utilized to establish the time at which PTEN was lost and AREG secretion was initiated (Figure 4C). PTEN loss and increased pAKT T308 were not observed until the 6-month time point. However, AREG secretion was observed in resistant pools at 2 months (Figure 4C). PTEN loss did not appear to cause GDC-0941 resistance, but we found that it enhances the resistance generated when SW48 cells are stimulated by EGFR ligands (Figure 4D). When the decreases in GDC-0941 sensitivity in SW48 cells and SW48 PTEN-/- cells are compared in the presence of AREG, the SW48 PTEN-/- cells are 5-fold more resistant than parental SW48 cells (Figure 4D). With the other EGFR ligands, EGF and TGFα, the SW48 PTEN-/- line was 4.5-fold and 3-fold more resistant, respectively (Supplemental Figure 6B).

Resistant clones are sensitive to GDC-0941 in combination with MAPK pathway inhibitors

We next wanted to determine how to treat tumor cells once they acquired GDC-0941 resistance. In standard cell culture conditions, we found evidence of MAPK pathway activation (Figure 3C), which suggested they might be sensitive to inhibitors of this pathway. For these studies we utilized an allosteric MEK inhibitor, G-573 [29], and an ERK inhibitor, G-824 [30]. When SW48 and SW48 H1047R GDC-0941 resistant clones were tested for response to these inhibitors, all clones tested were highly resistant (Figures 5A and B, Supplemental Figures 7B and 7C). Notably, both G-573 and G-824 were able to suppress MAPK signaling in resistant clones, as measured by pERK T202/Y204 and pRSK T359/S363 (Figure 5C, Supplemental Figure 8). Even when MAPK pathway activity was reduced with these treatments, the resistant clones still retained high phosphorylated AKT levels due, in part, to PTEN absence (Figures 2B and 5C, Supplemental Figure 8).
When the same GDC-0941 resistant clones were dosed with G-573 or G-824 in media containing a 1.5 uM dose of GDC-0941, sensitivity to both MAPK inhibitors was restored (Figures 5A and B, Supplemental Figures 7B and 7C). For SW48 resistant clones (2F and 2G), sensitivity to both the MEKi and ERKi was greater than in the parental line (Figures 5A and B, Supplemental Figures 7B and 7C). We also found that S6 phosphorylation was only fully repressed when resistant clones were treated with GDC-0941 in combination with the MAPK pathway inhibitors.

Resistant clone is less responsive to GDC-0941 in vivo

To confirm our in vitro findings, GDC-0941 potency against sensitive (9A) and resistant (10A) SW48 H1047R clones was evaluated in vivo. Both models were dosed daily with 50 mg/kg of GDC-0941 for 17 days and assayed for GDC-0941 resistance (Supplemental Figure 9A and 9B). At 50 mg/kg of GDC-0941, no loss of body weight was observed (Supplemental Figure 9A and 9B). Consistent with in vitro results, SW48 H1047R (10A) tumors were more resistant to GDC-0941 when compared to the SW48 H1047R (9A) model, with tumor growth inhibition (TGI) of 6% and 42%, respectively. We also evaluated PI3K and MAPK pathway markers in 9A and 10A vehicle-treated tumors that were collected 1 hr post-final dose. We found that SW48 H1047R (9A) tumors retained PTEN protein expression, while SW48 H1047R (10A) did not have detectable PTEN protein levels (Supplemental Figure 9C). Clone 10A tumors also had substantially elevated AKT phosphorylation compared to clone 9A (Supplemental Figure 9C).

Resistant clone efficacy can be restored by MAPK inhibition

To further confirm our in vitro findings, the resistant SW48 H1047R clone (10A) was evaluated in xenografts in combination with a MEK inhibitor, G-573 (Figure 6A). Animals were dosed daily for 21 days with 75 mg/kg of GDC-0941, 50 mg/kg of G-573, or a combination of both drugs. No body weight loss was observed, and the fitted tumor volumes were used to calculate percent tumor growth inhibition (TGI) (Figure 6A). Single agent treatment with GDC-0941 and G-573 showed decreased tumor growth relative to vehicle, with 48% and 33% TGI, respectively. However, consistent with in vitro modeling, the combination showed an increase in efficacy when compared to either single treatment alone, with a TGI of 77% (Figure 6A). Tumors were collected 1 hr after the final dose was administered and assayed for pathway signaling. GDC-0941 treatment decreased levels of phosphorylated AKT, while G-573 treatment decreased phosphorylated ERK. The drugs in combination decreased phosphorylated AKT and ERK, as well as phosphorylated S6 (Figure 6B). We observed similar efficacy results in clone 10A xenografts treated with the combination of GDC-0941 and erlotinib (Figure 6C). While erlotinib did not show any single agent activity, the GDC-0941 and erlotinib combination resulted in an 89% TGI, which was a substantial increase over single agent GDC-0941 and erlotinib treatment (TGI of 48% and -3%, respectively). Some weight loss was observed with the GDC-0941 and erlotinib combination; however, all mice were otherwise healthy and remained on study throughout. A reduction in phosphorylated S6 was also detected in tumors collected 1 hr after the final GDC-0941 and erlotinib combination dose (Figure 6D).
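For reference, the tumor growth inhibition percentages quoted throughout these xenograft experiments follow from comparing treated and vehicle tumor volume changes; one common definition is sketched below. The authors state only that fitted tumor volumes were used, so the exact formula and the example volumes here are assumptions for illustration.

```python
def tgi_percent(v0_treated, vt_treated, v0_vehicle, vt_vehicle):
    """Tumor growth inhibition, one common definition:
    TGI% = 100 * (1 - delta_treated / delta_vehicle)."""
    delta_t = vt_treated - v0_treated
    delta_c = vt_vehicle - v0_vehicle
    return 100.0 * (1.0 - delta_t / delta_c)

# Hypothetical mean tumor volumes (mm^3) at start and end of dosing, illustration only.
print(f"single agent: {tgi_percent(200, 620, 200, 1000):.0f}% TGI")  # 48% TGI
print(f"combination:  {tgi_percent(200, 384, 200, 1000):.0f}% TGI")  # 77% TGI
```

Under this definition, a TGI near 100% means the treated tumors barely grew relative to vehicle, while a negative TGI (as seen for single agent erlotinib) means they grew slightly faster than the vehicle arm.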
DISCUSSION

With the emergence of several PI3K inhibitors in clinical trials, it has become increasingly important to study molecular mechanisms that cancer cells may utilize to resist the beneficial effects of these inhibitors [7]. Here we describe the role of EGFR ligands and loss of PTEN in acquired GDC-0941 resistance. By using SW48, SW48 H1047R and a panel of CRC cell lines, we have shown that GDC-0941 growth inhibition can be overcome with the addition of EGFR ligands through MAPK pathway activation. In addition, SW48 and SW48 H1047R cells with acquired resistance to GDC-0941 begin to secrete AREG to bypass suppression of the PI3K pathway. To drive PI3K pathway signaling, resistant cells eventually lose PTEN, which results in increased levels of phosphorylated AKT in the absence of GDC-0941. The loss of PTEN alone is not able to induce resistance to GDC-0941, but it enhances the resistance induced by EGFR stimulation. AREG has been implicated in resistance to other therapies [31,32]. Consistent with our findings linking signaling upstream of the PI3K pathway and PI3K inhibitor resistance, other studies have implicated MET in mouse models of acquired resistance and KRAS as a marker for intrinsic resistance [33]. Because the MAPK pathway lies downstream of EGFR rather than of PI3K, it remains active despite PI3K inhibition. This is consistent with described mechanisms of PI3K inhibitor resistance that demonstrate a reliance on other pathways such as c-MYC, eIF4E, Notch1, and RSK3/4 [20-22]. It is also worth noting that a gatekeeper resistance mutation has not been described in the PI3K enzymes [34]. In our studies, we sequenced the PIK3CA exons and found no mutations in SW48 sensitive or resistant models.

The parental SW48 isogenic lines are initially sensitive to GDC-0941. Thus, the cells have a reliance on the PI3K pathway that is changed over time in culture in the presence of GDC-0941. MAPK pathway activation through AREG and EGFR allows the cells to grow and survive while the PI3K pathway is blocked by GDC-0941. However, it appears that SW48 cells in culture with GDC-0941 may attempt to maintain signaling through the PI3K pathway by losing PTEN (Figure 7A). Autocrine signaling through AREG and EGFR activation would activate the PI3K pathway in the absence of GDC-0941. The observed loss of PTEN in resistant clones also supports a mechanism for resistant clones to continue PI3K pathway signaling, especially since PTEN loss did not appear to have a substantial role in GDC-0941 resistance itself. In the absence of GDC-0941, PTEN-null resistant clones continue to secrete AREG and stimulate EGFR, activating the MAPK and PI3K pathways (Figure 7B). Although PTEN loss does not have a significant role in GDC-0941 resistance on its own, it remains important because it may influence treatment decisions for patients in the clinical setting once resistance occurs. We have shown that once PI3K inhibition is removed, cells that have acquired GDC-0941 resistance also become resistant to MEK or ERK inhibition. The resistance to MAPK pathway inhibition is likely linked to PTEN absence and subsequent hyperactivation of the PI3K/AKT pathway, observed as an increase in AKT phosphorylation upon release from GDC-0941. Sensitivity to ERK and MEK inhibition can be restored if GDC-0941 is retained in the media, which prevents PI3K/AKT pathway activation. Taken together, these findings may suggest maintenance therapy in the clinic for patients that become resistant to PI3K inhibitors.
Compounds and reagents

GDC-0941, erlotinib, G-573, and G-824 were all obtained from Genentech. Cetuximab was produced by ImClone Systems, Inc.
Politics and the natural resource curse: Evidence from selected African states

Abstract

This paper analysed political aspects of the resource curse in selected African states. The paper drew from the fact that the African region has extensive (untapped) natural resource deposits that, if utilised, can promote sustainable economic development. But despite the presence of these natural resources, the African region is poverty-stricken and still under-developed. Literature identified politics as among the factors that affect Africa's capacity to invest revenue from natural resources. Studies showed that there are close links between politics and the management of extractives. The study employed the PMG (ARDL) and the FMOLS estimators for the period 1995-2016. Results from the study showed a positive relationship between efficient functioning of government and resource rents. There is also evidence for causality running from efficient functioning of government to resource rents, and not vice versa. This shows that government performance is crucial for better performance in the natural resource extraction sector. Based on the findings, this study recommends that governments in African countries should improve on governance and the way public sector institutions function. Harnessing the weak and politically fragile public institutions is important in order to kick-start markets. The effective functioning of government institutions can be strengthened by eliminating corruption, stabilising property rights and investing in fiscal capacity.

PUBLIC INTEREST STATEMENT

This research looked at the political aspects of the resource curse in a few African countries. The article drew on the notion that Africa contains vast (untapped) natural resource deposits that, if exploited, can help the continent achieve long-term economic growth. Despite the abundance of natural resources, the African continent remains impoverished and underdeveloped. Politics has been highlighted as one of the issues affecting Africa's ability to invest earnings from natural resources, according to the literature. According to studies, politics and extractive management are inextricably linked. The study's findings revealed a favorable association between government efficiency and resource rents. This shows that government performance is crucial for better performance in the natural resource extraction sector.

Introduction and background to the study

Conventional wisdom suggests that revenue from natural resources should generate wealth and promote economic development. Natural resources and other sectors such as agriculture and the manufacturing sector normally serve as pillars in promoting economic development and growth. When used productively, proceeds from natural resources can be an essential catalyst for sustainable economic development. Natural resources are, in a way, a stock of natural capital (Davis & Tilton, 2005; Harvey, 2021; Muigua, 2020), and natural resource availability can thus be seen as a pool of wealth (Alhassan & Achaamah, 2021). Revenue from natural resources can be used to create a base from which economic growth can take off. The idea that natural resource wealth can support sustainable economic development is thus widely accepted. However, it must be noted that natural resources can bring negative outcomes. If there are no mechanisms that facilitate the transformation of natural resources into development outcomes, natural resources can be a curse (Henri, 2019; Sun et al., 2018).
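The abstract above names the pooled mean group (PMG) ARDL estimator for the 1995-2016 panel. As a rough, hedged illustration of the long-run logic behind such panel estimators, the sketch below fits a country-by-country ARDL(1,1) by OLS and averages the implied long-run coefficients; this is a mean-group style average rather than the actual PMG likelihood, the panel is synthetic, and the variable names (government effectiveness, resource rents) are placeholders for the study's series.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

def longrun_coef(y, x):
    """Fit ARDL(1,1): y_t = a + phi*y_{t-1} + b0*x_t + b1*x_{t-1} + e_t,
    then return the implied long-run effect (b0 + b1) / (1 - phi)."""
    Y = y[1:]
    X = sm.add_constant(np.column_stack([y[:-1], x[1:], x[:-1]]))
    a, phi, b0, b1 = sm.OLS(Y, X).fit().params
    return (b0 + b1) / (1.0 - phi)

# Synthetic panel: 10 hypothetical countries, 22 annual observations (1995-2016).
# x ~ government effectiveness, y ~ resource rents; true long-run effect = 0.3/(1-0.4) = 0.5.
coefs = []
for _ in range(10):
    x = rng.normal(size=22).cumsum()  # persistent regressor
    y = np.zeros(22)
    for t in range(1, 22):
        y[t] = 0.4 * y[t - 1] + 0.3 * x[t] + rng.normal(scale=0.5)
    coefs.append(longrun_coef(y, x))

print(f"mean-group long-run coefficient: {np.mean(coefs):.2f}")  # close to 0.5
```

The PMG estimator differs in that it constrains the long-run coefficients to be equal across countries while letting short-run dynamics vary; FMOLS instead corrects a static cointegrating regression for endogeneity and serial correlation.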
The way a society is structured and the policy-making process, in other words the political atmosphere, are of vital importance for the management of natural resources (Jamart & Rodeghier, 2009; Kayitare et al., 2015). Politics and government approaches concerning the management of natural resources are central to a sustainable society (Eckerberg, 2019). This shows that one of the most important mechanisms that supports resource extraction and management is the political environment. A stable political environment is crucial for resource extraction and growth. Political aspects such as the rule of law, effective institutions, an efficient public administration and absence of corruption enable a favourable business climate (Hussain, 2014). Proper political administration and natural resources can, under the right conditions, combine to create a virtuous circle of growth and prosperity. Empirical literature shows that resource-rich countries seem to be prone to bad government, a phenomenon scholars have termed "good economics, bad politics" (Lewin, 2011). Good economics implies adopting economic strategies that promote economic growth. Bad politics implies adopting political strategies that advance the interests of politicians at the expense of the citizens. Generally, the type of governance and existing political arrangement determine whether good economics is good politics (Agarwal, 2018). In African countries where government and state institutions are ineffective, politics rules and economic policy often goes astray. The compulsion of politicians takes precedence over economic policy, resulting in bad politics. Many African countries vest the ownership of natural resources in the government, and the government is also the sole recipient of the funds obtained from resource extraction. Lewin (2011) states that this concentration of resource funds with the government may lead to a number of problems such as corruption, rent seeking and efficiency losses that result from maladministration. This shows that, in developing countries, resource revenue may lead the government to engage in unproductive and inefficient behaviour. Instead of investing the resource rents in developmental programmes such as infrastructure projects and human capital, individuals in government channel much of the resources to activities that help them stay in power, and they also concentrate on projects that are not developmental (Baldwin, 2017). This, in many cases, consolidates the power of entrenched elites and regime supporters, sharpening income inequality and stifling political reform. In this way, it can be said that resource wealth worsens the quality of institutions, since it allows governments to pacify dissent, avoid accountability and resist modernization (Isham et al., 2003). Resource revenue flows facilitate corruption by making it easy for officials to siphon profits for personal gain. Revenues also generate staggering wealth that facilitates corruption and patronage networks (Sterwart, 2012). This is commonplace in Africa, where government funds are misappropriated by corrupt political elites. Africa has long been argued to suffer from a so-called "resource curse", where countries' natural resource endowments have not translated into positive economic growth (Chachu & Nketiah-Amponsah, 2021; Henri, 2019; Ziaba, 2020). In some cases, they have led to contraction.
Using natural resources to build sustainable economic development is the challenge faced by many African governments (Sudhir, 2011; Aragbonfoh, 2015). Weak local governance in several African states means that African communities suffer the most and receive few tangible benefits. For autocrats, what appears to be bad policy is often good politics (Robinson et al., 2006). Kayitare et al. (2015) concur and state that these policies are often based on political calculations for electoral victory rather than clear internal policies resulting from internal discussion. This is the case in Africa, where many countries have not yet achieved proper governance that can prevent dictatorship, corruption, and the resource curse (Crocker, 2019; Mbaku, 2020). This raises questions about how political factors affect natural resource extraction and management. What is the relationship between politics and natural resource management? This study, thus, seeks to examine the interrelationship between politics and resource extraction and management in selected African states. Although several studies have attempted to explore the factors that cause the resource curse in Africa, little effort has been made to test the relationship between political factors and natural resource management. Freehills (2020) notes that where the resource curse exists, it probably lies in the deeper political economy of institutions, rather than in economic management per se. The study argues that it is important to include political factors to analyze the resource curse in Africa and to observe to what extent and in what direction political factors affect resource extraction and management.

Literature review
There exists a huge literature on the interconnection between political factors and resource extraction and management. Political explanations of the resource curse can either be theoretical, institutional or a combination of the two. This study focusses on the following explanations: the rentier effect, the staple trap model, institution theory and dependency theory.

Rentier effect
The "rentier effect" posits that states that have access to natural resource revenues and other types of windfall gains are less reliant on tax revenues, which makes them less responsive to citizens' demands (Desierto, 2018). Building on a number of studies of what has become known as the "rentier state", Pritchett (2000) argues that revenue flows generated by the sale of resources increase the power of existing elites. This is commonplace in Africa, where governments fail to manage resource revenue. In several African states such as Sudan and the DRC, the revenue management process is filled with corruption and mismanagement, and this reduces the quality of institutions and the government itself (Kurečić & Seba, 2016).

Staple trap model
The term "staple" refers to the chief commodity produced by a region (Vahabi, 2017). A staple trap occurs when a country heavily relies on a single commodity for growth; this commodity becomes the staple for growth. While the economy may get caught in a "staple trap" due to its reliance on a "bad" staple, it can enjoy sustainable growth because of diversification induced by a "good" staple. The staple trap is the equivalent of a resource curse since it blocks the diversification process. By failing to make adequate investments in other sectors, countries can become vulnerable to declines in commodity prices, leading to long-run economic underperformance.
In Africa, several countries are dependent on a single commodity. For example, Zambia is dependent on copper (Crain, 2020), and the fall of copper prices has always affected Zambia's growth (Atanesyan, 2016). The staple trap model is also related to the Dutch disease, which hypothesises that a booming natural resource sector can lead to a decline in the development of other tradable sectors (Neo, 2009).

Dependency model
The capitalist world economic system is organized to ensure a perpetual domination of the periphery by the core and dependence of the periphery on the core, thereby ensuring a continual flow of economic surplus from the satellite/periphery to the metropolis/center (Eme, 2013). Dependency theory looks at the unequal power relations that have developed as a result of colonialism. In the colonial period, newly industrialized colonial nations expanded into areas that were unclaimed by other colonial powers (Ikwechukwu, 2013). The result was that the natural resources of less-developed nations were used to fuel the colonial nations' factories. The benefits of this system of relationships accrue almost entirely to the rich nations, which become progressively richer and more developed, while the poor nations, which continually have their surpluses drained away to the core, do not advance; rather, they are impoverished. This theory has also been applied to Africa. It is believed that Africa's poverty is not natural but an engineered position (Matunhu, 2011). Matunhu (2011) maintains that Africa is positioned to specialize in marketing raw materials, while the developed world markets finished products. Furthermore, the ownership of the means of production in the mining or natural resource extraction sector is in the hands of foreign investors. These foreign investors do not make significant investments in Africa. Rather, they take profits to their home countries. From a dependency perspective, repatriation of profits represents a systematic expatriation of the surplus values created by African labour using African resources.

Institution theory
Institutional economics is an extension of neoclassical theory, which results from cooperation between economists and political scientists studying the role of institutions in economic growth (Demissie, 2014). For natural resources and economic growth, institutional theorists argue that weak governments and corruption are major factors behind what is known as the natural resource curse phenomenon. In developing countries with weak institutions, such resources tend to be channeled, if not monopolized, through government, which then becomes corrupted, less responsive to the desires of citizens, and less interested in advancing policies and institutions that create wealth (Vásquez, 2011). Natural resource wealth quite easily can become a curse and cause many problems if the country suffers from certain weaknesses and if there is no pre-existing high-quality institutional framework (Kurečić & Seba, 2016). In African cases such as Angola, Nigeria and the former Zaïre, mineral and other natural resources have been linked to systemic corruption and the weakness of state institutions (Basedau, 2005; Kurečić & Seba, 2016). Fearon and Laitin (2003) and Fearon (2005) found that oil revenues play a huge role in weakening state capacity. In other words, their studies showed that oil revenues contributed to maladministration and mismanagement of state funds.
Besley and Persson (2010) also made an attempt to examine the effect of natural resource revenues on state capacity and conflict. Their study showed that natural resource-rich countries have a tendency to underinvest in state capacity formation, and this makes them susceptible to civil conflicts and instability. Hodler (2006) found that natural resource-rich countries tended to have low income when fractionalisation was high. Furthermore, it was seen that natural resource countries with high levels of fractionalisation tended to have weak property rights. Mehlum et al. (2006) argue that the quality of institutions in a country plays a crucial role in determining whether natural resource revenue will bring development or not. They claim that the presence of natural resources is associated with lower incomes when institutions are grabber friendly, while more resources increase aggregate income when state institutions are investor friendly. The study came to the conclusion that the quality of institutions conditions the resource curse, with poor-quality institutions leading to low development. Kayitare et al. (2015) note that political parties make promises to citizens through their electoral manifestos on how they will transform a country's natural resources into sustainable development. Unfortunately, in many countries, these promises are often based on political calculations for electoral victory rather than clear internal policies resulting from internal discussion. Robinson et al. (2006) conducted a study on the political foundations of the resource curse. Their research found that resource booms boost resource misallocation in the rest of the economy by increasing the value of being in power and providing politicians with more resources to utilize to influence election outcomes. According to the study, nations with institutions that promote accountability and state competence gain from resource booms because these institutions mitigate the skewed political incentives that such booms produce.

Empirical literature
Natural resource abundance is also associated with higher inequality levels (Gylfason & Zoega, 2003) and less political freedom, which then lead to poor growth and low economic development. Yartey claims that African countries that are natural resource abundant and also have weak institutions tend to be corrupt, and they also have high incidences of civil war (Guenther, 2008). Bhattacharyya and Cuaresma (2010) did an econometric study using panel data and revealed that the quality of institutions determined the relationship between resource abundance and corruption. The study concluded that resource revenue was correlated with corruption in states with poor institutions. Aslaksen (2011) did an econometric study that used a large data set and showed that the effect of natural resources on corruption and development was non-uniform across different resource types, as was its conditioning on the effectiveness of institutions. In particular, an improvement in democracy score lowers the negative effect of mineral wealth on corruption, but not the effect of oil on corruption. According to Oyinlola et al. (2015), there is a positive association between natural resource richness and economic growth, as well as a beneficial influence of political stability, rule of law, and voice and accountability on economic growth in African countries.
Furthermore, the relationship between natural resources and institutions demonstrated that economic progress is triggered by excellent governance in the presence of abundant natural resources, not by the simple quantity of resources. Alpha and Ding (2016) examined the influence of Mali's natural resource endowment on economic growth from 1990 to 2013 and found that natural resource exports had a favorable impact on economic growth. However, when natural resource exports are combined with corruption, economic growth suffers. According to Kaznacheev (2017), resource economies with quality political institutions manage their revenues and attain economic growth and social development more effectively than those with inadequate political institutions. According to Andersen and Aslaksen (2013), oil riches allow authoritarian dictators to stay in power longer. Kimberlite diamonds have a similar impact, according to their research, whereas alluvial diamonds and other minerals can shorten the lives of authoritarian leaders and parties. Natural resource rents, according to Bueno de Mesquita and Smith (2010), assist authoritarian leaders in both avoiding and surviving revolutions. Dunning (2008) proposes a different type of conditional influence, arguing that oil stifles democratization in countries with low levels of inequality while hastening it in countries with high levels of inequality by assuaging wealthy elites' fears that democracy will result in the expropriation of their private assets. The timing of the oil boom, according to Smith (2007), is a critical intervening variable. It is unlikely to promote stability if it occurs before an authoritarian government has created a strong governing party or coalition; if it occurs after the development of a strong ruling party or coalition, it is more likely to foster authoritarian stability. Arezki and Brückner (2011) did a study that looked at how oil revenue affected state performance and development in non-democratic states. The study showed that oil revenue was correlated with corruption, and this was particularly witnessed in countries that had high state participation in oil production. The study also showed that the effect of oil revenue on corruption was low or absent in countries where the oil industry was privately owned. Some studies have shown that more competitive electoral institutions promote greater transparency and accountability of public officials (Montinola and Jackman as cited in Mahdavi, 2019), while freedom of information laws and a free press can work to increase the probability and cost for public officials of getting caught engaging in corrupt behavior (Besley as cited in Mahdavi, 2019). Brollo et al. (2013) use a regression discontinuity design to examine the effects of transfers from the Brazilian federal government to local governments, concluding that a ten percent increase in these windfall-like transfers is associated with a ten to twelve percentage point increase in corruption detected by the federal government's random audit program. A second study of Brazilian municipalities (Caselli & Michaels, 2013) discovered that plausibly exogenous increases in oil revenues were associated with increased spending on public goods and services; however, much of this money went missing and was most likely absorbed by a combination of increased patronage and top-level embezzlement.
Kelley (2016) argued that bad institutions and associated dysfunctions are both the cause of the presence of an intensive natural resource sector and the cause of countries' political and economic underdevelopment. In the DRC, Shekhawat (2009) found that corruption continued to destabilize the economy and administration. The findings from the study showed that state resources were being siphoned off to fund election campaigns and private accounts. The study also found that in the DRC, "between 60 and 80 per cent of the customs revenues were estimated to be embezzled, a quarter of the national budget was not properly accounted for, and millions of dollars are misappropriated". The abuse of office for individual gain was noticed from clerical staff to the highest members of government. Henri (2019) investigated the institutional and economic indicators that are most negatively affected by natural resource rents in Africa. The results showed that the main institutional problems caused by natural resource rents are, in order: corruption; problems of rule of law or justice; inefficient public administration; bad regulation; lack of voice and accountability; and political instability. Natural resource rents also cause volatility of GDP per capita, leading to low levels of physical and human capital accumulation. The study concluded that African countries should promote good governance and diversify their economies. According to Alhassan and Achaamah (2021), the interplay between political system and resources reveals that democracy increases the favorable effects of natural resources on economic growth, while the results are mixed in the short run. Specific natural resource rents (oil, mineral, and forest rents) have a favorable impact on sectoral growth. Oil, mineral, and forest rents interacted with the political regime to drive agricultural expansion. The research found that a democratic system is essential for a country's successful resource use and long-term economic prosperity.

Data sources
This study makes use of secondary data. Information and statistics were sourced from World Bank publications and the Economist Intelligence Unit. The study used panel data, which spanned from 2000 to 2019. The study selected the following African countries: Angola, Central African Republic, DRC, Equatorial Guinea, Gabon, Libya, Nigeria, Sierra Leone, Sudan and Zimbabwe (US Committee On Foreign Affairs, 2013). These countries were chosen because they share some of the characteristics that resource-cursed countries exhibit. Common to many resource-rich countries are stagnant growth, poor social welfare indicators, high levels of poverty, inequality, and unemployment, and social anomie in the midst of extraordinary wealth (Sudhir, 2011). These characteristics are found in the countries selected in this study. Furthermore, all of these countries are aptly described as being "resource-cursed" (Aragbonfoh, 2015).

Model specification
This study adopts the model of Masi et al. (2017), who used panel methods covering the period 1981–2011 and 98 developing countries to test the relationship between resource rents, fiscal capacity and political institutions. The adoption of this model is appropriate for this study because the African countries under investigation are developing countries. A comparison with developing countries, who share Africa's poor economic status, is instructive.
Based on the model employed by Masi et al. (2017), the study developed the following regression model:

EG_it = β0 + β1 RR_it + β2 GDP_it + β3 PP_it + β4 PS_it + β5 VA_it + ε_it,

where EG is efficient functioning of government, RR is resource rents, GDP is gross domestic product, PP is political participation, PS is political stability, VA is voice and accountability, and ε_it is an error term. The description of the variables is presented in Table 1 below.

Estimation techniques
The study followed an estimation process that was done by Erkisi and Boga (2019) and Oyelami and Ogundipe (2020). The study subjected its data to several pre-tests in order to determine the correct estimation technique. The preliminary tests that were done are cross-dependence, unit root and cointegration tests. After the presence of cointegration was detected, the study proceeded to conduct a panel cointegration estimation using the PMG estimator. In addition, a substitute cointegration method (Fully Modified Ordinary Least Squares) was used to confirm the validity of the PMG estimator.

Unit root tests
The second step was to examine the stochastic characteristics of the data. Hence, the study conducted some unit root tests. The preferred techniques were the Levin, Lin and Chu and the Im, Pesaran and Shin tests. These are first-generation unit root tests. They were chosen after it was ascertained that there was no cross dependency in the sample.

Cointegration
After ascertaining the order of integration of the variables, possible cointegration among the variables must be checked. The reason for performing cointegration tests is to examine whether there is a long-run association amongst the variables. The Pedroni panel cointegration test and the Kao panel cointegration test were applied to test the cointegration among variables.

Causality
Apergis and Payne (2009) state that when variables in a model are found to have a long-run association (cointegration), it shows the possibility of causality. In a bid to ascertain whether or not there was a causal link between the variables, the procedure proposed by Dumitrescu and Hurlin (2012) for testing Granger causality in panel datasets was applied. This test is a simple extension of the standard Granger non-causality test to heterogeneous panels.

ARDL
After confirming that the variables are not integrated of an order equal to or greater than I(2) and that the series are cointegrated, the next step is to estimate the panel ARDL regression through a Pooled Mean Group (PMG) estimation. The ARDL model is a regression model that combines the Autoregressive (AR) and Distributed Lag (DL) models. An AR model is one where the dependent variable y_t is influenced by the variable itself in the past, y_{t-j} (Ardiansyah et al., 2021). The ARDL(p, q) model is specified by the following equation:

y_it = Σ_{j=1..p} φ_{i,j} y_{i,t-j} + Σ_{j=0..q} β'_{i,j} X_{i,t-j} + μ_i + ε_it,

where i = 1, 2, ... stands for the country and t = 1, 2, ... for the time period. The ARDL model has a reparameterization in error-correction (EC) form:

Δy_it = ϕ_i (y_{i,t-1} - θ'_i X_it) + Σ_{j=1..p-1} λ_{i,j} Δy_{i,t-j} + Σ_{j=0..q-1} δ'_{i,j} ΔX_{i,t-j} + μ_i + ε_it.

The parameter ϕ_i = -(1 - Σ_{j=1..p} φ_{i,j}) is the error-correcting speed of adjustment term, which captures the speed of adjustment for any deviation from the long-term relationship. The value of this parameter is expected to be significantly negative under the prior assumption that the variables show a return to long-term equilibrium (Smolović et al., 2020). In the case of a zero value, there would be no evidence of the existence of a long-term relationship.
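To make the estimation strategy concrete, the sketch below fits the error-correction form above country by country with OLS and averages the coefficients, i.e. a Mean Group simplification of the PMG estimator (the PMG additionally pools the long-run coefficients θ across countries by maximum likelihood). The synthetic panel, the ARDL(1,1) lag order and all variable names are illustrative assumptions, not the paper's actual data or specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def ardl_ec_ols(y, X):
    """OLS on the ARDL(1,1) error-correction form
    dy_t = a + phi*y_{t-1} + c'X_{t-1} + d'dX_t + e_t.
    Returns the adjustment speed phi and the long-run coefficients theta = -c/phi."""
    dy = np.diff(y)
    dX = np.diff(X, axis=0)
    Z = np.column_stack([np.ones(len(dy)), y[:-1], X[:-1], dX])
    beta, *_ = np.linalg.lstsq(Z, dy, rcond=None)
    k = X.shape[1]
    phi = beta[1]
    theta = -beta[2:2 + k] / phi
    return phi, theta

# Synthetic panel: 8 countries, 60 periods, common long-run relation
# y = 1.5*x1 - 0.8*x2 with heterogeneous adjustment speeds phi_i.
N, T, theta_true = 8, 60, np.array([1.5, -0.8])
phis, thetas = [], []
for i in range(N):
    X = np.cumsum(rng.normal(size=(T, 2)), axis=0)   # I(1) regressors
    y = np.zeros(T)
    phi_i = -rng.uniform(0.2, 0.5)                   # country-specific EC speed
    for t in range(1, T):
        y[t] = y[t - 1] + phi_i * (y[t - 1] - X[t - 1] @ theta_true) \
               + rng.normal(scale=0.3)
    phi, theta = ardl_ec_ols(y, X)
    phis.append(phi)
    thetas.append(theta)

# Mean-group averages; PMG would instead pool theta across countries by MLE.
print("mean phi  :", np.mean(phis))            # negative: return to equilibrium
print("mean theta:", np.mean(thetas, axis=0))  # close to [1.5, -0.8]
```

The sign and significance of the averaged ϕ play the same diagnostic role as in the paper: a significantly negative value indicates reversion to the long-run relationship, while a value near zero would indicate no long-run relationship.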
FMOLS
After confirming the long-run equilibrium relationship between variables by the cointegration test, the long-run coefficients are estimated by the Fully Modified Ordinary Least Squares (FMOLS) estimation technique. The study used the FMOLS because it corrects the inconsistencies that are caused by endogeneity and serial correlation of the regressors (Burdisso & Sangiácomo, 2016). Furthermore, the FMOLS technique can also control for a number of problems: it handles measurement errors, eliminates sample bias, corrects for serial correlation, and allows for heterogeneity of the long-run parameters (Afawubo & Couchoro, 2017). It can be expressed as

β̂_FMOLS = (Σ_t x_it x'_it)^(-1) (Σ_t x_it y⁺_it - T Δ̂⁺),

where Δ̂⁺ represents the serial correlation correction term. To overcome endogeneity, y_it is transformed into y⁺_it.

Correlation
The data were first tested for correlation amongst the independent variables. This was done to test if there is multicollinearity in the independent variables. Results are shown in Table 2 below. The correlation results indicate that there was no strong association amongst the independent variables that were used in the study. This may be an indication that there is no multicollinearity in the independent variables.

Cross dependence test
The cross-dependence test was performed and the results are shown in Table 3 below. The results show that the null hypothesis of no dependency cannot be rejected. This is shown by the p-values, which are higher than 0.05. Non-rejection means that the residuals are not cross-sectionally dependent.

Unit root
This study conducted some stationarity tests to check whether the statistical properties of the series vary with time. The Levin, Lin and Chu and the Im, Pesaran and Shin tests were used. Results are shown in Table 4 below. Results from the unit root tests show that PP, PS and EG were stationary at levels, while GDP, VA and RR have a unit root. This mix of integration orders implies that there might be a long-run relationship, which prompted the study to perform a cointegration test in order to examine whether (or not) there was a long-run association amongst the variables.

Cointegration test
To test whether there is a long-run relationship between variables in the data, the Pedroni panel cointegration test and the Kao panel cointegration test were used. Results show that there is cointegration amongst the variables. The Group ADF statistic, Group PP statistic, Panel ADF statistic and Panel PP statistic all point to cointegration; their p-values are below 0.05, which implies the rejection of the null hypothesis of no cointegration. This shows that the variables are cointegrated. In order to confirm the results from the Pedroni test, the Kao panel cointegration test was performed; Table 6 shows the results. The Kao cointegration test confirmed the existence of a cointegrating relationship among the variables. This upholds the Pedroni test results and validates that there is a long-run association amongst the variables. The next step was to test the causality between the variables using the Dumitrescu and Hurlin (2012) procedure for testing Granger causality, and then to estimate the long-run coefficients using the Fully Modified Ordinary Least Squares (FMOLS) estimation technique.
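The paper does not state which software implemented the causality test; the following is a minimal numpy/scipy sketch of the asymptotic Z-bar statistic of Dumitrescu and Hurlin (2012), assuming a balanced panel and a common lag order K. The demo data are synthetic and purely illustrative.

```python
import numpy as np
from scipy import stats

def dh_zbar(panel_y, panel_x, K=1):
    """Dumitrescu-Hurlin (2012) panel Granger non-causality test (asymptotic Z-bar).
    panel_y, panel_x: (N, T) arrays. H0: x does not Granger-cause y for any unit."""
    N, T = panel_y.shape
    walds = []
    for y, x in zip(panel_y, panel_x):
        Y = y[K:]
        # regressors: constant, K lags of y, K lags of x
        Z = np.column_stack([np.ones(T - K)]
                            + [y[K - j - 1:T - j - 1] for j in range(K)]
                            + [x[K - j - 1:T - j - 1] for j in range(K)])
        beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
        resid = Y - Z @ beta
        sigma2 = resid @ resid / (len(Y) - Z.shape[1])
        cov = sigma2 * np.linalg.inv(Z.T @ Z)
        b = beta[1 + K:1 + 2 * K]                        # coefficients on the x lags
        V = cov[1 + K:1 + 2 * K, 1 + K:1 + 2 * K]
        walds.append(float(b @ np.linalg.solve(V, b)))   # unit-level Wald statistic
    wbar = np.mean(walds)
    zbar = np.sqrt(N / (2.0 * K)) * (wbar - K)           # standardised average statistic
    return zbar, 2.0 * (1.0 - stats.norm.cdf(abs(zbar)))

# Demo on a synthetic panel in which x Granger-causes y in every unit.
rng = np.random.default_rng(0)
N, T = 10, 50
x = rng.normal(size=(N, T))
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = 0.3 * y[:, t - 1] + 0.8 * x[:, t - 1] + rng.normal(size=N)
print(dh_zbar(y, x))   # large Z-bar, p-value near zero: reject non-causality
```

Running the test in both directions (x on y, then y on x) is what allows the unidirectional conclusions reported next.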
Causality and ARDL results
Results from the Dumitrescu and Hurlin (2012) test for Granger causality in panel datasets are shown in Table 7. Results show that there is a unidirectional link between EG and GDP: the relationship moves only from EG to economic growth and not vice versa. The same can be said for PP and EG; the link is unidirectional in that it moves only from EG to political participation and not vice versa. For PS and EG, the relationship is a unidirectional link in that it moves only from political stability to EG and not vice versa. The same can be said for VA and EG: the relationship is unidirectional because it moves only from VA to EG and not vice versa. Last but not least, there is evidence for causality running from EG to RR. The Granger causality test results indicate that EG (efficient functioning of government) causes resource rents (RR). Tables 8 and 9 show the long-run elasticities.

The ARDL results in Table 8 show that there is a positive relationship between GDP and EG. This link is, however, a unidirectional link in that it moves only from EG to economic growth and not vice versa. Kaufmann, as cited in Mira and Hammadache (2017), states that the existence of reverse causality, from income levels to governance, is feasible if states with high incomes adopt good governance policies, improving state institutions, government efficiency, rule of law and control of corruption. Growth may increase confidence in the public workforce, and this encourages the public workforce to support government policies and work for the larger good. When the government is performing better, the economy is likely to perform well. The performance of state institutions ensures that investor-friendly policies are adopted and businesses are promoted. This boosts domestic production, consequently improving economic growth. The results resonate with country experiences in the region (Cain, 2015; Muronzi, 2019; Zhou, 2021). Mismanagement of state resources has placed the DRC among a group of fragile states with poor economic growth (Henze et al., 2020; Lee-Jones, 2020). In Sudan, corrupt actors effectively captured all aspects of policymaking and all areas of the public service, resulting in poor economic performance (Ahmed, 2021; Ardigo, 2020). Other economies such as Equatorial Guinea, Sierra Leone and Gabon have also been affected by mismanagement and poor governance (US Committee On Foreign Affairs, 2013; Human Rights Watch, 2017; Javed, 2020).

Results show that political participation has a positive effect on the efficient functioning of government. This is a reasonable outcome. This link is, however, a unidirectional link in that it moves only from EG to political participation and not vice versa. When there is political participation, the public can act as a check and balance and voice their concerns against government underperformance (Menocal, 2014; Rakabe, 2019). Furthermore, public political participation can allow the government to listen and take into consideration the views of the public when making and implementing policy.

Results show a negative relationship between political stability and the efficient functioning of government. This link is, however, a unidirectional link in that it moves only from political stability to EG and not vice versa. This is a reasonable outcome. When a country is politically unstable, we expect the functioning of government to be poor; when there is instability, the government is likely to be unstable as well. The establishment of a politically stable environment is crucial for good governance and government effectiveness. Political stability is a fundamental prerequisite that the state must ensure for the efficient functioning of government. The findings of the study are consistent with empirical literature.
African countries are faced with a problem of poor state performance and corruption, and this also leads to political instability and civil war (Adefeso, 2018). Abu et al. (2015) and Khan and Farooq (2019) also report the negative effects of political stability on government effectiveness and development. Results show a negative relationship between the efficient functioning of government and voice and accountability. This link is, however, a unidirectional link in that it moves only from VA to EG and not vice versa. When the government is not accountable to anyone, it will have no incentive to perform well. When government officials and those in power are not answerable to anyone, they will do as they please and government performance will decrease. When citizens have the freedom to express themselves and are able to hold the government to account, the government is likely to be efficient. In an IGC-sponsored study, Dasgupta (2016) concurs and shows that democratically mobilised communities might be able to put more pressure on their elected representatives and ensure better delivery of services: "democratically mobilised" villages, characterised by extensive civic engagement in the activities of the village council, place greater pressure on local leaders and the higher-level politicians to whom they are connected to deliver services. Earlier studies, such as that of Brewer et al. (2007), have likewise suggested that accountability matters for government performance.

Results show a positive relationship between the efficient functioning of government and resource rents. There is evidence for causality running from EG to RR: the Granger causality test results indicate that EG (efficient functioning of government) causes resource rents (RR). This is quite a reasonable outcome. The efficient functioning of government shows that government and its institutions are important in determining the overall performance of the economy, including resource extraction. When government performs for the general good, it attracts more investment. The reverse is true; patronage politics distorts the economy and diverts investment away from more productive sectors. Some studies have found that countries with proper governance structures are able to attain and sustain high growth rates. Basedau (2005) finds some empirical support for the idea that institutions are particularly important in the context of natural resources but does not investigate which institutions are important. Furthermore, the results are in line with Wiens (2014) and Masi et al. (2017), who revealed that resource abundance does not lead to worse development outcomes if a country has the "right" institutions. African countries have suffered because of political reasons. For example, Zimbabwe's lack of effective regulatory mechanisms and the inability to effectively monitor key mining activities and rein in illegal mining have reduced output in volume terms, in addition to wiping away Zimbabwe's competitive advantage compared to its neighbouring states (Institute for Security Studies, 2020; Mahonye & Mandishara, 2015). In the Central African Republic, political manipulation coupled with violent conflict among different armed groups has stunted economic development, despite the country's rich natural resources, with at least 470 recorded mineral occurrences (Richiello, 2018). Nigeria, Equatorial Guinea and the DRC also face similar problems (Freehills, 2020). Aspirant autocrats use natural resource rents to accumulate power for themselves (Harvey, 2021).
It can thus be said that Africa's resource curse is largely the result of a leadership deficit in the countries concerned (Fabricius, 2017). Few African countries have taken a departure from this path. For example, Botswana is commonly cited as a deviant case of the natural resource curse (Durns, 2014; Iimi, 2006; Pegg, 2012; Sebudubudu & Mooketsane, 2016). The short-run results are displayed in Table 10. They show that the ECM coefficient is negative and significant, which confirms that there is a cointegrating relationship between the dependent variable and the regressors. Results further show that GDP, political participation, resource rents and voice and accountability have a positive association with the efficient functioning of government over the short term, while political stability impacts the efficient functioning of government negatively over the short run.

Conclusion and Recommendations
This paper analysed political aspects of the resource curse in selected African states. The paper drew from the fact that the African region has extensive (untapped) natural resources deposits that, if utilised, can promote sustainable economic development. But despite the presence of these natural resources, the African region is poverty-stricken and still under-developed. Using natural resources to build sustainable economic development is a challenge faced by many African governments. Literature identified politics as among the factors that affect Africa's capacity to invest revenue from natural resources. Studies have shown that there are close links between politics and the management of extractives. Resource-rich countries with fragile democratic institutions tend to have weak economies: unable to control corruption and manage revenues wisely, their governments are unable to capture the benefits. Results from the study showed a positive relationship between efficient functioning of government and resource rents. There is also evidence for causality running from efficient functioning of government to resource rents and not vice versa. This shows that government performance is crucial for better performance in the natural resource extraction sector. Based on the findings, this study recommends that governments in African countries should improve on governance and the way the public sector institutions function. This study argues that resource flows from extractive industries can be a lifeline for poor African countries, helping to fund growth and development needs. However, in order to capture the benefits of natural resource abundance, African countries need to develop their governments. Harnessing the weak and politically fragile public institutions is important in order to kick-start markets. The effective functioning of government institutions can be strengthened by eliminating corruption, stabilising property rights, and investing in fiscal capacity.
Clinical and microbiological spectrum of external ventricular drain related infections (EVDRIs) from a tertiary care center

Background and Objectives: Insertion of an external ventricular drain (EVD) is a common and important lifesaving procedure that can lead to morbidity and mortality. This study was conducted to assess the infection rate, risk factors, causative organisms, and outcome of EVDs.
Materials and Methods: A prospective study was undertaken in a tertiary care centre from August 1st, 2019 to October 31st, 2020. A total of 192 patients underwent insertion of EVDs in the neurosurgical intensive care unit. CSF samples were collected in sterile containers and transported to the laboratory.
Results: A total of 214 EVDs were inserted in 192 patients for 691 days. The median duration of EVD in situ and the mean time between catheter insertion and onset of infection were 14.5 days and 8 days, respectively. The EVD-related infection rate was 19.4 per 1000 EVD days. The most common indication for EVD insertion was tumors (55%), followed by hydrocephalus (40%). We identified 25 patients out of 192 (12%) who had clinical signs and symptoms with deranged CSF counts. A total of 13/25 (52%) specimens were culture positive, out of which 10 (76.9%) were Gram negative pathogens and 3 (23%) were Gram positive pathogens; 3/10 (30%) of the Gram negative pathogens were multidrug-resistant organisms (MDROs).
Conclusion: It was observed that longer duration of catheter in situ was an important risk factor for EVD-related infections (ERIs), as was a higher frequency of CSF sampling. A proper EVD infection prevention and control protocol must be followed in the form of a checklist at the time of EVD insertion.

INTRODUCTION
Ventriculostomy catheters, also known as external ventricular drains (EVDs), are frequently used in neurosurgery to monitor and relieve intracranial pressure (1). Insertion of an EVD is a lifesaving procedure: patients with various types of acquired brain injury, such as intracranial haemorrhage with intraventricular extension, subarachnoid haemorrhage, traumatic brain injury, and bacterial meningitis, may benefit from EVD insertion. Many of these conditions are associated with raised intracranial pressure (ICP) above 20 mmHg due to obstruction of cerebrospinal fluid (CSF) outflow (2). An EVD also provides a means of monitoring and controlling elevated intracranial pressure (ICP), especially in head trauma. In fact, the EVD is the gold standard for ICP monitoring. Insertion of an EVD is perhaps one of the most common neurosurgical procedures performed worldwide. However, patients with these surgically implanted foreign bodies are at risk of developing drain-related infections such as ventriculitis and meningitis, which may result in significant morbidity and even mortality if not treated appropriately (3). EVDs are associated with very high rates of infection, with estimates of the incidence of EVD infection typically ranging from about 5% to 20% (4). EVD-related infection (ERI) is a significant complication that can lead to increased morbidity, prolonged stay and increased healthcare costs. Risk factors that have been associated with EVD infection include duration of EVD placement, cerebrospinal fluid (CSF) leak, frequency of CSF sampling and underlying systemic infection.
Efforts to reduce ERI risk have included the introduction of EVD care bundles, the use of perioperative or continuous prophylactic antibiotics and the development of antimicrobial-impregnated catheters (5). The present study was undertaken to assess the rate of infection and risk factors, observe the trend of acquisition of pathogenic organisms, and study the outcome of EVD-related infections.

MATERIALS AND METHODS
In this prospective study we included all patients having a positive CSF culture while the EVD was in place, an abnormal CSF analysis result, or positive blood cultures in the presence of neurological symptoms, who were admitted under the neurosurgery department with a diagnosis of intracranial infection over a period of 15 months (August 1st, 2019 to October 31st, 2020). We considered only tests that were performed while the EVD was in place and up to three days after removal. Patients who had infection prior to EVD placement or a permanent ventriculoperitoneal (VP) shunt were excluded. All catheters were inserted under sterile conditions in the operating theatre using a tunnelled procedure technique and a closed system for drainage. There was no policy of routine CSF sampling during the study period; CSF samples were sent for culture and sensitivity only if there were signs and symptoms of infection. Data were obtained by direct interview and observation of the patient, review of the medical case sheets and electronic medical records. CSF cultures were repeated if the initial results were positive, so as to rule out the possibility of contamination.

Microbiological workup. CSF samples were sent to the microbiology laboratory for direct examination, Gram stain, and culture. Culture was performed on blood agar and chrome agar (Biomerieux, France) and plates were incubated at 37°C for 18-24 hours. Identification and susceptibility testing were done using the automated Vitek 2 compact system. Gram negative pathogens were identified using the ID GN and AST N281 panels, whereas Gram positive pathogens were identified using the ID GP and P628 panels.
We defined ventriculostomy-related infection based on the CDC-NHSN definition. Meningitis or ventriculitis must meet at least one of the following criteria:
1. Patient has organism(s) identified from cerebrospinal fluid (CSF) by a culture or non-culture based microbiologic testing method which is performed for purposes of clinical diagnosis or treatment (for example, not Active Surveillance Culture/Testing (ASC/AST)).
2. Patient has at least two of the following:
• Fever (>38.0°C) or headache
• Meningeal sign(s)
• Cranial nerve sign(s)
And at least one of the following:
• Increased white cells, elevated protein, and decreased glucose in CSF (per reporting laboratory's reference range).
• Organism(s) seen on Gram stain of CSF.
• Organism(s) identified from blood by a culture or non-culture based microbiologic testing method which is performed for purposes of clinical diagnosis or treatment.
In the statistical analysis, continuous variables were described with medians and categorical variables were described as percentages. The incidence rate of EVD-related infection (ERI) was calculated.
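As a compact summary of how a case would be classified under the definition above, the sketch below renders the two criteria as a boolean function. It is a simplified illustration, not the full NHSN protocol, which carries additional caveats on timing and test types.

```python
def meets_nhsn_ventriculitis(csf_culture_positive: bool,
                             fever_or_headache: bool,
                             meningeal_signs: bool,
                             cranial_nerve_signs: bool,
                             abnormal_csf_findings: bool,
                             csf_gram_stain_positive: bool,
                             blood_culture_positive: bool) -> bool:
    """Simplified boolean rendering of the two CDC-NHSN criteria listed above."""
    # Criterion 1: organism identified from CSF.
    if csf_culture_positive:
        return True
    # Criterion 2: at least two symptoms AND at least one supporting finding.
    symptoms = sum([fever_or_headache, meningeal_signs, cranial_nerve_signs])
    supporting = (abnormal_csf_findings or csf_gram_stain_positive
                  or blood_culture_positive)
    return symptoms >= 2 and supporting

# Example: fever plus meningeal signs with deranged CSF counts meets criterion 2.
print(meets_nhsn_ventriculitis(False, True, True, False, True, False, False))  # True
```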
RESULTS
A total of 207 CSF samples were received from the neurosurgical department during the period of study; 192 of the 207 patients had undergone EVD insertion, of whom 25 (12%) had clinical signs and symptoms of infection post EVD insertion. A total of 13/25 (52%) specimens were CSF culture positive, out of which 10 (76.9%) were Gram negative pathogens and 3 (23%) were Gram positive pathogens; 3/10 (30%) of the Gram negative pathogens (Table 1) were multidrug-resistant organisms (MDROs). 12/25 (48%) were sterile. CSF protein, glucose and WBC counts were analyzed for those showing clinical infection. Other samples sent for culture and sensitivity, such as blood, urine, EVD tip, tracheal aspirate and pus, were also observed for any bacteriological growth. Regarding the demographic data, out of the total 192 patients who underwent EVD insertion, 94 (49%) were males and 98 (51%) were females, and among the 25 patients who had clinical signs and symptoms, 14 (56%) were females and 11 (44%) were males. The median age was 22.5 years for the patients with EVD. 214 EVDs remained in situ for a total of 691 days (Table 2). For the 25 patients, external drainage was continued for a median of 14.5 days. Mean time between catheter insertion and onset of infection by Gram negative bacteria was 8 days. The most common indication for EVD insertion was tumours (55%), followed by hydrocephalus (40%) and post-traumatic subarachnoid haemorrhage (5%). Among the tumours, the most common cause was pituitary adenoma, accounting for about 32% of the cases (n=8), followed by pineal gland tumours (n=5), gliomas (n=3), vestibular schwannomas (n=3), rhabdomyosarcoma (n=1) and a postoperative case of atypical meningioma with pseudomeningocoele (n=1) (Table 3). CSF counts were done; 18 (72%) of 25 had protein levels raised above the 15-45 mg/100 mL reference range. Two showed low levels of CSF glucose, while 10 had higher levels of glucose in CSF. 36% (n=9) of the patients were given antibiotic prophylaxis, and all of them were given an antibiotic cover of injection Magnex forte (cefoperazone-sulbactam) 1.5 g twice and injection amikacin 750 mg twice intravenously before the procedure. Antibiotic prophylaxis was continued post EVD insertion with the same antibiotics for 3 weeks. The most common organism isolated was Klebsiella pneumoniae (n=3), followed by Pseudomonas aeruginosa (n=2) and Acinetobacter baumannii. Among the 25 patients, 5 patients expired, of whom three were CSF culture positive (two with Klebsiella pneumoniae and one with Acinetobacter baumannii) and one was CSF sterile and blood culture positive (Elizabethkingia meningoseptica). One of the patients was epileptic and died due to cardiac arrest within 11 days of admission. The mortality rate was thus 20% (5/25). A total of 214 EVDs were inserted in 192 patients for a period of 691 days. The EVD-related infection rate (ERI) was calculated as 19.4 per 1000 EVD days.
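The device-day rate is a simple ratio, illustrated by the sketch below. The paper does not state the exact numerator it used; its 13 culture-positive episodes over 691 catheter-days give roughly 18.8, close to the reported 19.4, which presumably reflects the authors' exact episode count.

```python
def rate_per_1000_device_days(n_infections: int, device_days: int) -> float:
    """Device-associated infection rate per 1000 device-days."""
    return 1000.0 * n_infections / device_days

print(rate_per_1000_device_days(13, 691))  # ~18.8 per 1000 EVD-days
```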
DISCUSSION
Intraventricular catheters (IVCs) are vital neurosurgical diagnostic and therapeutic tools that provide for continuous intracranial pressure monitoring and external CSF drainage. The incidence of EVD-related infection in the present study (per patient) was 13%, compared with 18.3% in a study by Camacho et al. (6, 7). The current notion is that EVD-related infections result from either inoculation of pathogens during EVD placement and/or contamination and colonization of the EVD system during the postoperative period. Postoperative colonization can arise either from endogenous organisms present on the skin, which spread along the intracutaneous tract, or from exogenous organisms introduced into the EVD system during its manipulation by healthcare workers. Endogenous infections might be prevented by using antimicrobial-coated EVD catheters, which may decrease bacterial colonization and thus prevent infection (1). In the present study only plain catheters were used. In our study, 76.9% of the infections were caused by Gram-negative bacteria; this was similar to a study by Camacho et al., where 77% of infections, and Lyke et al., where 82% of infections, were caused by Gram negative microbes (7, 8). Camacho et al. (7) showed that the mean time between catheter insertion and onset of infection by Gram negative bacteria was 9 days; in the present study it was 8 days. The most common microorganism identified was Klebsiella pneumoniae, similar to a study by Lyke et al., while coagulase negative staphylococci were the most common bacteria identified by Jamjoom et al. (34.5%) and Hagel et al. (62%). We hypothesize that Gram negative bacterial infection could be due to prolonged hospitalisation. Two of the thirteen EVD-related early infections were caused by coagulase-negative staphylococci (CONS), which may have arisen from direct inoculation during manipulation of the EVD by healthcare workers. In a study by Lyke et al., 3 of 12 patients had CSF leaks which resulted in cerebral ventriculitis (8). This is because breaks in the integrity of the closed catheter system increase the rate of infection. In the present study 2 of 25 patients had CSF leaks, but no ventriculitis was documented. Several studies examined the relationship between concurrent systemic infections and EVD-related infection, showing that concurrent systemic infections are a risk factor for EVD-related infection (9). Some of the studies have observed that infection at other sites can increase the risk of central nervous system infection (10). Three of the 25 patients (12%) had concomitant systemic infection, where the same organism was observed in other samples from the patients. One of the 25 patients with ventriculitis had ventilator-associated pneumonia (VAP), probably due to translocation or dissemination of infection into the lungs. In a study by Kim et al., concomitant systemic infection existed in 4.7% of patients with ventriculitis (11). It has been estimated that if patients present with pulmonary infection during EVD catheter placement, the pathogens are probably translocated to the surrounding environment through endotracheal intubation, which might contaminate the EVD system during drain insertion and manipulation, thereby leading to ventriculitis. Moreover, concomitant infection at other sites probably reduces the immunity and resistance of the patients, thereby increasing the potential risk of infection to these patients (12). The duration of EVD in situ was identified as a risk factor for EVD-related infection by multiple studies. We found that patients developed infection after a mean duration of 8 days after EVD insertion, similar to the study by Jamjoom et al. In earlier studies, routine collection of cerebrospinal fluid specimens was shown not to be associated with the risk of EVD-related infection (13). An EVD should be replaced if the patient develops infection when it has been inserted for more than 10 days (12). Other scholars have suggested that the duration of EVD catheter retention is not correlated with the risk of EVD-related infection (14), while some authors have suggested that repeated catheter insertion to decrease the drainage time can instead increase the risk of infection (15, 16).
It has been observed in previous studies that patients' age, gender and primary disease are not associated with the risk of EVD-related infection (17). Preventive antibiotics were not routinely used for all patients requiring an EVD; only 36% were given pre-procedural antibiotics. If the patients were diagnosed with ventriculitis, EVD insertion combined with intravenous administration of antibiotics can yield high clinical efficacy (18). In the present study a few patients were treated with perioperative antibiotics, namely injection Magnex forte 1.5 g intravenously and injection amikacin 750 mg OD intravenously. To decrease EVD-related infections, intravenous antibiotics are commonly administered to cover normal skin flora (19). However, some authors suggest antibiotic prophylaxis may be a reason for the development of resistant organisms, or of much more morbid Gram-negative ventriculitis (20). Hence, perioperative use of antibiotics at, and for a short duration after, EVD insertion, or continuation for the duration of drainage, was followed (19). In our study all plain EVDs were used and none were coated with antimicrobials, so we could not compare the two types of catheter, and antibiotic cover was given to only six patients. Probably, high frequency of CSF sampling and longer duration of EVD in situ were associated with the EVD infections in the present study. Henceforth, routine testing of samples from other sites is also necessary to rule out concurrent systemic infection.

CONCLUSION
ERIs remain a serious complication of EVD use in neurosurgical units. Patients with an EVD left in situ for ≥8 days and who underwent more frequent sampling had a higher risk of infection. Antimicrobial-coated EVD catheters may decrease bacterial colonization and thus prevent infection. The EVD catheter should be electively changed whenever a longer duration of CSF drainage is required. There is an urgent need to introduce an EVD bundle care approach, including proper hand washing, use of a full-body drape, sterile gloves, gown, cap, mask and chlorhexidine skin preparation during the EVD insertion procedure, to reduce the risk of EVD infections. Strict asepsis is advised during insertion, handling and exchange of EVDs to prevent infections, and proper wound care needs to be given by adhering to infection control practices. Therefore, it is suggested that prophylactic measures against drain-related infections should be developed and implemented at the earliest. Maintenance, troubleshooting, and monitoring for EVD-associated complications have essentially become a critical responsibility to prevent untoward events.
Research on GPS Receiver Autonomous Integrity Monitoring Algorithm In the Occurrence of Two-satellite Faults

Reliability is an essential factor for a GPS navigation system; therefore, integrity monitoring is considered one of the most important parts of a navigation system. The GPS receiver autonomous integrity monitoring (RAIM) technique can detect and isolate faulty satellites. Based on the particle filter, a novel RAIM method is proposed to detect two-satellite faults in the GPS signal by using a hierarchical particle filter. It can deal with any system nonlinearity and any noise distribution. Because GNSS measurement noise does not follow the Gaussian distribution perfectly, the particle filter can estimate the posterior distribution more accurately. In order to detect faults, a consistency test statistic is established through the cumulative log-likelihood ratio (LLR) between the main and auxiliary particle filters (PFs). Specifically, an approach combining the PF with the hierarchical filter is used in the processing of two-satellite faults. Through real GPS measurements, the performance of the proposed GPS two-satellite fault detection algorithm is illustrated, and some simulation results are given to evaluate the integrity monitoring performance of the algorithm. Validated by the real measurement data, the results show that the proposed algorithm can successfully detect and isolate the faulty satellites in the case of non-Gaussian measurement noise.

INTRODUCTION
Integrity of a global navigation satellite system (GNSS) is important for safety-critical applications, such as aircraft and missile applications. With the development of GNSS and the increasing requirements for satellite navigation and positioning performance, integrity monitoring has become an inseparable part of GNSS. Integrity monitoring must be able to detect and exclude faulty satellites that could cause risks to the accuracy and reliability of GNSS positioning, so that GNSS receivers can operate continuously without any degradation in performance [1]. Because it takes a long time for satellite fault monitoring to raise an alarm through the satellite navigation system itself, usually 15 minutes to a few hours, system-level monitoring cannot meet the demands of air navigation. As a result, receiver autonomous integrity monitoring (RAIM), which monitors satellite faults rapidly at the receiver, has been researched extensively. At present, with the development of multiple GNSSs, there is a need for RAIM to identify multiple outliers. Multiple outliers are more frequent due to the additional effects of non-line-of-sight multipath [2][3]. Therefore, RAIM needs to be able to detect and exclude multiple biases. It is difficult to detect simultaneous multiple faults using conventional snapshot RAIM algorithms, and therefore various filter algorithms have been studied for reducing the measurement noise level so that a GNSS receiver can estimate its position more accurately and reliably [4]. However, the Kalman filter, for example, presumes that the measurement error follows a Gaussian distribution, and its performance can degrade if this assumption is not correct. Because GNSS measurement error does not follow a Gaussian distribution perfectly [5], a Kalman filter will use an inaccurate error model that may cause performance degradation. Particle filters have been researched over the last few years as an alternative for solving nonlinear/non-Gaussian problems, and the particle filter for fault detection has been widely used [6][7].
Based on the particle filter, a two-satellite fault detection and isolation algorithm is designed, yielding a new integrity monitoring algorithm for RAIM that uses a hierarchical particle filter. The proposed algorithm estimates the distribution of the measurement residual from the posterior density and detects large residuals subject to a required false alarm rate. With a non-Gaussian measurement error, the algorithm can estimate the distribution of the state more accurately. The work focuses on the effect of a non-Gaussian error distribution of the GPS measurement on integrity monitoring. The paper is organized as follows. First, the theory of the particle filter is briefly reviewed. Then the general scheme of the approach is presented, followed by a hierarchical particle filtering, log-likelihood ratio (LLR) based approach to fault detection and isolation (FDI), and the consistency test statistic is derived and established. The next section describes the system and measurement equations of the GPS receiver. Finally, GPS receiver autonomous integrity monitoring and its usefulness are presented through numerical simulation and experiment.

PARTICLE FILTER ALGORITHM In this section, the principle of the PF algorithm is given. The particle filter is a method based on the sequential Monte Carlo method and sequential importance sampling (SIS). It handles state estimation for nonlinear and non-Gaussian systems well by drawing samples from the probability density function (PDF) in the state space. These samples are called particles. Each particle carries an assigned weight, and the distribution of the state variable is then approximated by a discrete distribution over the particles, with the probability assigned to each particle proportional to its weight. The particles are random samples from the prior PDF, and as the number of particles increases, a good approximation to the required PDF is effectively provided. Through the system state equation and measurement equation, the set of samples approximating the random Bayesian estimate of the nonlinear system can be predicted and updated. Gordon first proposed a PF algorithm of this kind, known as SIR (sampling importance resampling) [8][9]. At present, the particle filter is widely used in location tracking, robot localization, signal estimation and detection, speech recognition and enhancement, dynamic system fault detection, and satellite navigation [10]. Consider the PF dynamic state-space model

$x_k = f(x_{k-1}, v_{k-1})$, $z_k = h(x_k, n_k)$,

where $x_k$ is the state vector, $z_k$ is the output measurement vector, $f(\cdot,\cdot)$ and $h(\cdot,\cdot)$ are the state transition and measurement functions, $v_k$ is the process noise vector independent of the current state, and $n_k$ is the measurement noise vector independent of the states and the process noise. The basic flow of the particle filter algorithm can be described in the following steps. (1) Initialization: according to the prior probability distribution $p(x_0)$, the initial particles $\{x_0^i\}_{i=1}^{N_S}$ are generated, each with weight $1/N_S$. (2) Prediction: the particles are propagated through the state equation to generate new samples. (3) Update: after the measurement $z_k$ is obtained, the weight of each particle at time $k$ is updated in proportion to the likelihood, $w_k^i \propto w_{k-1}^i\, p(z_k \mid x_k^i)$, with the weights then normalized to sum to one.
(4) Resampling: from the weighted particle set $\{x_k^i, w_k^i\}_{i=1}^{N_S}$, a new set of equally weighted particles $\{x_k^{i*},\, i = 1, \ldots, N_S\}$ is drawn by importance resampling. (5) Estimation: the particle set approximates the posterior PDF, $p(x_k \mid z_{1:k}) \approx \sum_{i=1}^{N_S} w_k^i\, \delta(x_k - x_k^i)$, and the state estimate is $\hat{x}_k = \sum_{i=1}^{N_S} w_k^i\, x_k^i$.

HIERARCHICAL PARTICLE FILTER FOR TWO-SATELLITE FAULT DETECTION The problem of fault detection (FD) consists of deciding on the presence or absence of faults in the monitored GPS system, and the problem of fault isolation (FI) consists of deciding which of a number of possible faulty modes is present. In this paper, a fault detection and isolation (FDI) method is designed for GPS integrity monitoring that uses the hierarchical PF algorithm to check the consistency of the GPS system measurements and to form a consistency test statistic. The change in consistency caused by a fault is then compared with a detection threshold to determine the moment of the fault and the faulty satellite. The algorithm calculates the accumulated LLR function at each time step; under normal circumstances the accumulated LLR curve is smooth over time, while when the data change it produces a negative drift before the change and a positive drift after it [11]. Fault detection therefore amounts to deciding on a model shift, i.e., detecting a jump away from the normal model.

The particle filter combined with the LLR is used to detect and isolate satellite faults: the PF generates the state estimates, the LLR at each time step is calculated, the cumulative LLR over a time window is obtained, and the consistency is checked, after which the faulty satellites are detected and isolated. The normalized particle weights of the main PF and the auxiliary PFs are calculated at every time step, which is straightforward for a PF algorithm [12], and the accumulated LLR is then available for the consistency test that decides whether a faulty satellite is present.

The fundamental scheme of RAIM is the same whether multiple-satellite faults or a single-satellite fault are handled: FDI is achieved with the hierarchical PF based on the LLR. Thus, given the standard deviation of the pseudorange errors and the false alarm probability, the detection thresholds are the same in both situations. The RAIM method is adapted to detect two-satellite faults of the GPS using a hierarchical particle filter based probability test as follows. 1) First, after the system state estimate is computed with all N measurements, the corresponding state estimates with the remaining N-1 measurements are calculated. The LLR consistency check is then evaluated; if it exceeds the detection threshold, a fault alarm is set, otherwise no fault is declared. 2) If there is a fault, the corresponding state estimates with the remaining N-2 measurements are calculated and the LLR consistency check is evaluated again; if it exceeds the detection threshold, a second fault alarm is set, otherwise no further fault is declared. 3) Using this method, after two iterations, two-satellite faults can be detected. According to the same principle, the method can also detect multi-satellite faults.
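For concreteness, steps (1)-(5) of the recursion above can be condensed into a short Python/NumPy sketch. The transition function f, measurement function h, process-noise scale, and likelihood are placeholders to be supplied from the GPS state and measurement equations given later; this is an illustrative sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

def sir_pf_step(particles, weights, z, f, h, q_std, likelihood):
    """One predict/update/resample cycle of a bootstrap (SIR) particle filter.

    particles : (N, d) array of state samples x_{k-1}^i
    weights   : (N,) normalized importance weights
    z         : measurement vector z_k
    f, h      : state-transition and measurement functions (placeholders)
    likelihood: callable evaluating p(z_k | x_k^i) for a single particle
    """
    N = len(particles)
    # (2) Prediction: propagate particles through the dynamics plus noise
    particles = np.array([f(x) for x in particles])
    particles = particles + q_std * np.random.randn(*particles.shape)
    # (3) Update: reweight each particle by its measurement likelihood
    weights = weights * np.array([likelihood(z, h(x)) for x in particles])
    weights = weights / weights.sum()
    # (4) Resampling (systematic) when the effective sample size collapses
    if 1.0 / np.sum(weights**2) < N / 2:
        u = (np.arange(N) + np.random.rand()) / N
        idx = np.minimum(np.searchsorted(np.cumsum(weights), u), N - 1)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    # (5) Estimation: posterior mean as the state estimate
    return particles, weights, weights @ particles
```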
EXPERIMENT TESTING AND RESULTS ANALYSIS The raw experimental measurement data were collected with a GPS receiver N220 (positioning accuracy 2.5 m RMS); the data, including satellite positions and pseudoranges, were generated for each of 418 epochs, with the user's position output at a frequency of 1 Hz. During the collection period, six satellites were used for the PVT solution - GPS satellites No. 3, 15, 18, 19, 21, and 26 - with corresponding pseudorange measurements $y_3, y_{15}, y_{18}, y_{19}, y_{21}, y_{26}$. At the same time, an RCB-4H receiver produced by u-blox was used to monitor whether the satellites were working normally. To simulate faulty satellites, biases were intentionally injected into the pseudoranges of two satellites: a 50 m bias was added to satellites No. 19 and No. 26 over epochs k = 90 to 120. In the simulated experiment, the particle number was chosen as N = 100, the window length of the decision function was selected as U = 30, and the measurement noise of the simulated data followed a Laplace-type (non-Gaussian) distribution. Results of applying the proposed FDI algorithm for GPS integrity monitoring are shown below.

To conduct the fault test, the bias was added to the pseudorange measurements and the detection of anomalies with the proposed FDI method for GPS integrity monitoring was tested. First, errors were inserted into nominal GPS data: the pseudorange measurements of satellites No. 19 and No. 26 were modified and then fed back into the FDI system for hierarchical filtering. The results of the first hierarchical PF are shown in Figures 1 and 2, the experimental results of the hierarchical particle filter for GPS RAIM at the first level under the two-satellite fault condition. From these figures it can be seen that the decision function $\beta_k$ shows a significant jump at k = 95 that exceeds the detection threshold. By the fault detection principle described above, the first faulty satellite, No. 19, is identified; when calculating the PVT (position, velocity, and time) solution, the measurement of satellite No. 19 should be discarded from the data used by the first hierarchical PF. The second hierarchical PF then continues to detect the other faulty satellite. The results of the second hierarchical PF are shown in Figures 3 and 4, the experimental results at the second level under the two-satellite fault condition. Again the decision function $\beta_k$ shows a significant jump at k = 95 that exceeds the detection threshold, so satellite No. 26 is judged faulty and should be discarded when calculating the PVT. At this point both faulty satellites have been excluded, and the purpose of two-satellite fault detection for GPS integrity monitoring is achieved; the hierarchical PF based method for GPS RAIM is feasible and effective.
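The decision function $\beta_k$ shown in Figures 1-4 is, in essence, the LLR between an auxiliary PF (candidate satellite excluded) and the main PF, accumulated over the window U. Below is a minimal sketch of such a windowed test, with the per-epoch log-likelihoods and the threshold assumed given; in the real algorithm they come from the normalized particle weights and the false-alarm budget, so the threshold value here is purely illustrative.

```python
import numpy as np

def windowed_llr_alarm(loglik_main, loglik_aux, window=30, threshold=10.0):
    """Return the first epoch at which the windowed cumulative LLR between
    the auxiliary PF (one satellite excluded) and the main PF exceeds the
    detection threshold, or None if no fault is declared.

    window matches U = 30 from the experiment; threshold is an assumption.
    """
    llr = np.asarray(loglik_aux) - np.asarray(loglik_main)
    for k in range(window, len(llr) + 1):
        beta_k = llr[k - window:k].sum()  # decision function over the window
        if beta_k > threshold:
            return k - 1                  # alarm epoch
    return None
```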
CONCLUSIONS A new FDI method for GPS integrity monitoring using the hierarchical particle filter was proposed. The proposed method makes it possible to detect two-satellite faults for a GPS receiver: the hierarchical PF is executed in turn to detect and isolate the two faulty satellites. The test statistic is established by integrating the state estimates from the main PF and the auxiliary PFs into a likelihood function, and the LLR test, which compares the consistency of the measurements between the main PF and the auxiliary PFs, is used to detect faults. The FDI scheme was evaluated through simulation using real GPS measurement data deliberately contaminated with a bias. Based on the simulation results, the proposed approach can successfully detect GPS measurement faults under non-Gaussian measurement noise, and in particular shows strong performance in processing multi-satellite faults. The proposed RAIM algorithm is also of reference value for autonomous integrity monitoring in BeiDou navigation receivers.

Figure 1. Decision function for fault detection, first hierarchical PF, two-satellite fault condition. Figure 2. Cumulative LLR for the fault, first hierarchical PF, two-satellite fault condition. Figure 3. Decision function for fault detection and isolation, second hierarchical PF, failure condition. Figure 4. Cumulative LLR for the fault, second hierarchical PF, failure condition.
2018-12-05T02:40:00.566Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "cd9aa1911696f64664a9aa8f2f7fc61ef98e62b9", "oa_license": "CCBY", "oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2016/07/matecconf_iceice2016_01017.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "cd9aa1911696f64664a9aa8f2f7fc61ef98e62b9", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
254955749
pes2o/s2orc
v3-fos-license
Comparative sequence analysis of SARS nCoV and SARS CoV genomes for variation in structural proteins SARS-nCoV was identified as a coronavirus that spread worldwide very quickly and affected millions of people. To halt this acceleration and enable efficient control, knowledge of its genomic information is of the utmost importance. We attempted to determine the nature of variation (insertion, deletion, substitution) among the structural sequences coding for the membrane, spike, nucleocapsid, and envelope proteins, and the glycosylation variation between the SARS CoV and SARS nCoV spike glycoproteins. Comparative sequence analysis was performed using sequences retrieved from the NCBI database. The analyzed sequences revealed that the sequences coding for the envelope protein show only minor amino acid substitutions: SARS CoV showed 94.74 percent amino acid identity with the SARS nCoV amino acid sequence coding for the envelope protein. In comparison with SARS nCoV, distinct amino acid residues vary in the SARS CoV sequences coding for the membrane, nucleocapsid, and spike proteins. The S protein coding sequence of SARS CoV exhibited one deletion, six insertions, and six hundred three substitutions relative to the SARS nCoV sequence. An insertion of valine was found in the receptor binding domain of SARS nCoV at position 487, and of the NSPR amino acid residues at positions 683-686. Deletions and substitutions were also found in the nucleotide sequences of strain B.1.617.2 of SARS nCoV. Additionally, the binding interaction pattern of the ACE2 receptor protein with the original wild-type SARS-CoV-2 strain and with the recently evolved Omicron variant was evaluated. The docking results substantiated that the specific variation in binding residues is likely to impact the virulence pattern of the two variants. Introduction SARS nCoV is a novel coronavirus, also named SARS-CoV2, SARS-CoV19, and COVID19 (Paraskevis et al. 2020). Coronaviruses (CoVs) are spherical to pleomorphic, enveloped, single-stranded positive-sense RNA viruses with club-shaped spike glycoproteins projecting from their surfaces. Because the spike projections make the virion look like a crown, the viruses were given the name coronavirus (Tyrrell and Myint 1996) (Fehr and Perlman 2015). Severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) caused the worldwide pandemic of 2019 and imposed a huge health and socio-economic burden with unparalleled consequences (Gómez et al. 2021). The story of coronaviruses began in the early 1930s, when a respiratory infection of domesticated chickens was shown to be caused by a virus known as IBV. The history of HCoV began in the 1960s, when the researchers Bynoe and Tyrrell found the virus known as human coronavirus HCoV (Hamre and Procknow 1966) (McIntosh et al. 1967). With the emergence of SARS CoV, further HCoVs (HCoV-NL63 and HCoV-HKU1) were added to the list of identified coronaviruses, which infect the respiratory tract in approximately all age groups (Drosten et al. 2003). SARS-CoV emerged and was transmitted to humans from bats via intermediate hosts (e.g., civets), leading to worldwide outbreaks of a novel respiratory disease. The earliest confirmed SARS nCoV case was reported in Wuhan, China, in December 2019 (Kopecky-Bromberg et al. 2007), causing a new infectious disease named Corona Virus Disease 2019 (Zhu et al. 2020). As per the WHO report, the new type of coronavirus was identified in early January 2020 and its genomic sequences were shared for study.
SARS nCoV is a zoonotic infection like MERS (Middle East Respiratory Syndrome) and SARS (severe acute respiratory syndrome) (Hui et al. 2020). Its structural proteins - the membrane, envelope, nucleocapsid, and spike proteins - are required for virion assembly and help CoVs cause infection. Since its start, a large number of people all over the world have suffered the Covid infection (Gómez et al. 2021). COVID-19 had infected an estimated 130 million persons as of April 2021, resulting in more than 2.8 million fatalities in 219 nations; globally, almost 104 million patients had recovered (Böttcher et al. 2021). COVID-19 testing kits were developed to check rapidly and efficiently for coronavirus infection, and with the publication of the genetic sequence for COVID-19 on 11 January 2020, the global response to prepare a vaccine for COVID-19 began (Li et al. 2021). Our present study focuses on analyzing viral genomic characteristics and understanding the structure and nature of the sequences coding for the structural proteins (Boheemen et al. 2012). In addition, the variability and identity between SARS nCoV, SARS CoV, and other variants were analyzed using bioinformatics approaches. Our goal is to investigate the nature of the variations, to locate the probable variable sites in the SARS nCoV genome compared with previously reported SARS CoV genomic sequences (Malik et al. 2020), and to find the variations in the sequences of the SARS nCoV variants. Our detailed investigation examines the viral sequences coding for the structural proteins - the spike glycoprotein, membrane glycoprotein, nucleocapsid protein, and small envelope protein (S, M, N, E) - and analyzes the nature of the variation located at specific sites, which allows the functional similarity of SARS nCoV to previously identified sequences to be estimated (Masters 2006; Tan et al. 2006). RNA viruses usually have faster nucleotide substitution rates than their hosts. Gene mutations such as insertions, deletions, and substitutions were computed by comparing SARS nCoV with SARS CoV and with the SARS nCoV variants (Kumar et al. 2020). The analysis also investigated the differences in the N-glycosylation sites of the three coronavirus isolates. We performed a comparative genomic and proteomic analysis of the structural sequences coding for the spike, membrane, envelope, and nucleocapsid proteins of SARS-nCoV and SARS-CoV, with accession numbers NC_045512.2, MT499203.1, and NC_004718.3 (https://www.ncbi.nlm.nih.gov/genome/viruses/). In addition, a comparative genomic and proteomic analysis of the corresponding structural sequences of the SARS-nCoV B variants was carried out. Phylogenetic tree construction Codon-based sequence alignment of ten amino acid sequences of the S glycoprotein from different coronavirus species and one from Norovirus (out-group) was performed for the conserved domain sequence using the Multiple Sequence Comparison by Log-Expectation (MUSCLE) program in MEGAX (Edgar 2004). The aligned sequence file was used for phylogenetic tree construction in MEGAX with the neighbor-joining clustering method and 1000 bootstrap replicates (Edgar 2004). Phylogenetic analysis of the same sequences was also performed using Phylogeny.fr (http://www.phylogeny.fr/simplephylogeny.cgi) (Kumar et al. 2018).
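For readers reproducing the tree-building step outside MEGAX, a roughly equivalent neighbor-joining tree can be built from a pre-aligned FASTA file with Biopython. The input file name is a placeholder, and the simple 'identity' distance model is a stand-in for the Poisson-corrected distances used in the paper.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Load a MUSCLE-aligned FASTA of the spike protein sequences (placeholder path)
alignment = AlignIO.read("spike_aligned.fasta", "fasta")

# Pairwise distances (identity model; MEGAX used the Poisson correction)
dm = DistanceCalculator("identity").get_distance(alignment)

# Neighbor-joining tree, as in the paper's MEGAX analysis
tree = DistanceTreeConstructor().nj(dm)
Phylo.draw_ascii(tree)
```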
Genomic and proteomic variations The genomic analysis was performed to find the percent identity and statistical variability among the three isolates NC_045512.2 (SARS nCoV reference sequence), MT499203.1 (SARS nCoV), and NC_004718.3 (SARS CoV). Both SARS nCoV strains were taken to contain six ORFs, which were extracted from each of the two isolates together with the structural sequences coding for the E, M, N, and S proteins. The percentage identity, GC% content variation, and statistical analysis for all ORFs and structural sequences were analyzed. To study deletions, insertions, and substitutions of nucleotides and amino acids, we focused specifically on the structural sequences coding for the E, M, N, and S proteins. Codon-based sequence alignment of the three structural sequences coding for the E, M, N, and S proteins was performed for the CDS (conserved domain sequence) using the MUSCLE program in MEGAX (https://www.megasoftware.net/). By analyzing the MSA (multiple sequence alignment), we found the variability among the nucleotide and amino acid sequences. We also used the MSA to identify synonymous and non-synonymous substitutions and then analyzed which structural sequences carry more synonymous substitutions and conservative missense mutations (Lokman et al. 2020; Chatterjee 2020). With the same method as above, a comparative genomic and proteomic analysis of the structural sequences coding for the spike, membrane, envelope, and nucleocapsid proteins of the variants was performed. Glycosylation site variations on S (spike) glycoproteins To find the variations in the attachment sites of the S glycoproteins of SARS nCoV and SARS CoV to the host cell surface, the glycosylation sites were determined using the NetNGlyc 1.0 software (http://www.cbs.dtu.dk/services/NetNGlyc/) and validated with the N-GlyDE software (Kumar et al. 2020). Three-dimensional structure and docking analysis The 3D structures of the SARS-CoV-2 strain and the Omicron variant were downloaded from the RCSB PDB database (https://www.rcsb.org/) and protein-protein docking was performed using the ClusPro server (https://cluspro.bu.edu/). The results were visualized using PyMOL version 2.5.1. Phylogenetic tree analysis All 11 sequences coding for the spike glycoprotein were aligned using the MUSCLE program in MEGAX. The evolutionary tree was generated using the NJ method, as shown in Fig. 1, with a total tree branch length of 5.08122990. The evolutionary distances in the tree were computed in MEGAX using the Poisson correction method. Phylogeny.fr was used for evolutionary tree construction with the same amino acid sequences. Phylogenetic tree analysis shows the grouping of the SARS isolates (Table 1). Genomic and proteomic variation analysis We took the three coronavirus isolates for the variation analysis; their detailed description is given in Table 2. We mainly focused on the analysis of variations in the structural sequences of the different coronavirus isolates. For a clear analysis of the sequences, the genome structure must first be understood: different genes code for different proteins with specific functions, as shown in Figs. 3 and 4. Genes nsp3 and nsp2 code for the papain-like protease and 3CL-protease, respectively; genes nsp12 and nsp13 code for the RNA-dependent RNA polymerase and helicase enzymes, respectively; and the S, M, E, and N genes code for the spike, membrane, envelope, and nucleocapsid proteins.
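The NetNGlyc-based glycosylation analysis described above rests on the canonical N-X-S/T sequon (X ≠ P). A simple scan that enumerates candidate sequons, such as the NLTT and NVTW sites reported later, can be written as below; note that this only lists sequon positions and does not reproduce NetNGlyc's potential score, whose 0.5 cutoff the authors applied.

```python
import re

def find_nglyc_sequons(protein_seq):
    """Return (position, motif) for each canonical N-X-S/T sequon, X != P.

    Positions are 1-based, matching the convention used for the reported
    glycosylation sites (e.g. NLTT, NVTW).
    """
    # Lookahead keeps overlapping matches; proline at X abolishes the sequon
    return [(m.start() + 1, protein_seq[m.start():m.start() + 4])
            for m in re.finditer(r"N(?=[^P][ST])", protein_seq)]

# Example with a short synthetic fragment (not a real spike sequence)
print(find_nglyc_sequons("MKNLTTAVNVTWPQ"))  # [(3, 'NLTT'), (9, 'NVTW')]
```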
Homology analysis Percent identity among the structural sequences for the S, E, N, and M proteins of the three isolates nCoVNC_045512 (ref seq), nCoVMT499203, and CoVNC_004718 was found through BLAST pairwise alignment. The number of nucleotides and the %I (% identity) were determined with respect to the reference sequence nCoVNC_045512. The nucleotide sequence of S shows the highest variability of all in comparison with the reference sequence, while Orf6 of CoVNC_004718 has the lowest %AI of all the amino acid sequences analyzed against the reference sequence (Table 3). Comparing the gene sequences of the three isolates, the CoVNC_004718 gene sequences have a higher %GC content. Variation profiling of SARS CoV against the two SARS nCoV sequences Variation profiling of sequences coding for the E protein Codon-based multiple sequence alignment of the structural sequences coding for the E protein was performed for the CDS. By analyzing the MSA we found that ECoVNC_004718 is 3 bp (the additional nucleotides GAA) larger than the EnCoVNC_045512 and EnCoVMT499203 sequences, as shown in Fig. 5. Nucleotide positions are given not with respect to the whole genome sequence; rather, position 1 is the first nucleotide of the gene coding for the E protein in the codon-based MUSCLE alignment in MEGAX (the same holds for amino acid positions). A total of 13 substitutions were found in the ECoVNC_004718 sequence with respect to the reference sequence. Of these 13 substitutions, only 5 are non-synonymous (resulting in three amino acid changes) and eight are synonymous. Here the non-synonymous mutations are in the minority, which means that the change in amino acid composition is small. Details of the substitution sites are given in Table 4. Variation profiling of sequences coding for the M protein Pairwise sequence alignment of the M sequences shows that MCoVNC_004718 has 85.52% nucleotide identity, with 573 identical nucleotide sites, when compared with MnCoVNC_045512. The %AI is 90.5%, which reflects the preserved sequence length of MPnCoVNC_045512 relative to MPCoVNC_004718, as shown in Fig. 6. On analysis of the M coding sequences, a total of 95 nucleotide substitutions were found, of which 27 are non-synonymous, present in MCoVNC_004718 in comparison with the other two isolates. These 27 non-synonymous substitutions lead to 16 amino acid changes in the MCoVNC_004718 sequence. A complete substitution of the codon TGT > ATG was also observed (the codon start positions are given in Table 5). Variation profiling of sequences coding for the N protein A detailed analysis of the coding sequences of the N protein showed that the N sequence of SARS nCoV is 1260 bp long, while that of SARS CoV (the NCoVNC_004718 sequence) is 1269 bp, i.e., nine bp larger than the other two isolates, as shown in Fig. 7. Our pairwise sequence alignment of the N sequences showed that NPCoVNC_004718 has 90.52% AI when aligned with NPnCoVNC_045512. On analysis of the N coding sequences, a total of 141 nucleotide substitutions were found, of which 52 are non-synonymous, present in NCoVNC_004718. These 52 non-synonymous substitutions lead to 37 amino acid changes in the NCoVNC_004718 sequence, while in NCoVMT499203 only one non-synonymous substitution occurs; the specific codon changes are detailed below, after the following sketch.
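Classifying a substitution as synonymous or non-synonymous, as done for the E, M, and N genes above, reduces to comparing codon translations between the aligned sequences. A minimal sketch, assuming two codon-aligned, gap-free coding sequences of equal length; it counts changed codons, which can differ slightly from per-nucleotide substitution counts.

```python
from Bio.Seq import Seq

def classify_substitutions(ref_cds, alt_cds):
    """Count synonymous and non-synonymous codon substitutions between two
    codon-aligned, gap-free coding sequences of equal length."""
    syn = nonsyn = 0
    for i in range(0, len(ref_cds) - 2, 3):
        ref_codon, alt_codon = ref_cds[i:i + 3], alt_cds[i:i + 3]
        if ref_codon == alt_codon:
            continue
        if str(Seq(ref_codon).translate()) == str(Seq(alt_codon).translate()):
            syn += 1      # nucleotide change, same amino acid
        else:
            nonsyn += 1   # nucleotide change alters the amino acid
    return syn, nonsyn

# e.g. GAA->GAG is synonymous (Glu), GAA->GTA is non-synonymous (Glu->Val)
print(classify_substitutions("GAAGAA", "GAGGTA"))  # (1, 1)
```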
A complete substitution of the codons 577AAC > ATG (codon position 577), 802GCA > CAG, and 1003ACA > CAT leads to the changes in amino acid residues N > G, A > Q, and T > H in NCoVNC_004718 when compared with the other two isolates. A substitution of serine to asparagine occurs in NPCoVMT499203 in comparison with the other two isolates; since serine and asparagine are similar, uncharged polar amino acids, this may not greatly affect the protein's function (Table 6). Variation profiling of sequences coding for the S protein The spike protein consists of three domains (Fig. 8): a large ectodomain, a single-pass membrane anchor, and a small intracellular tail. The ectodomain contains two subunits, the S1 receptor-binding subunit and the S2 membrane-fusion subunit. The spike is a homotrimer with three S1 heads on an S2 trimeric stalk: S1 first binds the host cell receptor to attach the virus, and S2 then mediates fusion of the viral and host membranes, initiating the infection cycle (Li 2016). Analysis of the codon-based MSA shows that SCoVNC_004718 has a total of 994 substitutions, of which 603 are non-synonymous, leading to 176 amino acid changes. SnCoVMT499203 has only one synonymous substitution, A > T at nucleotide position 1056, as shown in Fig. 9. Pairwise sequence analysis of the S protein sequences showed that the SARS nCoV isolates and SARS CoV share nearly 75.46% similarity. Twenty-two amino acids present in the two SARS nCoV isolates result from the 6 insertions shown in Fig. 10. One insertion, 683NSPR686, lies upstream of the S1/S2 cleavage site and leads to the formation of PRRARS (a furin-like cleavage site) in the SARS-nCoV isolates. Glycosylation site variations on S glycoproteins The N-glycosylation sites of both SARS nCoV and SARS CoV are presented in Table 7 and Fig. 11, with an N-glycosylation potential of 0.5 set as the cutoff. On comparing SARS nCoV with SARS CoV, we observed that the S proteins have different glycosylation sites, such as NLTT, NVTW, NGTK, NATN, NKSW, and NATR, which reflect the sequence variation. We also found some glycosylation sites common to all three isolates, such as NCTF, NITN, and NASV. On the basis of these similarities and differences in glycosylation sites, it may be suggested that SARS nCoV interacts with the ACE2 host receptor through its distinct glycosylation sites, which may in turn affect the internalization process (Fig. 12). Variation profiling of SARS nCoV with its variants Variation profiling of sequences coding for the S protein: analysis of the codon-based MSA shows a deletion of the first 1259 nucleotides from the B1 variant sequence compared with the reference sequence. A deletion occurs in the B2 variant sequence at 467AGT TCA472 and in the B3 variant sequence at 722TAC TTG CTT730 compared with the reference sequence. All synonymous and non-synonymous substitutions are shown in Table 8. Variation profiling of sequences coding for the E protein: in the sequence coding for the E protein, only one non-synonymous substitution occurs, C > T at nucleotide position 212, which changes proline to leucine at amino acid position 71 in the B3 sequence. Variation profiling of sequences coding for the M protein: one synonymous substitution, C > T, occurs in the B1 variant sequence at nucleotide position 159, while one non-synonymous substitution, T > G and T > C at nucleotide position 245, occurs in the B1 and B2 variant sequences, respectively, compared with the reference sequence.
These non-synonymous substitutions result in different amino acids, isoleucine > serine and isoleucine > threonine at amino acid position 82, for the B1 and B2 variant sequences, respectively, compared with the reference sequence. In the B2 variant sequence there is also a deletion of the last 176 nucleotides coding for the M protein with respect to the reference sequence. All the non-synonymous substitutions for the different variants are listed in Table 9. Profiling of variations in SARS nCoV variant B.1.617.2 sequences Comparative analysis of the nucleotide sequences of the B.1.617.2 strain revealed variations with respect to the reference sequence (MZ208926.1), as listed in Table 10. There is only one non-synonymous substitution, at position 241 of the MZ157012.1 sequence coding for the membrane protein, which substitutes alanine for serine at position 81. The deletions and substitutions we found in the nucleotide sequences of strain B.1.617.2 of SARS nCoV show that mutations occur at a fast rate, and this may lead to differences in virulence, infectivity, and transmissibility across different regions of the world. We believe that this kind of genomic and proteomic analysis needs to be done at the earliest stages to ease the understanding of disease diagnosis, viral adaptability, and transmission dynamics, and to correlate them with clinical characteristics. Monitoring emerging mutations will help develop better vaccine formulations and antiviral drug designs. Three-dimensional structure and protein-protein docking We compared the binding interaction pattern of the ACE2 receptor protein with the original wild-type SARS-CoV-2 strain and with the recently evolved Omicron variant. Omicron has been reported to comprise 60 mutations, 37 of them in the spike protein, which presents the target site for vaccines and antibodies (Yin et al.). Fig. 8 Structure of the spike protein: the spike consists of three domains, a large ectodomain, a single-pass membrane anchor, and a small intracellular tail; the ectodomain contains the S1 receptor-binding and S2 membrane-fusion subunits. Fig. 9 Synonymous substitution in SnCoVMT499203: the only synonymous substitution is A > T at nucleotide position 1056. Fig. 10 Deletions in CoVNC_045512 and CoVMT499203: (A) a 12-nucleotide deletion in SCoVNC_045512 and SCoVMT499203; (B) a 4-amino-acid deletion in SPCoVNC_045512 and SPCoVMT499203, as compared with SARS CoV. Data availability All the data used in the study are taken from the publicly available NCBI database. Details of the genomic sequences and their accession numbers are provided in the materials and methods and wherever cited. Conflict of interest The authors state that there are no conflicts of interest to disclose.
2022-12-22T16:13:18.473Z
2022-12-20T00:00:00.000
{ "year": 2022, "sha1": "466dcb847daa0d7d3d070bed40cafe5cbd621170", "oa_license": null, "oa_url": "https://link.springer.com/content/pdf/10.1007/s43538-022-00140-y.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "4808b8e125c33627e84f3caae87472e43bbf65b8", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
3327101
pes2o/s2orc
v3-fos-license
Tracking the Turn Maneuvering Target Using the Multi-Target Bayes Filter with an Adaptive Estimation of Turn Rate Tracking the target that maneuvers at a variable turn rate is a challenging problem. The traditional solution for this problem is the use of the switching multiple models technique, which includes several dynamic models with different turn rates for matching the motion mode of the target at each point in time. However, the actual motion mode of a target at any time may be different from all of the dynamic models, because these models are usually limited. To address this problem, we establish a formula for estimating the turn rate of a maneuvering target. By applying the estimation method of the turn rate to the multi-target Bayes (MB) filter, we develop a MB filter with an adaptive estimation of the turn rate, in order to track multiple maneuvering targets. Simulation results indicate that the MB filter with an adaptive estimation of the turn rate, is better than the existing filter at tracking the target that maneuvers at a variable turn rate. Introduction Target tracking has been discussed in many articles due to its military and civil applications, which range from threat warnings, to intelligent surveillance and situational awareness [1][2][3][4][5][6][7][8][9]. Maneuvering target tracking is the most essential ingredient of target tracking and has attracted the attention of many researchers. A number of efficient tracking algorithms for maneuvering targets have been developed and designed in the past few decades [10][11][12][13][14][15][16][17][18]. The interacting multiple model (IMM) algorithms were independently developed in [10,11], in order to track the maneuvering target in systems with Markov-switching coefficients, and in air traffic control, respectively. The mode-set adaptive IMM algorithm and multiple model method with variable structure were designed in [12,13], to improve the performance of IMM algorithms when tracking a maneuvering target. By combining the IMM method with the joint probabilistic data association (JPDA) and multiple hypothesis tracking (MHT) techniques, respectively, the IMM-JPDA filter and the IMM-MHT filter were developed in [14,15], in order to track multiple maneuvering targets. By applying the switching multiple models technique to the probability hypothesis density (PHD) filter and multi-target Bayes (MB) filter, respectively, Pasha developed a PHD filter to track maneuvering targets in the presence of clutter and noise [16], and Liu designed a MB filter for multiple maneuvering target tracking, in the case of low detection probability [17]. As mentioned above, the existing methods for maneuvering target tracking apply the IMM approach, or the switching multiple models technique, to the tracking filter. In these methods, a finite set of dynamic models are used each time. Because the motion mode space of a target is continuous, a sufficiently large set of dynamic models is usually required to cover the range of possible motion modes of the target. Such a large set is impractical because an increase in the number of dynamic models, also leads to an increase in the computational load. Additionally, it is worth noting that the actual motion mode of a target at any given time, may be different from all of the dynamic models, even if a sufficiently large set of dynamic models are used by the filter. 
When tracking a target that maneuvers at a variable turn rate, a limited set of dynamic models with different turn rates is usually used by the filter [12,16]. Since the turn rate of a target at any given time is unknown and random, its actual turn rate may differ from the turn rates in the dynamic models, and this difference causes the filter to provide an inaccurate state estimate of the target at that time. To track a target that maneuvers at a random turn rate, we establish a formula for computing its turn rate. This formula solves the estimation issue of the turn rate by using the state vector of the target at the previous time and its measurement at the current time. Applying this estimation method to the MB filter, we present the MB filter with an adaptive estimation of the turn rate. Its performance is demonstrated by the simulation results.

A Brief Description of Pasha's PHD Filter Pasha's Gaussian mixture PHD filter applies to the tracking of multiple maneuvering targets in systems with linear Gaussian jump Markov system models and is used as the comparison filter in this paper. A simplified version of the filter is composed of the following four steps.

Step 1: Prediction. Let $v_{k-1}(x) = \sum_{i=1}^{N_{k-1}} w_{i,k-1}(r_{i,k-1})\, N(x;\, m_{i,k-1}(r_{i,k-1}),\, P_{i,k-1}(r_{i,k-1}))$ denote the posterior intensity at time $k-1$, where $N_{k-1}$ is the number of Gaussian terms at time $k-1$ and $N(\cdot;\, m, P)$ is a Gaussian distribution with mean vector $m$ and covariance matrix $P$; $r_{i,k-1}$, $x_{i,k-1}$, $w_{i,k-1}(r_{i,k-1})$, $m_{i,k-1}(r_{i,k-1})$, and $P_{i,k-1}(r_{i,k-1})$ are the model label, state vector, weight, mean vector, and covariance matrix of Gaussian term $i$, respectively. The predicted posterior intensity is given by Equation (1), where $M_r$ is the number of models used, and the predicted weight, mean, and covariance are
$w_{i,k|k-1}(r_{i,k}) = p_{S,k}\, t_{k|k-1}(r_{i,k} \mid r_{i,k-1})\, w_{i,k-1}(r_{i,k-1})$,
$m_{i,k|k-1}(r_{i,k}) = F_{k-1}(r_{i,k})\, m_{i,k-1}(r_{i,k-1})$,
$P_{i,k|k-1}(r_{i,k}) = F_{k-1}(r_{i,k})\, P_{i,k-1}(r_{i,k-1})\, F_{k-1}^{T}(r_{i,k}) + Q_{k-1}(r_{i,k})$,
where $p_{S,k}$ is the survival probability, $F_{k-1}(r_{i,k})$ and $Q_{k-1}(r_{i,k})$ are the state transition and process noise covariance matrices of model $r_{i,k}$, and $t_{k|k-1}(r_{i,k} \mid r_{i,k-1})$ is the Markov transition probability from model $r_{i,k-1}$ to model $r_{i,k}$.

Step 2: Update. If the predicted posterior intensity is given by Equation (1), the updated posterior intensity follows the standard Gaussian mixture PHD update, with the measurement-updated weight of term $i$ for measurement $y_{j,k}$ proportional to
$w_{i,k|k-1}(r_{i,k})\, N(y_{j,k};\, H(r_{i,k})\, m_{i,k|k-1}(r_{i,k}),\, H(r_{i,k})\, P_{i,k|k-1}(r_{i,k})\, H^{T}(r_{i,k}) + R(r_{i,k}))$,
normalized over the clutter intensity and all predicted terms, where $M_k$ is the number of measurements at time $k$, $y_{j,k}$ denotes a measurement at time $k$, $H(r_{i,k})$ and $R(r_{i,k})$ are the observation matrix and observation noise covariance matrix, and $I$, $\lambda_{c,k}$, and $p_{D,k}$ denote the identity matrix, clutter rate, and detection probability, respectively.

Step 3: Generation of the Birth Intensity. The birth intensity is generated from the measurements at time $k$ and is given by Equation (11), where $m^{j}_{\gamma,k}$ is taken from measurement $y_{j,k} = [r^{j}_{x,k}\;\; r^{j}_{y,k}]^{T}$ as $m^{j}_{\gamma,k} = [r^{j}_{x,k}\;\; 0\;\; r^{j}_{y,k}\;\; 0]^{T}$, with $r_{j,k} = 1$, $w^{j}_{\gamma,k} = \rho_r$, and $P^{j}_{\gamma,k} = P_{\gamma}$, where $\rho_r$ and $P_{\gamma}$ are a known parameter and covariance matrix, respectively.

Step 4: Combination of the Updated Posterior Intensity and Birth Intensity. The posterior intensity at time $k$ is obtained by combining the updated posterior intensity in Equation (5) with the birth intensity in Equation (11), as given by Equation (12). After this combination, the Gaussian terms whose weight is less than the threshold $\tau$ are pruned, and the posterior intensity composed of the remaining Gaussian terms is propagated to the next time step.
Those Gaussian terms whose weight is greater than 0.5 are picked as the output of the filter at time $k$.

Estimation of Turn Rate In this section, we estimate the turn rate of a maneuvering target by using its state vector at time $k-1$ and its position measurement at time $k$. Figure 1 shows a maneuvering target with turn rate $\omega_k$ which moves from point $O_e$ at time $k-1$ to point $E$ at time $k$. Let $[x_{k-1}\;\; \dot{x}_{k-1}\;\; y_{k-1}\;\; \dot{y}_{k-1}]^{T}$ and $[x_k\;\; \dot{x}_k\;\; y_k\;\; \dot{y}_k]^{T}$ denote the state vectors of the target at times $k-1$ and $k$, respectively, where $(x_{k-1}, y_{k-1})$ and $(x_k, y_k)$ denote the position coordinates and $(\dot{x}_{k-1}, \dot{y}_{k-1})$ and $(\dot{x}_k, \dot{y}_k)$ the velocities at the two times. Obviously, in the x-y Cartesian coordinate system, the Cartesian coordinates of points $O_e$ and $E$ are $(x_{k-1}, y_{k-1})$ and $(x_k, y_k)$, respectively.

We introduce an $x_e$-$y_e$ Cartesian coordinate system whose origin is located at point $O_e$; it is shown in Figure 2 (Figure 2: transformation between the two coordinate systems). The coordinate transformation from position coordinates $X$ in the x-y system to position coordinates $X_c$ in the $x_e$-$y_e$ system is given by the rotation of Equation (13) through the angle $\alpha_{k-1}$ defined in Equation (14). Similarly, the transformation from a velocity vector $V$ in the x-y system to a velocity vector $V_c$ in the $x_e$-$y_e$ system is given by Equation (15). Using Equations (13) and (15) to transform the position coordinates $X = [x_{k-1}\;\; y_{k-1}]^{T}$ and the velocity vector at time $k-1$, we obtain the state vector of the target in the $x_e$-$y_e$ Cartesian coordinate system, Equation (16). Since the target moves at turn rate $\omega_k$ from time $k-1$ to time $k$, the state transition matrix of the target motion is the coordinated-turn matrix of Equation (17), where $\Delta t_k = t_k - t_{k-1}$ is the interval between times $k$ and $k-1$. Using this state transition matrix, we obtain the state vector of the target in the $x_e$-$y_e$ system at time $k$, Equation (18). Based on Equation (18), the position coordinates of point $E$ in the $x_e$-$y_e$ system are given by Equation (19), and the angle $\beta_k$ in Figure 2 by Equation (20). We assume that a sensor observes the position of the target and use $(r_x, r_y)$ to denote the position measurement of the target in the x-y Cartesian coordinate system at time $k$.
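For reference, the state transition matrix of Equation (17) is the standard coordinated-turn matrix; a sketch follows, assuming the state ordering $[x, \dot{x}, y, \dot{y}]$ (the ordering itself is our assumption, reconstructed from the standard CT model rather than copied from the paper).

```python
import numpy as np

def ct_transition_matrix(omega, dt):
    """Coordinated-turn state transition matrix for state [x, vx, y, vy]
    and turn rate omega (rad/s), as in Equation (17)."""
    if abs(omega) < 1e-9:                        # constant-velocity limit
        return np.array([[1.0, dt, 0.0, 0.0],
                         [0.0, 1.0, 0.0, 0.0],
                         [0.0, 0.0, 1.0, dt],
                         [0.0, 0.0, 0.0, 1.0]])
    s, c = np.sin(omega * dt), np.cos(omega * dt)
    return np.array([[1.0, s / omega,       0.0, -(1 - c) / omega],
                     [0.0, c,               0.0, -s],
                     [0.0, (1 - c) / omega, 1.0, s / omega],
                     [0.0, s,               0.0, c]])
```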
Obviously, the position measurement of the target at time $k$ equals its position coordinates at that time if no measurement error appears in the sensor measurement; this gives Equation (21), where $(r^e_x, r^e_y)$, given by Equation (22), is the measurement transformed into the $x_e$-$y_e$ system. Replacing $(x^e_k, y^e_k)$ in Equation (20) with $(r^e_x, r^e_y)$ yields Equation (24), and solving Equation (24) we obtain the turn rate $\omega_k$ of Equation (25). Obviously, if no measurement error appears in the position measurement of the target, $\omega_k$ in Equation (25) is its turn rate from time $k-1$ to time $k$; otherwise, we use $\omega_k$ in Equation (25) as the estimate of its turn rate over that interval. Thus, by using the state vector of a target at the previous time and its position measurement at the current time, we may estimate its turn rate at the current time.

MB Filter with an Adaptive Estimation of Turn Rate In [19,20], Liu presented the MB filter to track multiple targets in the presence of clutter and noise. In this section, we apply the turn rate estimation method to the MB filter in [20] to develop the MB filter with an adaptive estimation of the turn rate. This filter consists of the following four steps.

Step 1: Prediction. In this step, we predict the marginal distribution and existence probability of each target at the current time from its marginal distribution, existence probability, and turn rate estimate at the previous time. Let $N(x_{i,k-1};\, m_{i,k-1}, P_{i,k-1})$, $\rho_{i,k-1}$, and $\omega_{i,k-1}$ denote the marginal distribution of target $i$, its existence probability, and its turn rate estimate at time $k-1$, respectively, where $i = 1, 2, \ldots, N_{k-1}$, $N_{k-1}$ is the target number, and $N(\cdot;\, m, P)$ is a Gaussian distribution with mean vector $m$ and covariance matrix $P$. The predicted existence probability is $\rho_{i,k|k-1} = p_{S,k}\, \rho_{i,k-1}$, where $p_{S,k}$ is the survival probability, the predicted turn rate is $\omega_{i,k|k-1} = \omega_{i,k-1}$, and the predicted mean and covariance are
$m_{i,k|k-1} = F(\omega_{i,k-1})\, m_{i,k-1}$, $P_{i,k|k-1} = F(\omega_{i,k-1})\, P_{i,k-1}\, F^{T}(\omega_{i,k-1}) + Q_{i,k-1}$,
where $T$ denotes the transpose, $Q_{i,k-1}$ is the covariance of the process noise, and $F(\omega_{i,k-1})$ is the coordinated-turn state transition matrix for turn rate $\omega_{i,k-1}$.

Step 2: Estimation of Turn Rate. In this step, we use the measurements at time $k$ and the marginal distribution of target $i$ at time $k-1$ to estimate its turn rate from time $k-1$ to time $k$. The mean $m_{i,k-1}$ provides the position of target $i$ at time $k-1$, with $(\eta^i_{x,k-1}, \eta^i_{y,k-1})$ denoting its velocity components. According to Equation (25), the turn rate of target $i$ that corresponds to measurement $y_{j,k}$ is $\omega^{i,j}_k$. Considering that the turn rate of a target generally lies within a known range, $\omega^{i,j}_k$ is constrained to the interval $[-\omega_{\max}, \omega_{\max}]$, where $\omega_{\max}$ is the maximal turn rate.

Step 3: Update. In this step, we use the marginal distribution $N(x_{i,k-1};\, m_{i,k-1}, P_{i,k-1})$, the predicted existence probability $\rho_{i,k|k-1}$, the turn rate estimate $\omega^{i,j}_k$, and the measurement $y_{j,k}$ to obtain the updated marginal distribution, existence probability, and turn rate. The updated distribution and existence probability of target $i$ that corresponds to measurement $y_{j,k}$ are computed for $i = 1, 2, \ldots, N_{k-1}$ and $j = 1, 2, \ldots, M_k$. We then use the distribution, existence probability, and turn rate with index $q$ as the marginal distribution of target $i$, its existence probability, and its turn rate at time $k$, respectively.

Step 4: Generation of New Target Distribution and the Output of the Filter. In this step, we use the measurements at time $k$ to generate the marginal distribution of each new target, where $m^{j}_{\gamma,k}$ is taken from measurement $y_{j,k} = [r^{j}_{x,k}\;\; r^{j}_{y,k}]^{T}$ as $m^{j}_{\gamma,k} = [r^{j}_{x,k}\;\; 0\;\; r^{j}_{y,k}\;\; 0]^{T}$ and $P^{j}_{\gamma,k} = P_{\gamma}$, where $P_{\gamma}$ is a known covariance matrix; the assignment of existence probabilities and turn rates to the new targets continues after the following sketch.
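As an aside on Step 2, the effect of Equation (25) together with the range limit $\omega_{\max}$ can be approximated geometrically: for motion along a circular arc, the chord from $O_e$ to $E$ bisects the heading change, so the turn rate is roughly twice the angle between the previous heading and the chord direction, divided by $\Delta t_k$. The sketch below implements this approximation; it is our own simplification for illustration, not the paper's closed-form Equation (25).

```python
import numpy as np

def estimate_turn_rate(x_prev, z, dt, omega_max):
    """Geometric turn-rate estimate from the previous state and the
    current position measurement (illustrative approximation).

    x_prev = [x, vx, y, vy] at time k-1; z = (rx, ry) measured at time k.
    """
    alpha = np.arctan2(x_prev[3], x_prev[1])                 # heading at k-1
    beta = np.arctan2(z[1] - x_prev[2], z[0] - x_prev[0])    # chord direction
    # For a circular arc the chord bisects the heading change, so the
    # total heading change over dt is roughly twice (beta - alpha).
    dtheta = np.arctan2(np.sin(beta - alpha), np.cos(beta - alpha))
    omega = 2.0 * dtheta / dt
    return np.clip(omega, -omega_max, omega_max)  # known-range constraint
```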
Meanwhile, we designate the parameter $\rho_{\gamma}$ as the existence probability of each new target and assign 0 as its turn rate. We then combine the marginal distributions of the existing targets in Equation (46) with those of the new targets in Equation (49) to form the marginal distributions of the individual targets at time $k$, where $N_k = N_{k-1} + M_k$, together with the corresponding existence probabilities and turn rates of the individual targets at time $k$. After this combination, we prune the targets whose existence probability $\rho_{i,k}$ is less than the threshold $\tau$ and propagate the marginal distributions, existence probabilities, and turn rates of the remaining targets to the next time step. Those targets whose existence probability $\rho_{i,k}$ is greater than 0.5 are picked as the output of the filter at time $k$.

Simulation Results In this section, we use an example to reveal the tracking performance of the MB filter with an adaptive estimation of the turn rate for multiple maneuvering targets. In this example, Pasha's PHD filter [16] is used as the comparison filter, and the OSPA distance [21] with parameters $c = 50$ m and $p = 2$ is used as the measure. The covariance matrix $Q_{i,k-1}$, observation matrix $H_k$, and covariance matrices $R_k$ and $P_{\gamma}$ used in the experiment are parameterized by $\sigma_v$ and $\sigma_w$, the standard deviations of the process and measurement noises. Three coordinated-turn models with different turn rates are used in Pasha's PHD filter; the state transition and covariance matrices for models $r_{i,k} = 1$, $r_{i,k} = 2$, and $r_{i,k} = 3$ are the corresponding coordinated-turn matrices, with $H(r_{i,k}) = H_k$ and $R(r_{i,k}) = R_k$.

Example 1. Five targets are considered in this example. Targets 1, 2, 3, and 4 appear at t = 1 s, t = 1 s, t = 3 s, and t = 3 s, respectively, and disappear at t = 70 s. Target 5 appears at t = 5 s and disappears at t = 60 s. Each target changes its turn rate at t = 15 s, t = 30 s, t = 40 s, and t = 55 s. The initial positions and moving trajectories of the five targets are shown in Figure 3.
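The OSPA distance used as the performance measure (c = 50 m, p = 2) can be implemented compactly as below; `linear_sum_assignment` solves the optimal pairing between the true and estimated positions, and the inputs are assumed to be 2D position arrays.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def ospa(X, Y, c=50.0, p=2):
    """OSPA distance between truth set X and estimate set Y, each (n, 2)."""
    if len(X) == 0 and len(Y) == 0:
        return 0.0
    if len(X) > len(Y):                  # ensure |X| <= |Y|
        X, Y = Y, X
    m, n = len(X), len(Y)
    if m == 0:
        return c                         # pure cardinality penalty
    D = np.minimum(cdist(X, Y), c) ** p  # cutoff distance matrix
    rows, cols = linear_sum_assignment(D)
    cost = D[rows, cols].sum() + (c ** p) * (n - m)
    return (cost / n) ** (1.0 / p)
```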
We use the parameters $\Delta t_k = 1$ s, $\sigma_v = 0$ m/s², $\sigma_w = 1$ m, $p_{S,k} = 1.0$, $\lambda_{c,k} = 1.25 \times 10^{-5}$ m⁻², and $p_{D,k} = 0.95$ to generate the simulation measurements; Figure 4 shows the simulation measurements for one trial. Setting the parameters of the proposed filter to $\Delta t_k = 1$ s, $\sigma_v = 1$ m/s, $p_{S,k} = 0.6$, $\lambda_{c,k} = 1.25 \times 10^{-5}$ m⁻², $p_{D,k} = 0.95$, $\tau = 0.001$, $\sigma_w = 2$ m, $\rho_{\gamma} = 0.1$, and $\omega_{\max} = 6°$/s, and the parameters of Pasha's filter to $\Delta t_k = 1$ s, $\sigma_v = 1$ m/s, $p_{S,k} = 1.0$, $\lambda_{c,k} = 1.25 \times 10^{-5}$ m⁻², $p_{D,k} = 0.95$, $\tau = 0.001$, $\sigma_w = 2$ m, and $\rho_{\gamma} = 0.1$, we use the proposed filter and Pasha's filter to process the simulation measurements for 150 trials. The experimental results are shown in Figure 5: the proposed filter performs better than Pasha's filter most of the time. Two factors are responsible for this result. The first is the difference between the actual motion mode of a target and the dynamic model used by the filter; this difference causes Pasha's filter to provide an inaccurate state estimate of the target, whereas the proposed filter reduces the difference by estimating the turn rate of the target at each time step. The second factor is the filter's memory: because of the poor memory of Pasha's filter, it is prone to discarding the information of a target from the posterior intensity and cannot provide its state estimate when the target goes undetected by the sensor at some time steps, whereas the proposed filter provides the state estimate of a missed target owing to its sufficient memory of the target. The effect of the filter's memory on the OSPA distance is discussed in detail in [20]. As shown in Figure 5, a peak appears at t = 60 s because the proposed filter still furnishes a state estimate of target 5 at its disappearing time: by the definition in [21], the OSPA distance measures the similarity between two sets, and an excessive or deficient state estimate of a target is punished with the cutoff distance. To reveal the effect of the clutter rate and detection probability on the tracking performance of the proposed filter, we generate simulation measurements with different clutter rates and detection probabilities and run the proposed filter and Pasha's filter on them for 150 trials each; Tables 1 and 2 show the results obtained at the different clutter rates and detection probabilities.
Table 1 suggests that an increase in the clutter rate leads to a larger OSPA distance for both the proposed filter and Pasha's filter, but the proposed filter performs better than Pasha's filter at every clutter rate. A similar conclusion follows from Table 2: a decrease in the detection probability enlarges the OSPA distance of both filters, but the proposed filter obtains a smaller OSPA distance than Pasha's filter at every detection probability. The running time is also an important measure of a filter's performance. Table 3 displays the time required per trial for the proposed filter and Pasha's filter at different clutter rates. Based on Table 3, the proposed filter requires more time per trial than Pasha's filter, because it estimates the turn rate of each target at each time step and this estimation requires a number of additional calculations. Conclusions In this study, a formula for calculating the turn rate of a maneuvering target is derived. Based on this formula, the turn rate of a target can be estimated from its state vector at the previous time and its measurement at the current time. Applying this estimation method to the MB filter, we present an MB filter with an adaptive estimation of the turn rate for tracking multiple targets maneuvering at random turn rates. Using simulated experimental data, we test the performance of the proposed filter by comparing it with Pasha's filter. The experimental results suggest that the proposed filter is better than Pasha's filter at tracking targets that maneuver at a variable turn rate.
2017-05-24T23:11:04.010Z
2017-02-01T00:00:00.000
{ "year": 2017, "sha1": "20062910204fd996feaec810bac773c9839e6fa6", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/17/2/373/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "20062910204fd996feaec810bac773c9839e6fa6", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering", "Medicine", "Computer Science" ] }
6605336
pes2o/s2orc
v3-fos-license
Construction and Evaluation of Rodent-Specific rTMS Coils Rodent models of transcranial magnetic stimulation (TMS) play a crucial role in aiding the understanding of the cellular and molecular mechanisms underlying TMS-induced plasticity. Rodent-specific TMS coils have previously been used to deliver focal stimulation at the cost of stimulus intensity (12 mT). Here we describe two novel TMS coils designed to deliver repetitive TMS (rTMS) at greater stimulation intensities whilst maintaining spatial resolution. Two circular coils (8 mm outer diameter) were constructed with either an air core or a pure iron core. Peak magnetic field strengths for the air and iron cores were 90 and 120 mT, respectively, with the iron-core coil exhibiting less focality. Coil temperature and magnetic field stability for the two coils undergoing rTMS were similar at 1 Hz but varied at 10 Hz. Finite element modeling of 10 Hz rTMS with the iron core in a simplified rat brain model suggests peak electric fields of 85 and 12.7 V/m within the skull and the brain, respectively. Delivering 10 Hz rTMS to the motor cortex of anaesthetized rats with the iron-core coil significantly increased motor evoked potential amplitudes immediately after stimulation (n = 4). Our results suggest that these novel coils generate modest magnetic and electric fields capable of altering cortical excitability, and provide an alternative method to investigate the mechanisms underlying rTMS-induced plasticity in an experimental setting. INTRODUCTION Transcranial magnetic stimulation (TMS) has excellent potential for modulating human brain plasticity; however, the cellular and molecular mechanisms underlying TMS-induced plasticity remain poorly understood. Rodent models of TMS play a significant role in understanding TMS-induced plasticity mechanisms, as they offer a more direct measure of TMS-induced synaptic and non-synaptic plasticity. However, one of the main limitations of rodent models of TMS is the lack of rodent-specific TMS stimulator coils. For example, most rodent studies use commercial human coils that are larger than the rodent brain, such as "small" figure-of-eight (Vahabzadeh-Hagh et al., 2011; Hoppenrath and Funke, 2013) or round coils (Gersner et al., 2011). While the use of such coils allows stimulation at the high intensities used in humans (1-2 T), they lack the equivalent spatial resolution (Weissman et al., 1992) (Figure 1A). Offsetting the coil position can achieve greater stimulation focality (Rotenberg et al., 2010; Vahabzadeh-Hagh et al., 2011); an alternative approach for rodent TMS, however, is to scale down coil size to improve focality. Whilst recent work has shown that coil size can be dramatically reduced while maintaining high-intensity capability, this still results in relatively unfocal stimulation (Parthoens et al., 2016). In contrast, compromising stimulation intensity for greater focality, rodent-specific coils (circular, 8 mm outer diameter, ∼12 mT; Figure 1B) have recently been shown to induce structural and molecular plasticity in midbrain and cortical brain regions of mice (Rodger et al., 2012; Makowiecki et al., 2014). However, the effects induced by low-intensity TMS may not be representative of the changes produced by the high-intensity stimulation used in human TMS studies (Grehl et al., 2015). Thus, there is a further need to develop a small-animal coil that can deliver TMS at higher intensities whilst maintaining a good degree of spatial resolution (i.e., focality).
However, maintaining high stimulation intensities in small coils has physical constraints such as increased thermal and mechanical stress (Cohen and Cuffin, 1991). The stimulation intensities that can be reliably delivered in an experimental setting by rodent-specific coils have yet to be explored. Here we describe two novel rodent-specific TMS coils that deliver stimulation at modest intensities (∼100 mT) to the rodent brain whilst maintaining the spatial resolution of previous low-intensity rodent coils. These small coils provide an alternative approach to the use of non-focal high-intensity human coils or focal low-intensity rodent coils, to investigate TMS neuromodulation in rodents.

FIGURE 1 | Schematic diagrams of coils and waveforms. Commercial 70 mm round coil over a rat brain (A). Rodent-specific 8 mm round coil placed over the rat brain (B). Bird's-eye (top) and side-on views (bottom) of the novel air-core coil (left) and iron-core coil (right) (C). Diagram of the input coil voltage (top) and resulting magnetic field output as measured by a Hall device (bottom) (D).

Coil and Stimulation Parameters Two custom circular coils of the same dimensions (8 mm height × 8 mm outer diameter) with either an air or iron core were constructed (Figure 1C). Insulated copper wire (0.125 mm diameter, Brocott UK, Yorkshire, UK) was wound (780 turns) around a steel or plastic bobbin (inner diameter 4 mm and outer diameter 8 mm). Coils were wound with a fine wire coil-winding machine (Shining Sun SW-202B, Taipei, China). Stimulation parameters were controlled by a waveform generator (Agilent Technologies 335141B, CA, USA) connected to a bipolar voltage programmable power supply (KEPCO BOP 100-4M, TMG test equipment, Melbourne, Australia). Current in the coil flowed in a direction that induces an anterior-to-posterior current across the left rat motor cortex (i.e., posterior to anterior in the coil). Experiments were conducted at 100% of the maximum power supply output (100 V) using custom biphasic waveforms (400 µs rise, 400 µs fall, and 100 µs rise, Figure 1D) (Agilent Benchlink Waveform Builder, CA, USA).

Magnetic Field Decay and Measurements We used a Hall Effect probe to measure the magnetic field magnitude generated by the coils. Coils were fixed to a stereotaxic frame and manipulated around the Hall Effect probe (Honeywell SS94A2D, NJ, USA). Measurements of single pulse stimulation were taken in the perpendicular (xy) and parallel (z) axes relative to the main axis of the coil. Due to the axial symmetry of the circular coil, measurements in the x axis also represent the y axis and are therefore referred to as xy. Coil centers were positioned directly above the Hall Effect probe (xy, z = 0 mm) and repositioned independently at 1 mm increments to a maximum distance of 10 mm in each axis (xy max = +10 mm and z max = +10 mm). The peak Hall Effect voltage from the rising phase of the biphasic pulse was recorded for 4 pulses at each coordinate and averaged to obtain mean field strength as a function of position. Hall Effect voltages were recorded and analyzed with data acquisition software (Labchart 6, ADI Instruments, Sydney, NSW, Australia). Here we define magnetic field focality as the distance at which the magnetic field is reduced to 50% of the peak.
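As a minimal sketch of how this half-maximum focality metric can be read off the probe data, the following Python snippet linearly interpolates between the 1 mm samples; the field values are illustrative placeholders, not the measurements reported in this study.

```python
import numpy as np

# Illustrative mean peak-field values (mT) at 1 mm steps from the coil center;
# these numbers are placeholders, not the Hall-probe data from this study.
positions_mm = np.arange(0, 11)
field_mT = np.array([119, 112, 98, 80, 62, 48, 37, 29, 23, 18, 15], dtype=float)

def half_max_distance(pos, field):
    """Distance at which the field first falls to 50% of its peak value,
    linearly interpolated between adjacent sample points."""
    half = field.max() / 2.0
    below = np.flatnonzero(field <= half)
    if below.size == 0:
        return None  # field never drops to half-maximum within the scanned range
    i = below[0]
    x0, x1, y0, y1 = pos[i - 1], pos[i], field[i - 1], field[i]
    return x0 + (half - y0) * (x1 - x0) / (y1 - y0)

print(f"Half-maximum distance: {half_max_distance(positions_mm, field_mT):.2f} mm")
```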
Field Strength during 1 and 10 Hz Stimulation Magnetic field measurements (with the Hall Effect probe) were averaged across the first and last 10 pulses of a 600-pulse train delivered at 1 and 10 Hz, and stability was defined as the ratio of these averages expressed as a percentage. Stability % = (Mean magnetic field of the last 10 pulses/Mean magnetic field of the first 10 pulses) × 100.

Temperature Measurements Coils were fixed to a K-type thermocouple (−40 to 260 °C, Dick Smith Electronics Q1437, Perth, WA, Australia) and temperature recordings were taken every 50 pulses during the 1 and 10 Hz protocols.

Sound Measurements Attempts were made to measure the sound intensity/sound pressure level (SPL) of the brief clicks emitted by the coils undergoing 1 and 10 Hz rTMS using a 1/2" condenser microphone (Bruel and Kjaer Type 4134, Sydney, Australia) placed as close as possible to the coil. The microphone was calibrated using a Bruel and Kjaer Type 4231 calibrator. The output of the 1/2" microphone was viewed directly on an oscilloscope screen (Rigol DS1052E 50 MHz, Measurement Innovation, Perth, WA, Australia). It was found that there was a major artifact in the microphone output that was induced by the magnetic field from the coil, and this could not be eliminated by shielding. This induced artifact was critically dependent on the spatial relationship between the TMS coil and the recording microphone, with the smallest artifact being present when these were at right angles to each other. Under these circumstances, the signal from the microphone (presumably a mixture of induced artifact and real acoustic signal) had a peak amplitude that corresponded to approximately 75 dB SPL (re 20 µPa). Because there was no way of separating artifact and acoustic signal in this method, this was thought to be an overestimate of the real sound pressure of the acoustic clicks emitted by the coil. A bioassay method was then used, using two normal-hearing human listeners. The sound from the coil inserted into the external ear canal of one ear was matched in apparent loudness to a brief click presented to the other ear using calibrated custom sound generating equipment described in detail elsewhere (Mulders et al., 2011). The duration and spectral content of the clicks were adjusted to match as closely as possible the clicks emitted by the TMS coil.

Finite Element Modeling Finite element modeling (FEM) was performed on a high-throughput computer cluster consisting of 423 nodes and 4,296 processors using the commercially available Multiphysics 5.0 AC/DC module (COMSOL, Burlington, NJ, USA) to give a general estimate of the induced electric field strength within the animal's brain tissue during the magnetic stimulation. The geometry of the model was based on the coil used empirically, specifically a multi-turn circular copper wire coil (780 turns, inner diameter of 4 mm, wire diameter of 0.125 mm) containing a soft iron (with losses) core. Only the iron-core coil was modeled, as this produced the greatest magnetic field and had better magnetic field stability during rTMS (see RESULTS below). Simulations were performed by driving the coil with a 100 V input (dI/dT = 1.83 mA/µs) in the frequency domain, using the rise-time frequency of the biphasic pulse (2.5 kHz). This method is similar to that used in studies modeling magnetic stimulation of neural tissue; the methods sections of those studies review the equations used for magnetic and electric fields in COMSOL (Bonmassar et al., 2012; Gasca, 2013).
Modeling of the electric field was performed using a simplified geometry (Gasca, 2013) of the rat brain, an ellipsoid of 21 mm × 15.5 mm × 10.75 mm, taken from a rat brain atlas (Paxinos and Watson, 1982). The skull was modeled with a thickness of 0.7 mm, the average depth of the rat skull (Levchakov et al., 2006; O'Reilly et al., 2011). The simulation was performed with the coil positioned 0.25 mm above the center of the skull. Dielectric properties for human gray matter and bone were used, taken from the Foundation for Research on Information Technologies in Society dielectric tissue properties database (Hasgall et al., 2014). These dielectric properties have been used in previous studies of magnetic stimulation in rodents (Nowak et al., 2011; Gasca, 2013; Crowther et al., 2014), and are shown in Table 1. Incorporating the frequency dependence of tissue is an important consideration, as low-frequency properties are controlled by the conduction of electrolytes in extracellular space, while high frequencies initiate several biophysical processes which change the dielectric properties of the tissue (Foster, 2000). Modeling was completed with the magnetic fields (mf) physics interface, and consisted of five domains: the brain, the skull, the surrounding air, and a multi-turn coil domain inclusive of the iron core and copper wire. The surrounding air domain was created using a condition that approximates the domain as set to infinity so that boundary conditions do not affect the solution. Geometry was discretized to the "extra fine" mesh setting with a swept mesh for the infinite air domain and a boundary condition mesh set around the iron core. To compare our iron-core rodent coil with a commercial coil, we ran an additional FEM model on the Magventure MC-B65 butterfly coil placed 7 mm above the ellipsoid rat brain model (as described above). The butterfly coil was modeled similarly to other papers but with parameters specific to the Magventure coil, as two sets of five concentric wires with diameters from 35 to 75 mm, spaced 5 mm apart, and placed 7 mm from the skull (due to the plastic casing; Thielscher and Kammer, 2004; Salvador et al., 2015). The coil input was set at 70% of the maximum stimulator output (MSO; dI/dT of 112 A/µs).

Anesthesia and Electromyography To determine whether the modest intensities of the rodent-specific coils are suitable for neuromodulation, we delivered sham stimulation (rodent coil disconnected from the power supply) or 10 Hz rTMS to the primary motor cortex of Sprague-Dawley rats (n = 4, 250-400 g males) with the iron-core coil. The iron-core coil was selected as it produced the greatest intensity and reliable stimulation at higher frequencies (see RESULTS below). Animals underwent two motor evoked potential (MEP) recording sessions (a sham stimulation session and an rTMS session over two consecutive days). Animals were pseudo-randomized into MEP sessions, such that an equal number of animals (n = 2) received sham and rTMS in sessions 1 and 2. Changes in cortical excitability (MEPs) were characterized with single pulse TMS and electromyography (EMG) recordings of the rat forelimb as described by Rotenberg et al. (2010). Briefly, rats were deeply anesthetized (verified by absence of pinch reflex) with an intraperitoneal injection of ketamine-xylazine (50 and 10 mg/kg, respectively, Troy Ilium, Sydney, NSW, Australia) and placed into an electrically grounded stereotaxic frame.
The torso and all points of contact (ear bars and nose bar) between the animal and the metal frame were insulated with a thin layer of paraffin film to prevent any electrical conductance between the animal and the stereotaxic frame. Subdermal needle electrodes (13 mm 27G, Neuro Source Medical, ON, Canada) were inserted into the right brachioradialis muscle (recording electrode) and between the 3rd and 4th digit of the right forepaw (reference electrode). Animals were electrically grounded with a single needle electrode inserted into the base of the tail. EMG signals were amplified (×1000), band-pass filtered (0.1-1000 Hz) (World Precision Instruments DAM50 Bio-amplifier, Coherent Scientific, Adelaide, SA, Australia) and acquired at a sampling rate of 40 kHz (Powerlab 4/30, ADI Instruments, Sydney, NSW, Australia) with Scope software 4.1.1 (ADI Instruments, Sydney, NSW, Australia). EMG recordings were stored for post-hoc analysis (MEP peak-to-peak amplitudes). Automated calculation of MEP amplitudes was performed in a 10-20 ms window post single pulse TMS (i.e., MEPs had a latency of 10-20 ms post-stimulus). All procedures were approved by the University of Western Australia animal ethics committee (RA/3/100/1371).

Single Pulse TMS and rTMS A MagPro R30 stimulator equipped with a Magventure BC-65 butterfly coil (Magventure, Farum, Denmark) was used to deliver single pulse TMS over the left motor cortex. MEP recordings were rapidly generated at 75% of machine stimulator output immediately before and after sham or rTMS stimulation (an intensity known to produce suprathreshold stimulation in rats anaesthetized with ketamine-xylazine; Vahabzadeh-Hagh et al., 2011). Single pulse parameters consisted of 8 pulses with an inter-stimulus interval of 7 s. Immediately following baseline MEP recordings, the iron-core rodent coil (base wrapped in a thin layer of paraffin to insulate the iron core from the animal) replaced the figure-of-eight coil and was placed on the rat head (lightly touching the skull), such that the coil windings overlaid the left motor cortex. This coil position was chosen as the greatest induced current occurs under the windings and not at the coil center. Stimulation consisted of 3 min of sham or 10 Hz rTMS (total of 1800 pulses). Immediately after stimulation, MEPs were recorded (as described above). We chose to record MEPs immediately following stimulation as studies in humans suggest that effects are maximal within the first 20-30 min following stimulation. In addition, we wished to avoid continuous dosing of anesthetic, which results in fluctuations of cortical excitability that are different for each animal. Therefore, to maximize the consistency of MEP measurements between animals, we restricted our MEP measurements to within a 30 min window where cortical excitability and anesthesia depth were stable after a single anesthetic injection (as confirmed in sham animals). At the end of each MEP recording session, anesthesia was reversed with an intraperitoneal injection of atipamezole (1 mg/kg, Troy Ilium, Sydney, NSW, Australia) to increase the survival of the animals. All experimental procedures were approved by the UWA Animal Ethics Committee (03/100/1371).

Data Analysis Statistical analysis was performed with SPSS (IBM, New York, NY, USA). All means are presented with their respective standard error of the mean.
For magnetic field stability, a multivariate ANOVA was conducted to detect coil-type (independent variable) differences in magnetic field stability at 1 and 10 Hz (dependent variables). For MEP amplitudes, a ratio of the mean post-stimulation MEP amplitude relative to the mean sham MEP amplitude was calculated and log-transformed for analysis. A paired t-test was conducted to detect whether rTMS (independent variable) altered MEP amplitudes (dependent variable). We also used 95% confidence intervals to support our use of parametric analysis.

Magnetic Field Strength - Peak Values and Decay Magnetic field strength in the xy and z axes is illustrated in Figure 2. The iron-core coil produced a greater peak magnetic field (119.05 ± 0.42 mT) relative to the air-core coil (89.50 ± 6.56 mT) but with decreased focality. The half-maximum field occurred at ∼1.2 mm in the z axis and ∼3.5 mm in the xy axis (air-core), and at ∼2 mm in the z axis and ∼4 mm in the xy axis (iron-core).

Changes in Coil Temperature Temperature measurements over 600 pulses of 1 and 10 Hz stimulation showed frequency- and coil-type-dependent changes (Figure 2C). Increases in coil temperature for 1 Hz stimulation peaked at 5.8 ± 0.40 °C (air-core) and 1.67 ± 0.38 °C (iron-core). Peak increases in coil temperature for 10 Hz stimulation were 17.43 ± 1.07 °C (air-core) and 3.57 ± 0.47 °C (iron-core). The change in temperature of the iron-core coil undergoing 1800 pulses of 10 Hz stimulation for neuromodulation and EMG assessment (see below) peaked at 6.8 ± 0.24 °C.

Sound Emission from Coils Measurement of the sound pressure level at the base of the coils undergoing rTMS with a sound level meter failed to give an accurate measurement due to the biphasic stimulus artifact induced in the microphone by the rTMS. Using the bioassay method, an approximation of the peak sound intensity of the TMS clicks emitted by the coils was ∼26 dB SPL.

Magnetic Field Stability A MANOVA on the magnetic field stability measurements (Figure 2D) showed statistically significant coil differences at 10 Hz stimulation (p < 0.01) but not at 1 Hz stimulation (p = 0.084). At 1 Hz stimulation, both coils showed high stability (100.03 ± 1.03% for the air-core and 99.70 ± 0.93% for the iron-core) at the end, relative to the beginning, of the stimulation train. However, at 10 Hz stimulation, magnetic field stability was reduced in the air-core coil (89.20 ± 1.05%, reflecting a reduction in magnetic field intensity toward the end of stimulation), whereas there was no change in stability for the iron-core coil (99.65 ± 1.02%).

Finite Element Modeling Results from the FEM simulation found a magnetic field strength of 115 mT directly below the windings of the coil. The magnetic field distribution (mT) in the xy (coronal) plane is shown in Figure 3A, and the current density is represented by the arrows in Figure 3B. The maximum electric fields simulated within the skull and brain were 85 and 12.7 V/m, respectively (Figures 3C,D). These were located below the windings of the coil, similar to the placement of the coil used for the MEP recordings. The estimated electric field was >10 V/m up to a depth of 0.7 mm, >5 V/m to 1.4 mm, and >1 V/m to 3.3 mm. The peak electric fields in the rodent model under the Magventure coil were 1 order of magnitude larger than those of our rodent coils, at 856 V/m in the skull and 224 V/m in the brain (Figure 4A). The induced electric field was also larger, with an estimated electric field of >150 V/m at a depth of 10 mm from the surface (Figure 4B).
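The MEP statistics described under Data Analysis can be sketched in a few lines of Python. The per-animal amplitudes below are illustrative placeholders (n = 4, as in the study); the log10 ratio transform and two-tailed paired t-test mirror the analysis described above.

```python
import numpy as np
from scipy import stats

# Illustrative per-animal mean MEP amplitudes (mV); not the recorded data.
baseline = np.array([0.42, 0.55, 0.38, 0.61])
post_rtms = np.array([0.66, 0.82, 0.63, 0.95])
post_sham = np.array([0.41, 0.54, 0.37, 0.60])

# Express each condition as a post/baseline ratio and log10-transform it,
# so that equal-sized increases and decreases are symmetric around zero.
log_rtms = np.log10(post_rtms / baseline)
log_sham = np.log10(post_sham / baseline)

t, p = stats.ttest_rel(log_rtms, log_sham)  # paired, two-tailed by default
print(f"t({len(baseline) - 1}) = {t:.3f}, p = {p:.3f}")
```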
10 Hz rTMS and Cortical Excitability Following sham stimulation, mean MEP amplitude was 98.25% ± 3.207 of the mean baseline MEP amplitude. Following 10 Hz rTMS with the iron-core coil, MEP amplitude was 157.1% ± 15.92 of the mean baseline MEP amplitude. A two-tailed paired t-test conducted on the log10-transformed ratios (post-stimulation amplitude/baseline amplitude; Figure 5) revealed a significant difference between the sham and rTMS (mean = 0.198, SD = 0.116) conditions; t = 3.403, df = 3, p = 0.042.

DISCUSSION We have developed and characterized two novel rodent-specific TMS coils that can deliver greater stimulation intensities than previous rodent-specific coils of similar size (∼12 mT) (Rodger et al., 2012; Makowiecki et al., 2014; Tang A. D. et al., 2015). As expected, the addition of an iron core increased field strength relative to the air-core coil (Epstein and Davey, 2002), but with a trade-off between greater magnetic field penetration and decreased focality of the iron-core relative to the air-core coil (Deng et al., 2013). Finite element modeling of the iron-core coil undergoing rTMS suggests the electric field induced in a simplified rat brain is approximately 1 order of magnitude lower than that of commercially available human stimulators. Unlike sham stimulation, 10 Hz rTMS with the iron-core coil significantly increased MEP amplitudes relative to baseline. Our results show that the iron-core coil displays good temperature and magnetic field stability at both 1 and 10 Hz stimulation, whereas the air-core coil showed a large increase in temperature and decrease in magnetic field stability at 10 Hz. We attribute the corresponding reduction in field strength to a temperature-related increase in resistance within the copper coil wire. Greater temperature and field stability in the iron-core coil suggests that the core potentially acted as a heat sink, minimizing heat retention in the copper coil windings. By contrast, temperature increased in the air-core coil most likely because air is a poor conductor of heat. However, it is important that any additional rTMS stimulation protocols be evaluated prior to use, as the efficacy of the iron core as a heat sink is likely to diminish with higher frequencies (e.g., theta burst protocols), repeated blocks of stimulation, or longer durations, which may cause excessive heating in the coil with potential harm to the rodents. Given the greater magnetic field output and thermal stress performance of the iron-core coil, we suggest the iron-core coil is more suitable for use in rodent studies, particularly at high frequency stimulation.

FIGURE 2 | Characterization of coil properties. Magnetic field decay in the z (A) and xy (B) axes, where 0 is the center of the coils, shows the iron-core coil produced a greater peak magnetic field (119.05 mT) than the air-core coil (89.50 mT) with a trade-off of focality. Half-maximum field occurred at ∼1.2 mm z axis, ∼3.5 mm xy axis (air-core) and ∼2 mm z axis, ∼4 mm xy axis (iron-core). Changes in coil temperature during 600 pulses of 1 and 10 Hz rTMS (C) show tolerable changes in temperature (≤5 °C) for the iron-core coil at both frequencies; 10 Hz stimulation with the air-core coil resulted in a large temperature change (∼17.5 °C). Magnetic field stability (D) shows the iron-core coil has high stability at both 1 and 10 Hz stimulation, whereas magnetic field stability for the air-core coil significantly decreased at 10 Hz (*p < 0.001).

FIGURE 3 | Finite element modeling of the iron-core coil. The magnitude of the magnetic field (mT) and magnetic flux density in the xy plane (A). The arrows represent the direction of the current density separated into 15 bins. The induced current density within the brain, shown by normalized arrows separated into 12 equal bins for the xy grid and 4 in the z direction (B). Electric field magnitude (V/m) in a coronal slice of the ellipsoids representing the skull and brain below the coil windings (C). The inset shows an enlarged view of the electric field at the brain and skull interface. The simulated electric field strength within the skull and brain as a function of depth (D). The inset shows electric field strength within the brain domain on a different y-axis scale.

Decreasing coil size has raised the question of stimulation efficiency, as smaller coils induce proportionally smaller electric fields. Our calculations are consistent with a model of a commercial TMS stimulator and coil over a mouse brain, which found peak magnetic and electric fields of 1.7 T and 132 V/m, respectively, approximately 1 order of magnitude larger than our small custom coils (Crowther et al., 2014). Furthermore, our calculations suggest the induced electric field from the iron-core coil reaches approximately 10% of the electric field needed for axonal suprathreshold stimulation (100 V/m). Therefore, to investigate whether the modest magnetic/electric field strength delivered by the iron-core coil (∼120 mT) could induce neuromodulatory effects, we delivered 10 Hz rTMS to a small number of anaesthetized rats combined with EMG recordings to quantify possible changes in MEPs. The iron-core coil was selected as it not only produced the strongest field strength but also showed greater temperature stability and stimulation reliability with high frequency rTMS. Our results showed 10 Hz rTMS significantly increased MEP amplitudes immediately after stimulation, with a mean increase of approximately 57% relative to baseline recordings. These findings are in line with both human (Arai et al., 2007; Jung et al., 2007) and rodent studies (Hsieh et al., 2015) that showed increased MEP amplitudes with subthreshold high frequency rTMS delivered with commercial stimulators and coils. However, although our results provide preliminary evidence that these modest magnetic/electric field intensities can induce neuromodulatory effects in rats, further characterization of changes in cortical excitability and molecular markers is needed. Unlike high intensity rTMS, which involves NMDA and AMPA receptors as elegantly demonstrated by recent publications from the Vlachos and Funke research groups (Labedi et al., 2014; Lenz et al., 2015, 2016), low and moderate intensity rTMS as delivered here is likely to be subthreshold for action potentials, and therefore to involve different mechanisms such as changes in intracellular calcium and BDNF levels (Makowiecki et al., 2014; Grehl et al., 2015). By providing a full characterization of the biophysical properties of our small coils, our report will enable future studies to examine in more depth the molecular and cellular mechanisms involved in the induction of cortical plasticity. It will also be important to determine whether the plasticity induced by these small coils is unilateral or bilateral, as well as to characterize changes in corticospinal excitability with complete input-output curves, time courses of changes, and frequency-specific effects.
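For intuition about the size-versus-decay trade-off discussed above, a first-order estimate of the on-axis field of a small multi-turn coil can be computed from the ideal circular-loop formula. This sketch deliberately ignores the iron core and lumps all 780 turns at one assumed mean radius; the peak current is a rough back-calculation from the quoted dI/dT (1.83 mA/µs) over the 400 µs rise, so the outputs are order-of-magnitude estimates only, not the paper's measurements.

```python
import numpy as np

MU0 = 4e-7 * np.pi        # vacuum permeability (T*m/A)
N_TURNS = 780             # turns, as in the custom coils
R_COIL = 3e-3             # assumed mean winding radius (m); bobbin spans 2-4 mm
I_PEAK = 1.83e-3 * 400    # assumed peak current (A): dI/dT * 400 us rise time

def on_axis_field(z_m):
    """On-axis flux density (T) of an ideal air-core N-turn circular loop."""
    return MU0 * N_TURNS * I_PEAK * R_COIL**2 / (2 * (R_COIL**2 + z_m**2) ** 1.5)

for z_mm in (0, 1, 2, 5, 10):
    print(f"z = {z_mm:2d} mm: B ~ {on_axis_field(z_mm * 1e-3) * 1e3:6.1f} mT")
```

Even this crude air-core approximation reproduces the qualitative behavior measured above: a peak on the order of 100 mT that decays to a few millitesla within 10 mm of the coil.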
Approximation of the induced electric field focality of the iron-core coil with FEM modeling showed that the induced electric field peaked below the windings of the coil, in line with FEM modeling of commercial coils in spherical head models (Deng et al., 2013). Furthermore, the spread of the electric field was highly localized and underwent a rapid decay to <1 V/m within millimeters of the peak field. An estimate of stimulation penetration shows that the induced electric field remains above 1 V/m at a distance of 4 mm below the surface of the coil. Accounting for skull thickness (0.7 mm), this equates to an electric field greater than 1 V/m to a depth of ∼3.3 mm in the rat brain. This is in contrast to the induced electric field produced with a commercial butterfly coil, which resulted in a greater peak electric field (224 V/m) and a more widespread electric field, such that the electric field was >150 V/m at a depth of 10 mm from the surface of the brain and encapsulated the entire brain. This is similar to the electric field modeling with the commercial Cool-40 Rat coil, which induces a peak electric field of 220 V/m with a penetration of ≥50 V/m at a depth of ∼10 mm (Parthoens et al., 2016). These results suggest that although our coils produce weaker electric fields, they induce more focal stimulation. Given the rapid electric field decay with our coils, it is likely that stimulation is restricted to the cortical and superficial sub-cortical layers of the rat brain (e.g., the pyramidal cell layer of the hippocampus) depending on the coil position and orientation. Due to decreased skull thickness and brain size, we expect reduced focality/spatial resolution if used in smaller rodents, such as mice. Whilst this decreases the ability to target specific brain regions, it increases the ability to target deeper brain structures. A limitation of this study was the need to replace the rodent-specific coil after rTMS with a human figure-of-eight coil to induce MEPs. However, due to the subthreshold nature of our rodent-specific coils, eliciting MEPs with a stronger human coil was essential. The use of an unplugged coil to deliver sham is a potential limitation of the study. Whilst the unplugged coil sham maintains the mechanical stimuli of coil placement on the head and background auditory stimuli from the stimulator equipment, it lacks the auditory stimuli of the click sound produced by the coil during active TMS. Approximation of the sound pressure level generated by the air- and iron-core coils undergoing 10 Hz rTMS was ∼26 dB at the base of the coil. Previous rodent studies suggest that at this intensity, the low frequency sound emitted by the coils is below the hearing threshold of mice (Fernandez et al., 2010) and close to the threshold for rats (Borg, 1982). However, as sound intensity decreases with distance (the inverse square law), it is likely that the ∼26 dB at the base of the coil is an overestimation of any sound perceived in the ears of the animal and would be dependent on coil position. Furthermore, it is unlikely that the auditory and small vibration components of active stimulation would induce sensory (e.g., shifts in attention and alertness) and/or placebo (e.g., the belief that one is receiving active stimulation) side effects (Duecker and Sack, 2015) in animals (particularly anaesthetized animals, as in this study). FEM simulations using simplified spherical models are useful when approximating the general electric field properties in neural tissue.
However, simplified models come with limitations, which have been addressed in other modeling papers. One of these is that isotropic tissue conductivities are used (Miranda et al., 2013), though a recent paper found no substantial differences in the electric field distribution between models with isotropic versus anisotropic conductivities (Salvador et al., 2015), and another found only weak increases in electric field strength due to the anisotropy of brain tissue. Furthermore, the electric field estimations (which neglect local maxima at the gyral folds) do not take the radial electric field component into account, and are altered (and likely improved) in more detailed models (Salvador and Miranda, 2009; Thielscher et al., 2011). Whilst the rat and mouse cortex lacks folding and is relatively smooth, estimations of the electric field should be interpreted with care when extrapolating to regions like the cerebellum (where folding does occur in rats and mice) or to the brains of larger rodents, such as guinea pigs, which have more complex cortices.

CONCLUSION We provide an alternative method to deliver TMS to rodents by constructing small rodent-specific TMS coils capable of delivering modest stimulation intensity whilst maintaining stimulation focality. Our results show different field strengths, penetration, focality, and performance for each coil that need to be considered prior to coil selection. Whilst our coils induce modest magnetic and electric fields, we have shown preliminary evidence that such field strengths can induce neuromodulatory effects. Therefore, we suggest these moderate intensity rTMS coils provide a useful tool for the preclinical investigation of TMS plasticity in rodents.

AUTHOR CONTRIBUTIONS AT and AL conducted the experiments. AT wrote the first version of the manuscript as part of his PhD thesis. AT, AL, AG, RW, RG, AR, JW, and JR designed the study. AT, AL, AG, RW, and JR analyzed the data. All authors revised and proofed the manuscript.

FUNDING This work was supported by a National Health and Medical Research Council (NHMRC) of Australia project grant (APP1050261 to JS, AC, MH, JR, and MG). JR is supported by an NHMRC senior research fellowship (APP1002258). MH was supported by an Australian Research Council DECRA fellowship (DE120100729). AT receives a doctoral scholarship from an Australian Postgraduate Award, the University of Western Australia, and the Bruce and Betty Green Foundation.
Mesonephric-like Adenocarcinoma of the Uterine Corpus: Genomic and Immunohistochemical Profiling with Comprehensive Clinicopathological Analysis of 17 Consecutive Cases from a Single Institution

Data on the genetic and immunophenotypical characteristics of uterine mesonephric-like adenocarcinoma (MLA) remain limited. Therefore, we aimed to investigate the clinicopathological, immunohistochemical, and molecular features of uterine MLA. We performed targeted sequencing, array comparative genomic hybridization, and immunostaining in 17, 13, and 17 uterine MLA cases, respectively. Nine patients developed lung metastases. Eleven patients experienced disease recurrences. The most frequently mutated gene was Kirsten rat sarcoma viral oncogene homolog (KRAS; 13/17). Both the primary and matched metastatic tumors harbored identical KRAS (3/4) and phosphatase and tensin homolog deleted on chromosome 10 (1/4) mutations, and did not harbor any additional mutations. A total of 2 of the 17 cases harbored a tumor protein 53 (TP53) frameshift insertion and deletion, respectively. Chromosomal gains were detected in 1q (13/13), 10 (13/13), 20 (10/13), 2 (9/13), and 12 (6/13). Programmed cell death-ligand 1 overexpression or mismatch repair deficiency was not observed in any of the cases. Initial serosal extension and lung metastasis independently predicted recurrence-free survival, with hazard ratios of 6.30 and 7.31, respectively. Our observations consolidate the clinicopathological and molecular characteristics of uterine MLA. Both clinicians and pathologists should consider these features to make an accurate diagnosis of uterine MLA and to ensure appropriate therapeutic management of this rare entity.

Introduction Endometrial carcinoma (EC) is the sixth leading cause of carcinoma-related death among women worldwide [1-3]. The incidence rates of EC have been steadily increasing, particularly in developed countries [4,5]. The diagnosis of EC subtypes is based on their distinct morphological features [6]. Endometrioid carcinoma accounts for the majority of EC cases, followed by serous carcinoma, clear cell carcinoma, carcinosarcoma, and undifferentiated carcinoma [6]. The histological type and grade, as well as International Federation of Gynecology and Obstetrics (FIGO) staging, guide EC prognosis [6,7]; however, histological features overlap significantly between some subtypes, which makes accurate classification difficult. Consequently, continued efforts have been made to develop ancillary techniques, such as immunohistochemical staining (IHC) and molecular analyses, to stratify EC patients. Mesonephric-like adenocarcinoma (MLA) is a rare but distinct gynecological malignancy, primarily arising in the uterine corpus and adnexa. MLA was recently introduced as a new histological type in the 2020 version of the World Health Organization Classification of Female Genital Tumors [6]. Uterine MLA exhibits morphological, immunophenotypical, and genetic characteristics similar to those of mesonephric adenocarcinoma (MA), which is a rare malignant tumor derived from the mesonephric remnants located in the lateral wall of the vagina and uterine cervix [8]. The term MLA has been used for malignant mesonephric lesions arising in the uterine corpus [3,9,10], because it is still debated whether they are of mesonephric origin. Both clinicians and pathologists should be aware of the clinical, pathological, and molecular features of uterine MLA to make an accurate diagnosis and to ensure appropriate therapeutic management.
Next-generation sequencing (NGS), also known as massively parallel sequencing, allows for the effective capture of a substantial amount of genomic information regarding tumor development, progression, and biological behavior [11]. NGS is inextricably intertwined with the realization of precision medicine in oncology. While it is unlikely to obviate traditional pathological diagnosis in its current state, NGS allows a more complete picture of carcinogenesis, progression, and metastasis than can be seen with any other modality. Recent advances in sequencing technologies have provided substantial insights into the mutated carcinoma-related genes and mutational processes operative in EC [12,13]. Novel classification systems that incorporate molecular features have been developed to provide objective and reproducible EC categorization [14]. The addition of molecular and immunohistochemical markers allows for the identification of clinically relevant subgroups, with potential therapeutic implications [15]. However, despite significant advances directed toward elucidating molecular mechanisms and developing clinical trials for patients with EC, data on the specific genetic alterations and molecular characteristics of uterine MLA remain limited [3,16,17]. Immune checkpoints and inhibitory immunoreceptors, including programmed cell death protein 1 (PD-1) and its ligand (PD-L1), have gained attention as therapeutic targets [18]. PD-L1 expression on tumor and immune cells can be detected using IHC with different commercial PD-L1 clones. PD-1 and PD-L1 play progressively important roles in our understanding of tumor immunology and antitumor treatment [19]. Binding of PD-L1 to its receptor PD-1 leads to T-cell inactivation in a variety of carcinomas [20]. Therefore, anti-PD-1/PD-L1 treatment deregulates the adverse impact of tumor-infiltrating T-cells, which in turn may reverse tumor immune resistance [19]. Several clinical and experimental studies have investigated PD-L1 expression in EC and its prognostic value, as well as its efficacy as an immunotherapy target for EC [21-23]; however, these studies focused on endometrioid carcinomas and did not consider MLA as a separate or independent group. In addition, clinical trials that investigated the anti-PD-1 antibody pembrolizumab as a treatment for advanced or recurrent EC [24] did not separate MLA from other histological EC types. Our group has steadily documented the clinical manifestations, cytological and histological features, immunophenotypes, and mutational profiles of uterine MLA using IHC and molecular analyses [2,3,33-40]. However, to gain deep insight into their relevance, these results need to be consolidated and comprehensively analyzed. Therefore, in this study, we comprehensively investigated the clinicopathological, immunohistochemical, and molecular characteristics of uterine MLA and determined their relationships and prognostic significance. Copy number variation (CNV) log2 ratios were generated using a depth of coverage normalized to that of normal uterine tissues.
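As a minimal illustration of the log2 ratio just mentioned, the snippet below normalizes hypothetical per-bin tumor and normal depths to their own mean coverage before taking the ratio; the bin depths and the normalization scheme are simplifying assumptions, not the pipeline actually used here.

```python
import numpy as np

# Hypothetical per-bin sequencing depths (tumor vs. normal uterine reference).
tumor_depth = np.array([180.0, 240.0, 95.0, 410.0])
normal_depth = np.array([200.0, 210.0, 100.0, 205.0])

# Normalize each sample to its mean coverage, then take log2 ratios:
# ~0 is copy-neutral, positive values suggest gains, negative values losses.
log2_ratio = np.log2((tumor_depth / tumor_depth.mean()) /
                     (normal_depth / normal_depth.mean()))
print(np.round(log2_ratio, 2))
```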
PD-L1 22C3 pharmDx IHC FFPE tissue blocks were cut into 4 µm sections, which were subsequently mounted on Superfrost Plus Microscope Slides (Thermo Fisher) and dried at 60 °C for 1 h. PD-L1 IHC was carried out on a Dako Autostainer Link 48 system (Agilent Technologies) using a Dako PD-L1 IHC 22C3 pharmDx kit (Agilent Technologies) with the EnVision FLEX visualization system [46]. PD-L1 protein expression was assessed using the combined positive score (CPS). The CPS was calculated as the number of PD-L1-stained cells (tumor cells, lymphocytes, and macrophages) divided by the total number of tumor cells, multiplied by 100. The specimen was considered to have positive PD-L1 expression if CPS ≥ 1 [35,42,46].

IHC Interpretation PD-L1 immunoreactivity in uterine MLA was assessed using the CPS interpretation guideline, as previously described [47]. The CPS was calculated as the number of PD-L1-stained cells (viable tumor cells, lymphocytes, and macrophages) divided by the total number of viable tumor cells, multiplied by 100. A minimum of 100 viable tumor cells was considered adequate for evaluating PD-L1 positivity. For tumor cells, partial or complete membranous staining at any intensity was regarded as positive expression. Membranous and/or cytoplasmic staining at any intensity was regarded as positive for tumor-associated immune cells. MMR protein expression in uterine MLA was classified into three categories: preserved, loss, and subclonal loss [41,48,49]. A lack of expression of one or more MMR proteins was defined as MMRd, and preserved expression of all four MMR proteins was defined as MMRp. We regarded the complete absence of nuclear staining (0%) in the tumor cells, with appropriate internal control staining (positive nuclear expression in the stromal non-neoplastic cells or lymphocytes), as loss of expression [32].

Statistical Analysis Pearson's chi-squared test, Fisher's exact test, or the linear-by-linear association test was used to determine the association between recurrent pathogenic mutations and the clinicopathological characteristics of patients with uterine MLA. Univariate survival analysis, with the log-rank test and Kaplan-Meier plots (RFS and OS), was conducted to evaluate the prognostic significance of recurrent pathogenic mutations and clinicopathological characteristics. Multivariate survival analysis was performed using the Cox proportional hazards model (95% confidence interval) with the backward stepwise elimination method. All statistical analyses were performed using IBM SPSS Statistics for Windows, version 23.0 (IBM Corporation, Armonk, NY, USA). Statistical significance was defined as p < 0.05.
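Because the CPS arithmetic above drives the positivity calls reported below, a small worked example may help; the helper function and cell counts are hypothetical, but the formula and the 100-cell adequacy rule follow the interpretation guideline described above.

```python
def combined_positive_score(stained_tumor, stained_immune, viable_tumor_total):
    """CPS = (PD-L1-positive tumor + immune cells) / viable tumor cells * 100."""
    if viable_tumor_total < 100:
        raise ValueError("At least 100 viable tumor cells are required.")
    return 100.0 * (stained_tumor + stained_immune) / viable_tumor_total

# e.g., 3 stained tumor cells and 2 stained lymphocytes among 1,000 tumor cells
cps = combined_positive_score(3, 2, 1000)
print(f"CPS = {cps:.1f} -> {'positive' if cps >= 1 else 'negative'}")  # CPS = 0.5
```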
PD-L1 Expression, MMR Protein Expression, MSI Status, and Tumor Mutational Burden Table 3 summarizes the results of PD-L1 and MMR protein expression by IHC, and of multiplex PCR for MSI status determination. Representative photomicrographs illustrating the immunostaining are depicted in Figure 4. In the majority of cases (14/17), PD-L1 was not expressed in tumor or tumor-associated inflammatory cells (CPS 0). In the three cases that did express PD-L1, the PD-L1 CPS values were 0.5 (2/3) and 0.1 (1/3). In one of the cases with PD-L1 CPS 0.5, some tumor and inflammatory cells expressed PD-L1 with variable staining intensity. On high-power view, the cells exhibited weak membranous PD-L1 immunoreactivity as well as a paranuclear dot-like and Golgi staining pattern. In the other case with CPS 0.5, stromal lymphocytes and plasma cells expressed PD-L1. While the neoplastic glands were negative for PD-L1, the inflammatory cells surrounding the neoplastic glands and clusters of tumor cells reacted with PD-L1. In the case with PD-L1 CPS 0.1, only a small number of tumor cells expressed PD-L1 with weak-to-moderate staining intensity. Regarding MMR protein expression, all cases (17/17) retained MMR protein staining (for all four MMR proteins), indicating MMRp. Consistent with this finding, the 15 cases tested for MSI were interpreted as MSS. Tumor mutational burden was measured using NGS (Table 3). We observed a low tumor mutational burden in all examined cases, ranging from 2.70 to 4.72 mutations per megabase (median = 3.57).

Clinicopathological and Prognostic Significance of Recurrent Pathogenic Mutations Based on the pathogenic mutational status, there were no significant differences in clinicopathological characteristics (Table 4) or patient outcomes (Table 5). Univariate survival analysis revealed that initial serosal extension (p < 0.001) and initial or recurrent lung metastasis (p = 0.002) were significant predictors of RFS. Using multivariate survival analysis, we found that serosal extension and initial or recurrent lung metastasis were independent factors for RFS prediction, with hazard ratios of 6.30 (p = 0.037) and 7.31 (p = 0.02), respectively. None of the clinicopathological characteristics and recurrent pathogenic mutations were significantly associated with OS. Figures 5 and 6 display Kaplan-Meier plots with RFS and OS stratified by clinicopathological characteristics and pathogenic mutations, respectively. Abbreviations: KRAS-Kirsten rat sarcoma viral oncogene homolog; LN-lymph node; PIK3CA-phosphatidylinositol-4,5-bisphosphate 3-kinase catalytic subunit alpha; PTEN-phosphatase and tensin homolog deleted on chromosome 10.
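A survival comparison of the kind reported above can be sketched with the lifelines package; the patient table here is synthetic and the column names are invented for illustration, but the Kaplan-Meier fits and log-rank test parallel the univariate analysis described in the Methods.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Synthetic per-patient data: months of RFS, event flag (1 = recurrence),
# and initial serosal extension status (1 = present). Not the study cohort.
df = pd.DataFrame({
    "rfs_months": [8.0, 25.1, 63.5, 12.4, 40.2, 5.3, 30.0, 18.9],
    "recurrence": [1, 1, 0, 1, 0, 1, 0, 1],
    "serosal":    [1, 1, 0, 1, 0, 1, 0, 0],
})

km = KaplanMeierFitter()
for label, grp in df.groupby("serosal"):
    km.fit(grp["rfs_months"], grp["recurrence"], label=f"serosal={label}")
    print(f"serosal={label}, median RFS: {km.median_survival_time_}")

with_ext, without_ext = df[df.serosal == 1], df[df.serosal == 0]
res = logrank_test(with_ext.rfs_months, without_ext.rfs_months,
                   with_ext.recurrence, without_ext.recurrence)
print(f"log-rank p = {res.p_value:.3f}")
```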
Discussion In this study, we investigated the genetic features of 17 uterine MLA cases from a single institution. Seventeen primary and four matched metastatic tumors were available for SNV analysis. The most frequently mutated gene was KRAS, followed by PTEN, PIK3CA, GNAQ, CTNNB1, and ARID1A. The obtained results are consistent with previous data [3,40,50-52], in which KRAS mutations were found in the majority of uterine MLA cases (16/21). In recent studies, we demonstrated that all examined uterine (6/6) and ovarian (4/4) MLA cases harbored pathogenic KRAS mutations [3,10]. In addition, we summarized previously reported genetic abnormalities associated with ovarian MLA [10] and found that the most frequently mutated gene was KRAS (23/28), while PIK3CA, NRAS, and ARID1A mutations were uncommon. da Silva et al. [52] reported that the majority of ovarian and uterine MLAs harbored mutations in KRAS (25/28) and in genes frequently mutated in Mullerian tumors, including PIK3CA, CTNNB1, and PTEN. Collectively, the data support the concept put forward by Kolin et al. [51], which states that KRAS mutations can be considered one of the defining features of MLA. Although KRAS mutations are not unique to MLA [3,10,39,40,53] and can be identified in 20-30% of endometrial endometrioid carcinomas [54-56], with the additional use of histological features and immunophenotypes characteristic of MLA, the identification of KRAS mutations can confirm the diagnosis of MLA. We demonstrated that both the primary and matched metastatic tumors harbored identical KRAS (3/4) and PTEN (1/4) mutations, and did not harbor any additional mutations. da Silva et al. [52] analyzed the mutational profiles of two primary uterine MLAs and their respective distant metastases. Consistent with our result, in one case, both the primary and metastatic tumors harbored an identical KRAS mutation, without additional mutations. However, in the other case, the metastatic tumor exhibited KRAS mutations identical to the primary tumor and harbored additional mutations in genes related to the mitogen-activated protein kinase pathway (mitogen-activated protein kinase kinase kinase 13, mesenchymal epithelial transition, and mitogen-activated protein kinase 3). The latter finding suggests that the progression from primary to metastatic MLA may involve the acquisition of additional mutations. To the best of our knowledge, this study is the second to report PD-L1 expression in uterine MLA. Horn et al.
[59] first reported PD-L1 negativity in four cases of uterine MLA. In line with this finding, we observed that the majority of cases (14/17) exhibited a complete lack of PD-L1 immunoreactivity in both the tumor and immune cells (CPS 0). The remaining three cases focally expressed PD-L1, with CPS < 1. In addition, our observations of retained MMR protein expression and MSS in all examined cases are consistent with our previous findings [41]. We re-examined some uterine MLA cases misinterpreted as MMRd during the initial diagnosis and found that they should have been interpreted as MMRp tumors, confirming that they were MSS. A PD-1 inhibitor, pembrolizumab, was approved for advanced, recurrent, or metastatic MSI-H/MMRd ECs as a second-line treatment. Recently, the combination of pembrolizumab with an oral multikinase inhibitor, lenvatinib, has shown remarkable results, with an objective response rate of 36% and a median OS of 16.4 months for advanced non-MSI-H/MMRp ECs [24]. Although the effectiveness of lenvatinib and pembrolizumab combination therapy for uterine MLA has not been fully elucidated, a few case reports have documented excellent and durable responses to this combination therapy in patients with uterine MLA [60-62]. Considering that the treatment of patients with advanced or recurrent uterine MLA may differ depending on MSI status, MMR IHC in uterine MLA requires careful interpretation. Repeat IHC and MSI testing may improve the diagnosis of challenging cases. The Proactive Molecular Risk Classifier for Endometrial Cancer introduced four subgroups of EC: (1) the DNA polymerase epsilon, catalytic subunit (POLE)-mutant subgroup, harboring mutations in the exonuclease domain in exons 9-14; (2) the MMRd subgroup, showing loss of expression of one or more MMR proteins; (3) the p53-abnormal subgroup, demonstrating an aberrant p53 expression pattern indicating pathogenic TP53 mutation; and (4) the no specific molecular profile (NSMP) subgroup [63-65]. Even though the vast majority of NSMP ECs are low-grade endometrioid carcinomas, the NSMP subgroup also encompasses high-grade EC, clear cell carcinoma, undifferentiated carcinoma, carcinosarcoma, and MLA [36,40,50,51,53,57,65-68]. In this study, the majority of MLA cases harbored activating KRAS mutations but not POLE-mutant signatures, MMR deficiency, or TP53 mutations, confirming that these cases belong to the NSMP subgroup. Since the biological behavior of uterine MLA is consistently described as aggressive [36,67,69], the inclusion of this entity in the high-risk non-endometrioid group appears to be justified. Several studies have identified that high expression of the L1 cell adhesion molecule (L1CAM) and CTNNB1 mutation can add significant prognostic information to the molecular classification of EC [70-74]. The significance of L1CAM expression or CTNNB1 mutation in MLA patients has not been elucidated. Based on the relatively poor prognosis of MLA compared to other histological types belonging to the NSMP subgroup [3,67], MLA may well exhibit L1CAM overexpression or harbor CTNNB1 mutations. Further studies with a larger cohort of uterine MLA patients are necessary to clarify the clinicopathological and prognostic significance of L1CAM expression and the CTNNB1 mutation.
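The hierarchical logic of this molecular classification can be made explicit with a short sketch. The sequential order used here (POLE first, then MMR, then p53, with NSMP as the remainder) reflects how the classifier is commonly applied, and is an assumption layered on the subgroup definitions listed above rather than a quotation of the cited classifier.

```python
def molecular_subgroup(pole_mutant: bool, mmr_deficient: bool, p53_abnormal: bool) -> str:
    """Assign an EC molecular subgroup by sequential (hierarchical) testing."""
    if pole_mutant:
        return "POLE-mutant"
    if mmr_deficient:
        return "MMR-deficient"
    if p53_abnormal:
        return "p53-abnormal"
    return "NSMP"  # no specific molecular profile

# A KRAS-mutant MLA without POLE, MMR, or p53 abnormalities lands in NSMP:
print(molecular_subgroup(False, False, False))  # -> NSMP
```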
We found that 2 of the 17 uterine MLA cases harbored frameshift TP53 mutations. One patient with stage IA MLA did not experience recurrence and is alive at 63.5 months postoperatively, while the other patient with stage IA disease developed recurrence at 8.0 months after surgery and died of the disease at 25.1 months postoperatively. The TP53 mutation is known to be extremely uncommon in malignant mesonephric lesions [37,52,57,75], and its clinicopathological significance in patients with uterine MLA has not yet been investigated. We recently encountered a case of dedifferentiated uterine MLA harboring a TP53 mutation [39]. IHC revealed that the dedifferentiated component overexpressed p53, while the MLA component exhibited a wild-type p53 expression pattern. We cannot draw conclusions regarding the prognostic implications of the TP53 mutation in uterine MLA because only a few cases were found. However, based on our observations that the majority of uterine MLA cases did not harbor the TP53 mutation and that the clinical courses of the two patients with early-stage, TP53-mutant MLA were inconsistent, we believe that there is an urgent need to determine the clinicopathological and prognostic significance of the TP53 mutation in uterine MLA. Similar to previous results showing that the TP53 mutation in 'multiple-classifier' EC cases does not significantly affect disease development or prognosis [76], we hypothesize that the TP53 mutation occurs during the late stages of MLA progression and does not affect the molecular landscape, since only two frameshift TP53 mutations were detected in this study of uterine MLAs with pathogenic KRAS mutations, and the two cases displayed different outcomes. Further investigations are required to determine whether the TP53 mutation is a significant predictor of patient outcome in uterine MLA or merely a passenger event with no impact on biological behavior.
Therapeutic strategies tailored to both the genetic and epigenetic features of EC are the basis of precision medicine in gynecological oncology [77]. Aberrant expression of several cancer-related gene sets has been consistently reported to be a significant contributor to EC progression [78]. In addition to those mutational changes, epigenetic alterations, including methylation, acetylation, and phosphorylation of nuclear chromatin, play a central role in EC development and progression [78,79]. In particular, non-coding RNAs (ncRNAs) are involved in the regulation of cellular metabolism, growth, and neoplastic transformation [78,80]. They have very little or no protein-coding capability [81,82], but their expression patterns can modulate the function of oncogenes and tumor suppressors, resulting in either the promotion or suppression of tumorigenesis and progression [77]. Their regulation of gene expression can occur at different steps: at the epigenetic, transcriptional, and post-transcriptional levels [78]. Accumulating evidence shows that the abnormal expression of ncRNAs is associated with the prevalence and prognosis of many different types of human cancers [82,83]. Some deregulated ncRNAs have recently been suggested as potential risk factors that can better define the biological behavior of EC and be used as prognostic markers to guide the risk stratification of EC patients. It is surprising to note that the association between ncRNAs and EC has only recently been emerging in the literature, and that most of the papers regarding this association have been concentrated in the last three years [77]. The following competing endogenous RNAs seem to be associated with poor prognosis of EC: AC074212.1, ADARB2-AS1, C2orf48, C8orf49, C10orf91, FER1L4, FP671120.4, GLIS3-AS1, HOXB-AS1, LINC00483, LINC00491, LINC01143, LINC01352, LINC01410, LINC02381, MIR503HG, PCAT1, RP11-357H14.17, and RP11-89K21.1 [77]. In contrast, LINC00237, LINC00475, LINC00958, and LNCTAM34A were reported to exhibit a favorable prognostic effect in EC patients. The expression of these molecules was found to be deregulated in EC compared to normal endometrial tissue. Identifying deregulated microRNAs (miRs) remains an ongoing endeavor. miRs are short ncRNA molecules that function as post-transcriptional regulators of gene expression [84]. In a meta-analysis by Delangle et al. [85], a number of significantly deregulated miRs were identified and classified as onco-miRs, suppressor miRs, and those with discordant functions. Similarly, a recent systematic review by Bloomfield et al.
[86] revealed deregulated levels of circulating miRs in the serum and plasma of EC patients. These studies suggest that adequate combinations of miR expression with conventional pathological parameters of EC may serve as prognostic markers that can help in predicting the risk stratification of patients. Taken together, epigenetic modifications are gaining increasing importance for the characterization of EC. A group of molecules is emerging as identifiable risk factors that aid in establishing an accurate diagnosis and assessing clinical prognosis. In particular, there is a significant correlation between the alterations of several ncRNAs and miRs and the clinical course of EC patients, representing the possibility of including these molecules in stratifying patients at greater risk of relapse and worse outcome [82]. A comprehensive analysis of these molecules is the way to pursue personalized medicine, in which each patient is characterized by a specific set of epigenetic alterations, whose targets are well defined, and for whom drawing a therapeutic strategy would yield better results [80,82].

Conclusions We comprehensively investigated the clinicopathological and immunophenotypical features of 17 consecutive cases of uterine MLA from a single institution. We confirmed that uterine MLA is an aggressive malignancy, showing advanced stage, frequent postoperative disease recurrence, and frequent lung metastasis. Initial serosal extension and lung metastasis were independent prognostic factors for RFS prediction, while none of the clinicopathological or molecular features were significantly associated with the OS of uterine MLA patients. IHC revealed that none of the cases overexpressed PD-L1 or were MMR deficient. We conducted targeted sequencing to analyze the molecular features of uterine MLA. We found that the majority of cases harbored pathogenic KRAS mutations. Two cases harboring frameshift TP53 mutations were also identified, but the clinicopathological and prognostic significance of the TP53 mutation could not be determined. The most frequent chromosomal abnormalities were gains of chromosomes 1q, 2, 10, and 20. Both clinicians and pathologists should be aware of these features to establish an accurate diagnosis of uterine MLA and to ensure appropriate therapeutic management of this rare entity.

Informed Consent Statement: Owing to the retrospective nature of this study, the Institutional Review Board waived the requirement for the investigators to obtain signed informed consent.

Figure 1. Histological and immunophenotypical features of uterine mesonephric-like adenocarcinoma. (A) Tubular and glandular patterns showing compactly aggregated small-to-medium-sized tubules and elongated ductal structures. (B) Solid and tubular patterns showing solid cellular sheets and slit-like tubular lumina. (C) Complex small tubular proliferation with back-to-back arrangement. (D) Eosinophilic, hyaline- or colloid-like intraluminal secretions. (E) Lack of estrogen receptor expression. (F) Wild-type p53 immunostaining pattern. (G) Non-diffuse p16 positivity. (H) Moderate-to-strong nuclear immunoreactivity for transcription termination factor 1. (I) Uniform and strong GATA-binding protein 3 expression.

Figure 3. Diagram depicting chromosomal copy number variations, determined using array comparative genomic hybridization. Copy number variations are indicated in green or red for gain or loss in copy number, respectively. All cases of uterine mesonephric-like adenocarcinoma exhibit gains of chromosome 1q (yellow arrow) and 10 (orange arrow), and most of the cases exhibit gains of chromosome 2 (blue arrow) and 20 (purple arrow).

Figure 5. Kaplan-Meier plots showing the probability of recurrence-free survival (RFS) stratified by clinicopathological characteristics and pathogenic mutations. Initial serosal extension and lung metastasis significantly predict worse RFS of patients with uterine MLA.

Figure 6. Kaplan-Meier plots showing the probability of overall survival (OS) stratified by clinicopathological characteristics and pathogenic mutations. None of the examined parameters significantly predict worse OS of patients with uterine MLA. KRAS-Kirsten rat sarcoma viral oncogene homolog; PIK3CA-phosphatidylinositol-4,5-bisphosphate 3-kinase catalytic subunit alpha; PTEN-phosphatase and tensin homolog deleted on chromosome 10.
An Uncommon Presentation of Myasthenia Gravis with Thymoma: A Case Report and Review of Literature

Myasthenia Gravis (MG) is a rare disease of the neuromuscular junction which typically presents with fatigable weakness of cranial and limb muscles. But a patient may present with vague symptoms which may mislead physicians to an incorrect diagnosis. We report a 45-year-old cobbler who presented with complaints of lack of energy; aching pain in the shoulder, back, and upper arm; and difficulty in swallowing both solid and liquid food, without any diurnal variation, for two and a half years. He was ultimately diagnosed as a case of MG, and thymectomy revealed thymoma.

Keywords: Myasthenia Gravis, Thymoma

1. Professor of Neurology, BSMMU, Dhaka 2. MD Neurology (final part) student, BSMMU, Dhaka 3. Lecturer of Microbiology, Dhaka Medical College 4. Assistant Professor of Pathology, BSMMU, Dhaka. Correspondence: Dr. M A Hannan, Professor, Department of Neurology, Room no-1304, Block-D, BSMMU, Dhaka. Phone 01199803587

After consulting with us, he was examined and investigated accordingly. On examination we found partial ptosis, though the patient had never complained of it. Ice pack test, ceiling test, counting test and forearm abduction tests were inconclusive. Routine investigations showed no abnormality: Hb 11.8 gm/dl; total WBC count 9000/cmm (N 62%, L 28%, E 8%, B 2%); platelets 200,000/cmm; ESR 27 mm in the 1st hour; PBF unremarkable; RBS 4.9 mmol/L; serum electrolytes Na+ 134 mmol/L, K+ 3.6 mmol/L; urine R/E normal; thyroid function tests normal (T3 1.28 ng/ml, T4 8.45 microgram/dl, TSH 3.15 mU/ml); CPK 151 U/L; ANA negative; pANCA negative; cANCA negative; anti-Jo antibody negative; ECG normal. Antibodies associated with MG (anti-AChR Ab, anti-MuSK Ab, anti-skeletal muscle Ab) could not be done. A Repetitive Nerve Stimulation test was consistent with Myasthenia Gravis (9.2 to 20.9% decremental responses). Chest X-ray (P-A and right lateral views) and CT scan of the chest were done, but there was no evidence of thymic enlargement.

The patient was diagnosed as a case of generalized MG stage II-A. We started pyridostigmine in low dose, which showed significant improvement. The patient was referred to a cardiothoracic surgeon for thymectomy. Thymectomy was done on 2nd August, 2010 by standard median trans-sternal thoracotomy. Histopathological examination showed a well-capsulated tumour composed of lymphocytes and thymic epithelial cells. No cellular atypia or capsular invasion was seen. It was labeled as WHO classification Type AB (figure 2).

Fig.-1: MG patient with standard median trans-sternal thoracotomy.

Now the patient is on pyridostigmine 30 mg 6 hourly. Clinically he is much improved. He can perform his daily activities without significant physical disability. At present, the patient is under our regular follow-up.

Discussion: Myasthenia gravis is a potentially serious but treatable organ-specific autoimmune disorder characterised by weakness and fatigability of the voluntary muscles, caused by autoantibodies against the nicotinic acetylcholine receptor (AChR) on the postsynaptic membrane at the neuromuscular junction. 4,5 It was first described by Thomas Willis (1672), and more than two centuries later another patient with bulbar and limb muscle weakness who died of respiratory failure was reported. 6,7 There are various types of classification of Myasthenia gravis. It has been classified according to the age of onset, presence or absence of anti-AChR antibodies, severity, and the aetiology of the disease. Based on the age of onset, Myasthenia gravis can be classed as transient neonatal or adult autoimmune. Transient neonatal myasthenia gravis is due to transfer of maternal anti-AChR antibodies through the placenta to the newborn, reacting with the AChR of the neonate.

On the basis of presence or absence of anti-AChR antibodies, Myasthenia gravis can be classed as seropositive or seronegative. Seropositive is the commonest type of acquired autoimmune myasthenia gravis: up to 85% of patients with generalised myasthenia and 50%-60% with ocular myasthenia gravis test positive for anti-AChR antibodies by radioimmunoassay. The remaining 10%-20% of patients with acquired myasthenia gravis do not have anti-AChR antibodies detectable by radioimmunoassay. A subgroup of these patients have antibodies that bind to MuSK.
8 It has been proposed that the presence of antibodies against MuSK appears to define a subgroup of patients with seronegative myasthenia gravis who have predominantly localised, in many cases bulbar, muscle weakness, reduced response to conventional immunosuppressive treatments, and muscle wasting. 9 Essentially, seronegative myasthenia gravis is likely to be an autoimmune disorder involving antibodies against one or more components of the neuromuscular junction that are not detected by the current anti-AChR radioimmunoassay. Other distinct humoral factors also being implicated are IgG antibodies that reversibly inhibit AChR function and a non-IgG (possibly IgM) factor that indirectly inhibits AChR function. 10

Osserman's original classification divides adult myasthenia gravis into four groups based on the severity of the disease: 11 (I) Ocular myasthenia, where disease is confined to ocular muscles. But this classification has been modified by an ad hoc committee of the American Myasthenia Gravis Foundation, to standardise it for research purposes, into the following types: 12

(II) Mild weakness other than ocular muscles, with or without weakness of ocular muscles of any severity. IIa: predominant limb and/or axial involvement; IIb: predominantly oropharyngeal and/or respiratory involvement.

(III) Moderate weakness affecting muscles other than ocular muscles; may have ocular weakness. IIIa: predominant limb and/or axial involvement; IIIb: predominantly oropharyngeal and/or respiratory involvement.

(IV) Severe weakness affecting muscles other than ocular muscles; may have ocular weakness. IVa: predominant limb and/or axial involvement; IVb: predominantly oropharyngeal and/or respiratory involvement.

(V) Defined by intubation, with or without mechanical ventilation, except when employed during routine postoperative management. The use of a feeding tube without intubation places the patient in class IVb.

There are four classes based on the aetiology, which are acquired autoimmune; transient neonatal (caused by the passive transfer of maternal anti-AChR antibodies); drug induced (D-penicillamine, curare, aminoglycosides, quinine, procainamide, and calcium channel blockers); and congenital myasthenic syndromes (AChR deficiency, slow channel syndrome, and fast channel syndrome).
13 Our patient was categorized as having class IIa. But our patient had no diurnal variation of symptoms. His lack of energy, weakness and difficulty in swallowing food were actually the fatigue of MG. Being illiterate, the patient could not explain his symptoms properly, and the physicians also did not evaluate them properly. It has to be remembered that it is not uncommon for a patient with MG to exhibit symptoms even of depression. 14 The present case had atypical presentations initially, although we found ptosis during examination. He had no diplopia, which is a common complaint in MG, especially in the evening. Such atypical cases can be confused with polymyositis, inclusion body myopathy, stroke, and motor neuron disease. 7 Anti-MuSK-positive individuals tend to have more pronounced bulbar weakness and may have tongue and facial atrophy. They may have neck, shoulder and respiratory involvement without ocular weakness. They are also less likely to respond to acetylcholinesterase (AChE) inhibitors, and their symptoms may actually worsen with these medications. 18,19 Unfortunately, we could not perform any of these tests due to lack of facilities.

Electrodiagnostic studies can demonstrate a defect of neuromuscular transmission and can aid in diagnosis of myasthenia. The following two studies are commonly performed:

• Repetitive stimulation of a muscle at 2-3 Hz, also known as repetitive nerve stimulation (RNS)

This test has some fallacies and can give both false-negative and false-positive results. It has a low sensitivity in ocular MG; 50% of patients presenting with eye symptoms will be missed. On the other hand, diseases other than MG, such as amyotrophic lateral sclerosis (ALS) and cavernous sinus lesions, can score positive on the test. This test has been combined with electromyography (EMG) and ocular tonography to increase its sensitivity in ocular MG; however, it still produces false-negative and false-positive results. 20

The ice pack test (i.e., placing ice over the lid) has gained interest among ophthalmologists for assessing improvement in ptosis and diplopia in ocular MG. The rationale behind this test is that cooling might improve neuromuscular transmission. The validity of such a test has been questioned by various experts, who demonstrated that patients with ocular MG actually improve on the ice, heat, and modified sleep tests. Hence, rest might be the cause of the improvement in ocular signs. Both the ice test and the rest test are sensitive and specific in ocular MG. 21,22

Some other tests are recommended. Testing for rheumatoid factor and antinuclear antibodies (ANAs) is indicated to rule out systemic lupus erythematosus (SLE) and rheumatoid arthritis (RA), and thyroid function tests to rule out associated Graves disease or hyperthyroidism. This is essential, especially in patients with ocular MG, where concomitant hyperthyroidism is most frequent.

Chest X-ray, A/P and lateral views, may identify a thymoma as an anterior mediastinal mass. A negative chest radiograph does not rule out a smaller thymoma, in which case a chest computed tomography (CT) scan is required. A chest CT scan is mandatory to identify or rule out thymoma or thymic enlargement in all cases of MG. This is especially true in older individuals.

In our patient, though the chest X-ray did not initially give a clinical impression of thymoma, histopathological tests ultimately ruled in favour of a thymoma.
Thymoma is a rare tumour which may be related to MG in about 10 to 15% of cases. 3 As our patient was a middle-aged male, we suspected a thymoma, even though imaging reports were normal. Ultimately the biopsy report confirmed the presence of thymoma. A thymoma, which is an epithelial tumor of the thymus gland that is usually benign, occurs in about 10 to 15% of adult patients with MG, and about 30% of patients with thymoma have associated MG. 3 Thymoma is a rare disease with an incidence rate of 32/1,000,000/year. 4,5 Although thymectomy may improve the myasthenic symptoms, MG can develop from months to years after the removal of a thymoma in previously non-myasthenic patients.

The histological classification of thymoma has remained a subject of controversy for many years. [7][8][9] The major histological classification of thymoma by the WHO is:

A - Medullary
AB - Mixed type
B1 - Predominantly cortical
B2 - Cortical
B3 - Well-differentiated thymic carcinoma
C - Undifferentiated carcinoma

The cardinal symptom of MG is abnormal fatigue of the muscles. Movement, although initially strong, rapidly weakens. Intensification of symptoms occurs towards the end of the day or following exercise. Ocular motor disturbances (50-66%) are the first symptoms, with ptosis or diplopia, but ultimately ocular involvement is present in >90% of cases. 10 Bulbar presentation occurs in 5-10% of cases but is ultimately present in 80% of cases. Initial presentation with limb weakness is uncommon (10%). In the limbs, the shoulder girdle is most commonly affected, with difficulty with overhead tasks. Sometimes the pelvic girdle (difficulty with getting out of chairs) may be affected. The respiratory muscles may be involved, with respiratory paralysis and death. A patient requiring mechanical ventilation due to severe respiratory weakness is said to be in crisis. 30 Ptosis may be unilateral or bilateral. Patients with mild diplopia may initially seek the help of an ophthalmologist. Myasthenic weakness may mimic third, fourth, and sixth cranial nerve palsies as well as an internuclear ophthalmoplegia. MG never affects pupillary function. Difficulty in chewing, speaking, or swallowing may also be the cause for initial presentation, but the occurrence of these symptoms is less frequent than the aforementioned ocular symptoms. 12 These patients usually present to an ENT specialist. The symptoms of MG are worsened at the end of the day or after repetitive activities of involved muscles. Examination of a patient with MG therefore is directed at muscle strength and demonstrating pathologic fatigability. A few maneuvers that may be used are having the patient look up for several minutes (examining for ptosis or extraocular weakness), counting aloud to 100 (listening for nasal or slurred speech), or repetitively testing the proximal muscles. 3,14 The results for the remainder of the neurologic examination are usually normal. A key point to remember is that if a patient has generalized limb weakness without ocular involvement, the diagnosis of MG should be questioned. 14 Clinical data suggest that patients with thymoma-associated MG have high-grade symptoms with a low rate of remission even after therapy. 30 But our patient had mild symptoms despite having thymoma and responded well after taking a low dose of pyridostigmine. Although improvement after thymectomy is usually delayed, our patient had clinical improvement within weeks after thymectomy.
Conclusion: It is not difficult to diagnose a case of MG with typical symptoms. But it may be very difficult when symptoms are bizarre and vague. A patient may remain undiagnosed despite consulting physicians repeatedly. A high degree of clinical suspicion is therefore necessary to diagnose these cases. Thus, meticulous history taking, clinical examination and relevant laboratory investigations are required to rule out these possibilities. Although patients with thymoma-associated MG generally have high-grade symptoms with a low rate of remission after therapy, a patient may have milder symptoms with rapid improvement after therapy.

Fig.-2: Thymoma type AB

16 Anti-AChR antibody results are positive in as many as 90% of patients who have generalized MG but in only 50-70% of those who have only ocular MG. But false negatives can occur in cases of purely ocular MG, and false-positive anti-AChR Ab test results have been reported in cases of thymoma without MG, in patients with Lambert-Eaton myasthenic syndrome, small cell lung cancer, or rheumatoid arthritis treated with penicillamine, and in a small group of the population older than 70 years. 16 Anti-striated-muscle (anti-SM) Ab is present in about 84% of patients with thymoma who are younger than 40 years, but less commonly in those without thymoma. So, a positive test result necessitates a search for thymoma in patients younger than 40 years. In individuals older than 40 years, thymoma can present without anti-SM Ab. Patients with negative results for anti-AChR Ab (seronegative MG) can have positive test results for antibody to muscle-specific kinase (MuSK), a receptor tyrosine kinase.
The Graphical Lasso: New Insights and Alternatives

The graphical lasso [Friedman et al., 2007] is an algorithm for learning the structure in an undirected Gaussian graphical model, using ℓ1 regularization to control the number of zeros in the precision matrix Θ = Σ⁻¹ [Banerjee et al., 2008, Yuan and Lin, 2007]. The R package glasso [Friedman et al., 2007] is popular, fast, and allows one to efficiently build a path of models for different values of the tuning parameter. Convergence of glasso can be tricky; the converged precision matrix might not be the inverse of the estimated covariance, and occasionally it fails to converge with warm starts. In this paper we explain this behavior, and propose new algorithms that appear to outperform glasso. By studying the "normal equations" we see that glasso is solving the dual of the graphical lasso penalized likelihood, by block coordinate ascent; a result which can also be found in Banerjee et al. [2008]. In this dual, the target of estimation is Σ, the covariance matrix, rather than the precision matrix Θ. We propose similar primal algorithms p-glasso and dp-glasso, that also operate by block-coordinate descent, where Θ is the optimization target. We study all of these algorithms, and in particular different approaches to solving their coordinate sub-problems. We conclude that dp-glasso is superior from several points of view.

Introduction

Consider a data matrix X_{n×p}, a sample of n realizations from a p-dimensional Gaussian distribution with zero mean and positive definite covariance matrix Σ. The task is to estimate the unknown Σ based on the n samples, a challenging problem especially when n ≪ p, when the ordinary maximum likelihood estimate does not exist. Even if it does exist (for p ≤ n), the MLE is often poorly behaved, and regularization is called for. The graphical lasso [Friedman et al., 2007] is a regularization framework for estimating the covariance matrix Σ, under the assumption that its inverse Θ = Σ⁻¹ is sparse [Banerjee et al., 2008, Yuan and Lin, 2007, Meinshausen and Bühlmann, 2006]. Θ is called the precision matrix; if an element θ_jk = 0, this implies that the corresponding variables X_j and X_k are conditionally independent, given the rest. Our algorithms focus either on the restricted version of Θ or its inverse W = Θ⁻¹. The graphical lasso problem minimizes an ℓ1-regularized negative log-likelihood:

minimize_{Θ≻0} f(Θ) := −log det(Θ) + tr(SΘ) + λ‖Θ‖₁.  (1)

Here S is the sample covariance matrix, ‖Θ‖₁ denotes the sum of the absolute values of Θ, and λ is a tuning parameter controlling the amount of ℓ1 shrinkage. This is a semidefinite programming problem (SDP) in the variable Θ [Boyd and Vandenberghe, 2004].

In this paper we revisit the glasso algorithm proposed by Friedman et al. [2007] for solving (1); we analyze its properties, expose problems and issues, and propose alternative algorithms more suitable for the task. Some of the results and conclusions of this paper can be found in Banerjee et al. [2008], both explicitly and implicitly. We re-derive some of the results and derive new results, insights and algorithms, using a unified and more elementary framework.

Notation

We denote the entries of a matrix A_{n×n} by a_ij. ‖A‖₁ denotes the sum of its absolute values, ‖A‖∞ the maximum absolute value of its entries, ‖A‖_F is its Frobenius norm, and abs(A) is the matrix with elements |a_ij|. For a vector u ∈ ℝ^q, ‖u‖₁ denotes the ℓ1 norm, and so on.

From now on, unless otherwise specified, we will assume that λ > 0.
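As a concrete reference point, here is a minimal R sketch (R being the language of the glasso implementation discussed throughout) that evaluates the criterion in (1) for a candidate Θ. The function name is ours, and we assume dense numeric matrices; this is a sketch, not the package's code.

```r
# Evaluate the graphical lasso criterion (1):
#   f(Theta) = -log det(Theta) + tr(S Theta) + lambda * ||Theta||_1.
# Assumes Theta is symmetric positive definite and S is the sample covariance.
graphical_lasso_objective <- function(Theta, S, lambda) {
  log_det <- as.numeric(determinant(Theta, logarithm = TRUE)$modulus)
  # For symmetric matrices, tr(S Theta) equals the elementwise sum of S * Theta.
  -log_det + sum(S * Theta) + lambda * sum(abs(Theta))
}
```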
2 Review of the glasso algorithm

We use the framework of "normal equations" as in Hastie et al. [2009] and Friedman et al. [2007]. Using sub-gradient notation, we can write the optimality conditions (aka "normal equations") for a solution to (1) as

−Θ⁻¹ + S + λΓ = 0,  (2)

where Γ is a matrix of component-wise signs of Θ:

γ_jk = sign(θ_jk) if θ_jk ≠ 0;  γ_jk ∈ [−1, 1] if θ_jk = 0

(we use the notation γ_jk ∈ Sign(θ_jk)). Since the global stationary conditions of (2) require θ_jj to be positive, this implies that

w_jj = s_jj + λ,  j = 1, …, p,

where W = Θ⁻¹.

glasso uses a block-coordinate method for solving (2). Consider a partitioning of Θ and Γ:

Θ = ( Θ₁₁  θ₁₂ ; θ₂₁  θ₂₂ ),  Γ = ( Γ₁₁  γ₁₂ ; γ₂₁  γ₂₂ ),  (5)

where Θ₁₁ is (p−1)×(p−1), θ₁₂ is (p−1)×1 and θ₂₂ is scalar. W and S are partitioned the same way. Using properties of inverses of block-partitioned matrices, observe that W = Θ⁻¹ can be written in two equivalent forms:

W₁₁ = (Θ₁₁ − θ₁₂θ₂₁/θ₂₂)⁻¹,  w₁₂ = −W₁₁θ₁₂/θ₂₂;  (6)
w₁₂ = −Θ₁₁⁻¹θ₁₂ w₂₂,  w₂₂ = 1/(θ₂₂ − θ₂₁Θ₁₁⁻¹θ₁₂).  (7)

glasso solves for a row/column of (2) at a time, holding the rest fixed. Considering the pth column of (2), we get

−w₁₂ + s₁₂ + λγ₁₂ = 0.  (10)

The glasso algorithm solves (10) for β = θ₁₂/θ₂₂; substituting w₁₂ = −W₁₁θ₁₂/θ₂₂ from (6), that is

W₁₁β + s₁₂ + λγ₁₂ = 0,  (13)

where γ₁₂ ∈ Sign(β), since θ₂₂ > 0. (13) is the stationarity equation for the following ℓ1 regularized quadratic program:

minimize_{β ∈ ℝ^{p−1}}  ½βᵀW₁₁β + βᵀs₁₂ + λ‖β‖₁,  (14)

where W₁₁ ≻ 0 is assumed to be fixed. This is analogous to a lasso regression problem of the last variable on the rest, except the cross-product matrix S₁₁ is replaced by its current estimate W₁₁. This problem itself can be solved efficiently using elementwise coordinate descent, exploiting the sparsity in β. From β̂, it is easy to obtain ŵ₁₂ from (9) (namely ŵ₁₂ = −W₁₁β̂). Using the lower-right element of (6), θ̂₂₂ is obtained by

1/θ̂₂₂ = ŵ₂₂ + ŵ₂₁β̂.  (15)

Finally, θ̂₁₂ can now be recovered from β̂ and θ̂₂₂. Notice, however, that having solved for β̂ and updated ŵ₁₂, glasso can move on to the next block; disentangling θ̂₁₂ and θ̂₂₂ can be done at the end, when the algorithm over all blocks has converged. The glasso algorithm is outlined in Algorithm 1. We show in Lemma 3 in Section 8 that the successive updates in glasso keep W positive definite.

Algorithm 1 glasso algorithm
1. Initialize W = S + λI.
2. Cycle around the columns repeatedly, performing the following steps till convergence:
(a) Rearrange the rows/columns so that the target column is last (implicitly).
(b) Solve the lasso problem (14), using as warm starts the solution from the previous round for this column.
(c) Update the row/column of the covariance matrix using ŵ₁₂ = −W₁₁β̂.
(d) Save β̂ for this column in the matrix B.
3. Finally, for every row/column, compute the diagonal entries θ̂_jj using (15), and convert the B matrix to Θ.

Figure 1 (left panel, black curve) plots the objective f(Θ^(k)) for the sequence of solutions produced by glasso on an example. Surprisingly, the curve is not monotone decreasing, as confirmed by the middle plot. If glasso were solving (1) by block coordinate-descent, we would not anticipate this behavior.

A closer look at steps (9) and (10) of the glasso algorithm leads to the following observations:

(a) We wish to solve (8) for θ₁₂. However θ₁₂ is entangled in W₁₁, which is (incorrectly) treated as a constant.
(b) After updating θ₁₂, we see from (7) that the entire (working) covariance matrix W changes. glasso however updates only w₁₂ and w₂₁.

These two observations explain the non-monotone behavior of glasso in minimizing f(Θ). Section 3 shows a corrected block-coordinate descent algorithm for Θ, and Section 4 shows that the glasso algorithm is actually optimizing the dual of problem (1), with the optimization variable being W.
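To make the inner loop of Algorithm 1 concrete, here is a hedged R sketch of elementwise coordinate descent for a problem of the form (14), minimize_β ½βᵀW₁₁β + βᵀs₁₂ + λ‖β‖₁. The soft-thresholding update below is the standard one for this objective; the function names and convergence controls are ours, not glasso's FORTRAN internals.

```r
soft_threshold <- function(x, t) sign(x) * max(abs(x) - t, 0)

# Cyclical coordinate descent for:
#   0.5 * t(b) %*% W11 %*% b + t(b) %*% s12 + lambda * ||b||_1.
# Coefficients that are zero and stay zero cost almost nothing per sweep,
# which is why this solver is fast in the sparse (large lambda) regime.
lasso_cd <- function(W11, s12, lambda, b = rep(0, length(s12)),
                     tol = 1e-8, max_sweeps = 500) {
  for (sweep in seq_len(max_sweeps)) {
    change <- 0
    for (j in seq_along(b)) {
      g <- s12[j] + sum(W11[j, -j] * b[-j])   # partial gradient with b[j] held at 0
      bj <- soft_threshold(-g, lambda) / W11[j, j]
      change <- max(change, abs(bj - b[j]))
      b[j] <- bj
    }
    if (change < tol) break
  }
  b
}
```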
3 A Corrected glasso block coordinate-descent algorithm

Recall that (12) is a variant of (10), where the dependence of the covariance sub-matrix W₁₁ on θ₁₂ is made explicit through the identities in (7). With α = θ₁₂w₂₂ (with w₂₂ > 0 fixed) and Θ₁₁ ≻ 0, (12) is equivalent to the stationary condition for

minimize_{α ∈ ℝ^{p−1}}  ½αᵀΘ₁₁⁻¹α + αᵀs₁₂ + λ‖α‖₁.  (16)

If α̂ is the minimizer of (16), then θ̂₁₂ = α̂/w₂₂. To complete the optimization for the entire row/column we need to update θ₂₂. This follows simply from (7), with w₂₂ = s₂₂ + λ:

θ̂₂₂ = 1/ŵ₂₂ + θ̂₂₁Θ₁₁⁻¹θ̂₁₂.

To solve (16) we need Θ₁₁⁻¹ for each block update. We achieve this by maintaining W = Θ⁻¹ as the iterations proceed. Then for each block

• Θ₁₁⁻¹ is obtained from W via the identity Θ₁₁⁻¹ = W₁₁ − w₁₂w₂₁/w₂₂;
• once θ₁₂ is updated, the entire working covariance matrix W is updated (in particular the portions W₁₁ and w₁₂), via the identities in (7), using the known Θ₁₁⁻¹.

Both these steps are simple rank-one updates with a total cost of O(p²) operations. We refer to this as the primal graphical lasso, or p-glasso, which we present in Algorithm 2.

The p-glasso algorithm requires slightly more work than glasso, since an additional O(p²) operations have to be performed before and after each block update. In return we have that after every row/column update, Θ and W are positive definite (for λ > 0) and ΘW = I_p.

Algorithm 2 p-glasso algorithm
1. Initialize W = S + λI and Θ = W⁻¹.
2. Cycle around the columns repeatedly, performing the following steps till convergence:
(a) Rearrange the rows/columns so that the target column is last (implicitly).
(b) Compute Θ₁₁⁻¹ = W₁₁ − w₁₂w₂₁/w₂₂.
(c) Solve (16) for α̂, and update θ̂₁₂ = α̂/w₂₂ and θ̂₂₂ = 1/ŵ₂₂ + θ̂₂₁Θ₁₁⁻¹θ̂₁₂.
(d) Update W using the identities in (7).

4 What is glasso actually solving?

Building upon the framework developed in Section 2, we now proceed to establish that glasso solves the convex dual of problem (1), by block coordinate ascent. We reach this conclusion via elementary arguments, closely aligned with the framework we developed in Section 2. The approach we present here is intended for an audience without much familiarity with convex duality theory [Boyd and Vandenberghe, 2004].

Figure 1 illustrates that glasso is an ascent algorithm on the dual of problem (1). The red curve in the left plot shows the dual objective rising monotonely, and the rightmost plot shows that the increments are indeed positive. There is an added twist though: in solving the block-coordinate update, glasso solves instead the dual of that subproblem.

Dual of the ℓ1 regularized log-likelihood

We present below the following lemma, the conclusion of which also appears in Banerjee et al. [2008], but we use the framework developed in Section 2.

Lemma 1. The Lagrange dual of problem (1) is the box-constrained problem

maximize_{Γ: ‖Γ‖∞ ≤ λ}  log det(S + Γ) + p.  (19)

Notice that for the dual, the optimization variable is Γ, with S + Γ = Θ⁻¹ = W. In other words, the dual problem solves for W rather than Θ, a fact that is suggested by the glasso algorithm.

Remark 1. The equivalence of the solutions to problems (19) and (1) as described above can also be derived via convex duality theory [Boyd and Vandenberghe, 2004], which shows that (19) is a dual function of the ℓ1 regularized negative log-likelihood (1). Strong duality holds; hence the optimal solutions of the two problems coincide [Banerjee et al., 2008].
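Stepping back to the bookkeeping used by p-glasso in Section 3: the identities in (7) allow W = Θ⁻¹ to be refreshed after a row/column update of Θ at O(p²) cost. Here is a minimal R sketch of that refresh, assuming the inverse Θ₁₁⁻¹ of the fixed block is at hand; the function name is ours.

```r
# Refresh the last row/column (and the W11 block) of W = Theta^{-1} after
# theta12 / theta22 change, using the partitioned-inverse identities (7).
refresh_W_block <- function(Theta11_inv, theta12, theta22) {
  u <- as.vector(Theta11_inv %*% theta12)
  schur <- theta22 - sum(theta12 * u)       # Schur complement; positive when Theta is PD
  w22 <- 1 / schur
  w12 <- -u * w22
  W11 <- Theta11_inv + tcrossprod(u) * w22  # rank-one correction of Theta11^{-1}
  list(W11 = W11, w12 = w12, w22 = w22)
}
```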
We now consider solving (22) for the last block γ₁₂ (excluding the diagonal), holding the rest of Γ fixed. The corresponding equations are (23). The only non-trivial translation is the θ₁₂ in the first equation. We must express this in terms of the optimization variable γ₁₂. Since s₁₂ + γ̂₁₂ = ŵ₁₂, using the identities in (6), we have W₁₁⁻¹(s₁₂ + γ̂₁₂) = −θ₁₂/θ₂₂. Since θ₂₂ > 0, we can redefine p̃₁₂ = p₁₂/θ₂₂, to get (24).

The following lemma shows that a block update of glasso solves (24) (and hence (23)), a block of stationary conditions for the dual of the graphical lasso problem. Curiously, glasso does this not directly, but by solving the dual of the QP corresponding to this block of equations.

Note that the QP (27) is a (partial) optimization over the variable w₁₂ only (since s₁₂ is fixed); the sub-matrix W₁₁ remains fixed in the QP. Exactly one row/column of W changes when the block-coordinate algorithm of glasso moves to a new row/column, unlike an explicit full matrix update in W₁₁, which is required if θ₁₂ is updated. This again emphasizes that glasso is operating on the covariance matrix instead of Θ. We thus arrive at the following conclusion:

Theorem 1. glasso performs block-coordinate ascent on the box-constrained SDP (19), the Lagrange dual of the primal problem (1). Each of the block steps is itself a box-constrained QP, which glasso optimizes via its Lagrange dual.

In our annotation perhaps glasso should be called dd-glasso, since it performs dual block updates for the dual of the graphical lasso problem. Banerjee et al. [2008], the paper that inspired the original glasso article [Friedman et al., 2007], also operates on the dual. They however solve the block-updates directly (which are box-constrained QPs) using interior-point methods.

5 A New Algorithm: dp-glasso

In Section 3, we described p-glasso, a primal coordinate-descent method. For every row/column we need to solve a lasso problem (16), which operates on a quadratic form corresponding to the square matrix Θ₁₁⁻¹. There are two problems with this approach:

• the matrix Θ₁₁⁻¹ needs to be constructed at every row/column update, with complexity O(p²);
• Θ₁₁⁻¹ is dense.

We now show how a simple modification of the ℓ1-regularized QP leads to a box-constrained QP with attractive computational properties.

The KKT optimality conditions for (16), following (12), can be written as

Θ₁₁⁻¹α + s₁₂ + λγ₁₂ = 0, with γ₁₂ ∈ Sign(α).  (31)

Along the same lines of the derivations used in Lemma 2, the condition above is equivalent to the conditions (32), for some vector q̂₁₂ with non-negative entries. (32) are the KKT optimality conditions for the following box-constrained QP:

minimize_{γ}  ½(s₁₂ + γ)ᵀΘ₁₁(s₁₂ + γ)  subject to ‖γ‖∞ ≤ λ.  (33)

The optimal solutions of (33) and (31) are related by

α̂ = −Θ₁₁(s₁₂ + γ̂),  (34)

a consequence of (31), with α = θ̂₁₂ · w₂₂ and w₂₂ = s₂₂ + λ. The diagonal θ₂₂ of the precision matrix is updated via (7):

θ̂₂₂ = (1 − θ̂₂₁(s₁₂ + γ̂)) / ŵ₂₂.  (35)

Algorithm 3 dp-glasso algorithm
1. Initialize Θ = diag(S + λI)⁻¹.
2. Cycle around the columns repeatedly, performing the following steps till convergence:
(a) Rearrange the rows/columns so that the target column is last (implicitly).
(b) Solve the box-constrained QP (33) for γ̂, using as warm start the solution from the previous round for this column, and update θ̂₁₂ = α̂/ŵ₂₂ via (34), with ŵ₂₂ = s₂₂ + λ.
(c) Update θ̂₂₂ using (35).

By strong duality, the box-constrained QP (33), with its optimality conditions (32), is equivalent to the lasso problem (16). Now both of the problems listed at the beginning of the section are removed: the problem matrix Θ₁₁ is sparse, and no O(p²) updating is required after each block.
The solutions returned at step 2(b) for θ̂₁₂ need not be exactly sparse, even though it purports to produce the solution to the primal block problem (16), which is sparse. One needs to use a tight convergence criterion when solving (33). In addition, one can threshold those elements of θ̂₁₂ for which γ̂ is away from the box boundary, since those values are known to be zero.

Note that dp-glasso does to the primal formulation (1) what glasso does to the dual. dp-glasso operates on the precision matrix, whereas glasso operates on the covariance matrix.

6 Computational Costs in Solving the Block QPs

The ℓ1 regularized QPs appearing in (14) and (16) are of the generic form

minimize_{u ∈ ℝ^q}  ½uᵀAu + aᵀu + λ‖u‖₁,  (36)

for A ≻ 0. In this paper, we choose to use cyclical coordinate descent for solving (36), as it is used in the glasso algorithm implementation of Friedman et al. [2007]. Moreover, cyclical coordinate descent methods perform well with good warm-starts. These are available for both (14) and (16), since they both maintain working copies of the precision matrix, updated after every row/column update. There are other efficient ways for solving (36), capable of scaling to large problems, for example first-order proximal methods [Beck and Teboulle, 2009, Nesterov, 2007], but we do not pursue them in this paper.

The box-constrained QPs appearing in (27) and (33) are of the generic form

minimize_{v: ‖v‖∞ ≤ λ}  ½vᵀÃv + bᵀv,  (37)

for some Ã ≻ 0. As in the case above, we will use cyclical coordinate descent for optimizing (37).

In general it is more efficient to solve (36) than (37) for larger values of λ. This is because a large value of λ in (36) results in sparse solutions û; the coordinate descent algorithm can easily detect when a zero stays zero, and no further work gets done for that coordinate on that pass. If the solution to (36) has κ non-zeros, then on average κ coordinates need to be updated. This leads to a cost of O(qκ) for one full sweep across all the q coordinates.

On the other hand, a large λ for (37) corresponds to a weakly-regularized solution. Cyclical coordinate procedures for this task are not as effective. Every coordinate update of v results in updating the gradient, which requires adding a scalar multiple of a column of Ã. If Ã is dense, this leads to a cost of O(q), and for one full cycle across all the coordinates this costs O(q²), rather than the O(qκ) for (36). However, our experimental results show that dp-glasso is more efficient than glasso, so there are some other factors in play. When Ã is sparse, there are computational savings. If Ã has κq non-zeros, the cost per column reduces on average to O(κq) from O(q²). For the formulation (33), Ã is Θ₁₁, which is sparse for large λ. Hence for large λ, glasso and dp-glasso have similar costs.

For smaller values of λ, the box-constrained QP (37) is particularly attractive. Most of the coordinates in the optimal solution v̂ will pile up at the boundary points {−λ, λ}, which means that the coordinates need not be updated frequently. For problem (33) this number is also κ, the number of non-zero coefficients in the corresponding column of the precision matrix. If κ of the coordinates pile up at the boundary, then one full sweep of cyclical coordinate descent across all the coordinates will require updating gradients corresponding to the remaining q − κ coordinates. Using similar calculations as before, this will cost O(q(q − κ)) operations per full cycle (since for small λ, Ã will be dense). For the ℓ1 regularized problem (36), no such saving is achieved, and the cost is O(q²) per cycle.
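For completeness, a matching R sketch of cyclical coordinate descent for the box-constrained form (37) as written above. The coordinate minimizer of a convex quadratic over an interval is the unconstrained minimizer clipped to the box; the function name and controls are ours.

```r
# Cyclical coordinate descent for:
#   0.5 * t(v) %*% A %*% v + t(b) %*% v, subject to max(abs(v)) <= lambda,
# with A positive definite. Coordinates that pile up at the boundary
# {-lambda, lambda} need few updates, which drives the savings noted in Section 6.
box_qp_cd <- function(A, b, lambda, v = rep(0, length(b)),
                      tol = 1e-8, max_sweeps = 500) {
  for (sweep in seq_len(max_sweeps)) {
    change <- 0
    for (j in seq_along(v)) {
      g <- b[j] + sum(A[j, -j] * v[-j])               # gradient excluding v[j]
      vj <- min(max(-g / A[j, j], -lambda), lambda)   # minimize, then clip to the box
      change <- max(change, abs(vj - v[j]))
      v[j] <- vj
    }
    if (change < tol) break
  }
  v
}
```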
Note that to solve problem (1), we need to solve a QP of a particular type, (36) or (37), for a certain number of outer cycles (i.e., full sweeps across rows/columns). For every row/column update, the associated QP requires a varying number of iterations to converge. It is hard to characterize all these factors and come up with precise estimates of convergence rates of the overall algorithm. However, we have observed that with warm-starts, on a relatively dense grid of λs, the complexities given above are quite accurate for dp-glasso (with warm-starts), especially when one is interested in solutions of small/moderate accuracy. Our experimental results in Section 9.1 and Appendix Section B support our observation.

We will now have a more critical look at the updates of the glasso algorithm and study their properties.

7 glasso: Positive definiteness, Sparsity and Exact Inversion

As noted earlier, glasso operates on W; it does not explicitly compute the inverse W⁻¹. It does, however, keep track of the estimates for θ₁₂ after every row/column update. The copy of Θ retained by glasso along the row/column updates is not the exact inverse of the optimization variable W.

In many real-life problems one only needs an approximate solution to (1):

• for computational reasons it might be impractical to obtain a solution of high accuracy;
• from a statistical viewpoint it might be sufficient to obtain an approximate solution for Θ that is both sparse and positive definite.

It turns out that the glasso algorithm is not suited to this purpose. Since glasso is a block coordinate procedure on the covariance matrix, it maintains a positive definite covariance matrix at every row/column update. However, since the estimated precision matrix is not the exact inverse of W, it need not be positive definite. Although it is relatively straightforward to maintain an exact inverse of W along the row/column updates (via simple rank-one updates as before), this inverse W⁻¹ need not be sparse. Arbitrary thresholding rules may be used to set some of the entries to zero, but that might destroy the positive-definiteness of the matrix. Since a principal motivation of solving (1) is to obtain a sparse precision matrix (which is also positive definite), returning a dense W⁻¹ as the solution to (1) is not desirable.

Figure 2 illustrates the above observations on a typical example.

The dp-glasso algorithm operates on the primal (1). Instead of optimizing the ℓ1 regularized QP (16), which requires computing Θ₁₁⁻¹, dp-glasso optimizes (33). After every row/column update the precision matrix Θ is positive definite. The working covariance matrix maintained by dp-glasso via ŵ₁₂ := s₁₂ + γ̂ need not be the exact inverse of Θ. Covariance matrix estimates, if required, can be obtained by tracking Θ⁻¹ via simple rank-one updates, as described earlier.

Unlike glasso, dp-glasso (and p-glasso) return a sparse and positive definite precision matrix even if the row/column iterations are terminated prematurely.

8 Warm Starts and Path-seeking Strategies

Since we seldom know in advance a good value of λ, we often compute a sequence of solutions to (1) for a (typically) decreasing sequence of values λ₁ > λ₂ > ⋯ > λ_K.
Warm-start or continuation methods use the solution at λᵢ as an initial guess for the solution at λᵢ₊₁, and often yield great efficiency. It turns out that for algorithms like glasso which operate on the dual problem, not all warm-starts necessarily lead to a convergent algorithm. We address this aspect in detail in this section.

The following lemma states the conditions under which the row/column updates of the glasso algorithm will maintain positive definiteness of the covariance matrix W.

Lemma 3. Suppose Z is used as a warm-start for the glasso algorithm. If Z ≻ 0 and ‖Z − S‖∞ ≤ λ, then every row/column update of glasso maintains positive definiteness of the working covariance matrix W.

Proof. Recall that glasso solves the dual (19). Assume Z is partitioned as in (5), and the pth row/column is being updated. Since Z ≻ 0, we have both Z₁₁ ≻ 0 and z₂₂ − z₂₁(Z₁₁)⁻¹z₁₂ > 0. Since Z₁₁ remains fixed, it suffices to show that after the row/column update, the expression ŵ₂₂ − ŵ₂₁(Z₁₁)⁻¹ŵ₁₂ remains positive. Recall that, via standard optimality conditions, we have ŵ₂₂ = s₂₂ + λ, and that ŵ₁₂ solves the QP (27), for which z₁₂ is feasible (since ‖z₁₂ − s₁₂‖∞ ≤ λ); hence ŵ₂₁(Z₁₁)⁻¹ŵ₁₂ ≤ z₂₁(Z₁₁)⁻¹z₁₂. Combining the above along with the fact that ŵ₂₂ ≥ z₂₂, we see that ŵ₂₂ − ŵ₂₁(Z₁₁)⁻¹ŵ₁₂ ≥ z₂₂ − z₂₁(Z₁₁)⁻¹z₁₂ > 0, which implies that the new covariance estimate W̃ ≻ 0.

Remark 3. If the condition ‖Z − S‖∞ ≤ λ appearing in Lemma 3 is violated, then the row/column update of glasso need not maintain PD of the covariance matrix W. We have encountered many counter-examples that show this to be true; see the discussion below.

The R package implementation of glasso allows the user to specify a warm-start as a tuple (Θ₀, W₀). This option is typically used in the construction of a path algorithm.

If (Θ̂_λ, Ŵ_λ) is provided as a warm-start for λ′ < λ, then the glasso algorithm is not guaranteed to converge. It is easy to find numerical examples by choosing the gap λ − λ′ to be large enough. Among the various examples we encountered, we briefly describe one here. Details of the experiment/data and other examples can be found in the online Appendix A.1. We generated a data-matrix X_{n×p}, with n = 2, p = 5, with iid standard Gaussian entries. S is the sample covariance matrix. We solved problem (1) using glasso for λ = 0.9 × max_{i≠j} |s_ij|. We took the estimated covariance and precision matrices Ŵ_λ and Θ̂_λ as a warm-start for the glasso algorithm with λ′ = λ × 0.01. The glasso algorithm failed to converge with this warm-start. We note that ‖Ŵ_λ − S‖∞ = 0.0402 > λ′ (hence violating the sufficient condition in Lemma 3), and after updating the first row/column via the glasso algorithm we observed that the "covariance matrix" W had negative eigen-values, leading to a non-convergent algorithm. The above phenomenon is not surprising and is easy to explain and generalize. Since Ŵ_λ solves the dual (19), it is necessarily of the form Ŵ_λ = S + Γ̂, for ‖Γ̂‖∞ ≤ λ. In the light of Lemma 3 and also Remark 3, the warm-start needs to be dual-feasible in order to guarantee that the iterates W remain PD, and hence for the sub-problems to be well-defined convex programs. Clearly Ŵ_λ does not satisfy the box-constraint ‖Ŵ_λ − S‖∞ ≤ λ′ for λ′ < λ. However, in practice the glasso algorithm is usually seen to converge (numerically) when λ′ is quite close to λ.
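In practice one can test the sufficient condition of Lemma 3 before handing a warm start to glasso. A small R helper to that effect (the function name is ours), assuming symmetric numeric matrices:

```r
# Check the warm-start condition of Lemma 3: Z must be positive definite
# and dual feasible at level lambda, i.e. max |Z - S| <= lambda entrywise.
is_safe_glasso_warm_start <- function(Z, S, lambda, eps = 1e-10) {
  min_eig <- min(eigen(Z, symmetric = TRUE, only.values = TRUE)$values)
  (min_eig > eps) && (max(abs(Z - S)) <= lambda)
}
```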
The following lemma establishes that any PD matrix can be taken as a warm-start for p-glasso or dp-glasso to ensure a convergent algorithm.

Lemma 4. Suppose Φ ≻ 0 is used as a warm-start for the p-glasso (or dp-glasso) algorithm. Then every row/column update of p-glasso (or dp-glasso) maintains positive definiteness of the working precision matrix Θ.

Proof. Note that the block Φ₁₁ remains fixed; only the pth row/column of Θ changes: φ₂₁ gets updated to θ̂₂₁, as does θ̂₁₂. From (7) the updated diagonal entry θ̂₂₂ satisfies θ̂₂₂ − θ̂₂₁(Φ₁₁)⁻¹θ̂₁₂ = 1/ŵ₂₂ = 1/(s₂₂ + λ) > 0. Thus the updated matrix Θ remains PD. The result for the dp-glasso algorithm follows, since both versions, p-glasso and dp-glasso, solve the same block coordinate problem.

Remark 4. A simple consequence of Lemmas 3 and 4 is that the QPs arising in the process, namely the ℓ1 regularized QPs (14), (16) and the box-constrained QPs (27) and (33), are all valid convex programs, since all the respective matrices W₁₁, Θ₁₁⁻¹ and W₁₁⁻¹, Θ₁₁ appearing in the quadratic forms are PD.

As exhibited in Lemma 4, both the algorithms dp-glasso and p-glasso are guaranteed to converge from any positive-definite warm start. This is due to the unconstrained formulation of the primal problem (1).

glasso really only requires an initialization for W, since it constructs Θ on the fly. Likewise, dp-glasso only requires an initialization for Θ. Having the other half of the tuple assists in the block-updating algorithms. For example, glasso solves a series of lasso problems, where the entries of Θ play the role of the parameters. By supplying Θ along with W, the block-wise lasso problems can be given starting values close to the solutions. The same applies to dp-glasso. In neither case do the pairs have to be inverses of each other to serve this purpose.

If we wish to start with inverse pairs, and maintain such a relationship, we have described earlier how O(p²) updates after each block optimization can achieve this. One caveat for glasso is that starting with an inverse pair costs O(p³) operations, since we typically start with W = S + λI. For dp-glasso, we typically start with a diagonal matrix, which is trivial to invert.

9 Experimental Results & Timing Comparisons

We compared the performances of the algorithms glasso and dp-glasso (both with and without warm-starts) on different examples with varying (n, p) values. While most of the results are presented in this section, some are relegated to the online Appendix B. Section 9.1 describes some synthetic examples and Section 9.2 presents comparisons on a real-life micro-array data-set.

9.1 Synthetic Experiments

In this section we present examples generated from two different covariance models, as characterized by the covariance matrix Σ or equivalently the precision matrix Θ. We create a data matrix X_{n×p} by drawing n independent samples from a p-dimensional normal distribution MVN(0, Σ). The sample covariance matrix is taken as the input S to problem (1). The two covariance models are described below:

Type-1: The population concentration matrix Θ = Σ⁻¹ has uniform sparsity, with approximately 77% of the entries zero. We created the covariance matrix as follows. We generated a matrix B with iid standard Gaussian entries, symmetrized it via ½(B + B′), and set approximately 77% of the entries of this matrix to zero, to obtain B̃ (say). We added a scalar multiple of the p-dimensional identity matrix to B̃ to get the precision matrix Θ = B̃ + ηI_{p×p}, with η chosen such that the minimum eigen-value of Θ is one.

Type-2: This example, taken from Yuan and Lin [2007], is an auto-regressive process of order two, with a tri-diagonal precision matrix.

For each of the two set-ups, Type-1 and Type-2, we consider twelve different combinations of (n, p). For every (n, p) we solved (1) on a grid of twenty λ values linearly spaced in the log-scale, with λᵢ = 0.8^i × (0.9 λ_max), i = 1, …, 20,
where λ_max = max_{i≠j} |s_ij| is the off-diagonal entry of S with largest absolute value. λ_max is the smallest value of λ for which the solution to (1) is a diagonal matrix.

Since this article focuses on the glasso algorithm, its properties, and alternatives that stem from the main idea of block-coordinate optimization, we present here the performances of the following algorithms:

Dual-Cold: glasso with initialization W = S + λI_{p×p}, as suggested in Friedman et al. [2007].

Dual-Warm: The path-wise version of glasso with warm-starts, as suggested in Friedman et al. [2007]. Although this path-wise version need not converge in general, this was not a problem in our experiments, probably due to the fine grid of λ values.

Primal-Cold: dp-glasso with diagonal initialization Θ = diag(S + λI)⁻¹.

Primal-Warm: The path-wise version of dp-glasso with warm-starts.

We did not include p-glasso in the comparisons above since p-glasso requires additional matrix rank-one updates after every row/column update, which makes it more expensive. None of the above listed algorithms require matrix inversions (via rank-one updates). Furthermore, dp-glasso and p-glasso are quite similar, as both perform a block coordinate optimization with Θ as the target. Hence we only included dp-glasso in our comparisons. We used our own implementation of the glasso and dp-glasso algorithms in R. The entire program is written in R, except the inner block-update solvers, which are the real work-horses:

• For glasso we used the lasso code crossProdLasso, written in FORTRAN by Friedman et al. [2007];
• For dp-glasso we wrote our own FORTRAN code to solve the box QP.

An R package implementing dp-glasso will be made available on CRAN.

In the figure and tables that follow below, for every algorithm, at a fixed λ we report the total time taken by all the QPs (the ℓ1 regularized QP for glasso and the box-constrained QP for dp-glasso) till convergence. All computations were done on a Linux machine with model specs: Intel(R) Xeon(R) CPU 5160 @ 3.00GHz.

Convergence criterion: Since dp-glasso operates on the primal formulation and glasso operates on the dual, to make the convergence criteria comparable across examples we based it on the relative change in the primal objective values, i.e., f(Θ) of (1), across two successive iterations:

|f(Θ^(k)) − f(Θ^(k−1))| / |f(Θ^(k−1))| ≤ TOL,  (40)

where one iteration refers to a full sweep across p rows/columns of the precision matrix (for dp-glasso) and covariance matrix (for glasso), and TOL denotes the tolerance level or level of accuracy of the solution. To compute the primal objective value for glasso, the precision matrix is computed from W via direct inversion (the time taken for inversion and objective value computation is not included in the timing comparisons).

Computing the objective function is quite expensive relative to the computational cost of the iterations. In our experience, convergence criteria based on a relative change in the precision matrix for dp-glasso and the covariance matrix for glasso seemed to be a practical choice for the examples we considered. However, for the reasons we described above, we used criterion (40) in the experiments.
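The grid and stopping rule above are easy to reproduce. A short R sketch under our assumptions (helper names ours; S dense and symmetric):

```r
# Twenty lambda values spaced on the log scale as in Section 9.1:
# lambda_i = 0.8^i * (0.9 * lambda_max), lambda_max = max_{i != j} |s_ij|.
lambda_grid <- function(S, K = 20) {
  off <- abs(S)
  diag(off) <- 0
  0.8^seq_len(K) * (0.9 * max(off))
}

# Relative-change stopping rule (40) on the primal objective f of (1).
converged <- function(f_new, f_old, TOL = 1e-4) {
  abs(f_new - f_old) <= TOL * abs(f_old)
}
```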
Observations: Figure 4 presents the times taken by the algorithms to converge to an accuracy of TOL = 10⁻⁴ on a grid of λ values. The figure shows eight different scenarios with p > n, corresponding to the two different covariance models, Type-1 (left panel) and Type-2 (right panel). It is quite evident that dp-glasso with warm-starts (Primal-Warm) outperforms all the other algorithms across all the different examples. All the algorithms converge quickly for large values of λ (typically high sparsity) and become slower with decreasing λ. For large p and small λ, convergence is slow; however, for p > n, the non-sparse end of the regularization path is really not that interesting from a statistical viewpoint. Warm-starts apparently do not always help in speeding up the convergence of glasso; for example, see Figure 4 with (n, p) = (500, 1000) (Type-1) and (n, p) = (500, 800) (Type-2). This probably further validates the fact that warm-starts in the case of glasso need to be carefully designed in order for them to speed up convergence. Note, however, that glasso with the warm-starts prescribed is not even guaranteed to converge; we did not, though, come across any such instance among the experiments presented in this section.

Based on the suggestion of a referee, we annotated the plots in Figure 4 with locations in the regularization path that are of interest. For each plot, two vertical dotted lines are drawn which correspond to the λs at which the distance of the estimated precision matrix Θ̂_λ from the population precision matrix is minimized with respect to the ‖·‖₁ norm (green) and the ‖·‖_F norm (blue). The optimal λ corresponding to the ‖·‖₁ metric chooses sparser models than those chosen by ‖·‖_F; the performance gains achieved by dp-glasso seem to be more prominent for the latter λ.

Table 1 presents the timings for all four algorithmic variants on the twelve different (n, p) combinations listed above for Type-1. For every example, we report the total time till convergence on a grid of twenty λ values, for two different tolerance levels: TOL ∈ {10⁻⁴, 10⁻⁵}. Note that dp-glasso returns positive definite and sparse precision matrices even if the algorithm is terminated at a relatively small/moderate accuracy level; this is not the case for glasso. The rightmost column presents the proportion of non-zeros averaged across the entire path of solutions Θ̂_λ, where Θ̂_λ is obtained by solving (1) to a high precision, i.e., 10⁻⁶, by algorithms glasso and dp-glasso and averaging the results.

Again we see that in all the examples dp-glasso with warm-starts is the clear winner among its competitors. For a fixed p, the total time to trace out the path generally decreases with increasing n. There is no clear winner between glasso with warm-starts and glasso without warm-starts. It is often seen that dp-glasso without warm-starts converges faster than both variants of glasso (with and without warm-starts).

Table 2 reports the timing comparisons for Type-2. Once again we see that in all the examples Primal-Warm turns out to be the clear winner.

For n ≤ p = 1000, we observe that Primal-Warm is generally faster for Type-2 than for Type-1. This, however, is reversed for smaller values of p ∈ {800, 500}. Primal-Cold has a smaller overall computation time for Type-1 than for Type-2. In some cases (for example, n ≤ p = 1000), we see that Primal-Warm in Type-2 converges much faster relative to its competitors than in Type-1; this difference is due to the variations in the structure of the covariance matrix.

9.2 Micro-array Example

We consider the data-set introduced in Alon et al. [1999] and further studied in Rothman et al.
[2008] and Mazumder and Hastie [2012]. In this experiment, tissue samples were analyzed using an Affymetrix oligonucleotide array. The data was processed, filtered and reduced to a subset of 2000 gene expression values. The number of Colon Adenocarcinoma tissue samples is n = 62. For the purpose of the experiments presented in this section, we pre-screened the genes to a size of p = 725. We obtained this subset of genes using the idea of exact covariance thresholding introduced in our paper [Mazumder and Hastie, 2012]. We thresholded the sample correlation matrix obtained from the 62 × 2000 microarray data-matrix into connected components with a threshold of 0.003641; the genes belonging to the largest connected component formed our pre-screened gene pool of size p = 725. This (subset) data-matrix of size (n, p) = (62, 725) is used for our experiments.

The results presented below in Table 3 show timing comparisons of the four different algorithms, Primal-Warm/Cold and Dual-Warm/Cold, on a grid of fifteen λ values in the log-scale. Once again we see that Primal-Warm outperforms the others in terms of speed and accuracy. Dual-Warm performs quite well in this example.
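The pre-screening step can be reproduced along the following lines. This is a sketch under our assumptions (the igraph package for connected components; X is the 62 × 2000 expression matrix), not the authors' exact code:

```r
library(igraph)

# Exact covariance thresholding [Mazumder and Hastie, 2012]: threshold the
# sample correlation matrix and keep the genes in the largest connected component.
prescreen_genes <- function(X, threshold = 0.003641) {
  R <- abs(cor(X))                 # p x p absolute sample correlations
  A <- (R >= threshold) * 1
  diag(A) <- 0
  g <- graph_from_adjacency_matrix(A, mode = "undirected")
  cl <- components(g)
  which(cl$membership == which.max(cl$csize))   # indices of the largest component
}
```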
10 Conclusions

This paper explores some of the apparent mysteries in the behavior of the glasso algorithm introduced in Friedman et al. [2007]. These have been explained by leveraging the fact that the glasso algorithm is solving the dual of the graphical lasso problem (1), by block coordinate ascent. Each block update, itself the solution to a convex program, is solved via its own dual, which is equivalent to a lasso problem. The optimization variable is W, the covariance matrix, rather than the target precision matrix Θ. During the course of the iterations, a working version of Θ is maintained, but it may not be positive definite, and its inverse is not W. Tight convergence is therefore essential for the solution Θ to be a proper inverse covariance. There are issues using warm starts with glasso when computing a path of solutions. Unless the successive values of λ are sufficiently close, since the "warm starts" are not dual feasible, the algorithm can get into trouble.

We have also developed two primal algorithms, p-glasso and dp-glasso. The former is more expensive, since it maintains the relationship W = Θ⁻¹ at every step, an O(p³) operation per sweep across all rows/columns. dp-glasso is similar in flavor to glasso except that its optimization variable is Θ. It also solves the dual problem when computing its block update, in this case a box-QP. This box-QP has attractive sparsity properties at both ends of the regularization path, as evidenced in some of our experiments. It maintains a positive definite Θ throughout its iterations, and can be started at any positive definite matrix. Our experiments show in addition that dp-glasso is faster than glasso.

An R package implementing dp-glasso will be made available on CRAN.

A Online Appendix

This section complements the examples provided in the paper with further experiments and illustrations.

A.1 Examples: Non-Convergence of glasso with warm-starts

This section illustrates with examples that warm-starts for glasso need not converge. This is a continuation of the examples presented in Section 8.

Example 1: We took (n, p) = (2, 5) and, setting the seed of the random number generator in R as set.seed(2008), we generated a data-matrix X_{n×p} with iid standard Gaussian entries, from which the sample covariance matrix S was computed. With q denoting the maximum off-diagonal entry of S (in absolute value), we solved (1) using glasso at λ = 0.9 × q. The covariance matrix for this λ was taken as a warm-start for the glasso algorithm with λ′ = λ × 0.01. The smallest eigen-value of the working covariance matrix W produced by the glasso algorithm upon updating the first row/column was −0.002896128, which is clearly undesirable for the convergence of the algorithm. This is why the algorithm glasso breaks down.

Example 2: This example is similar to the one above, with (n, p) = (10, 50), the seed of the random number generator in R being set to set.seed(2008), and X_{n×p} the data-matrix with iid Gaussian entries. If the covariance matrix Ŵ_λ which solves problem (1) with λ = 0.9 × max_{i≠j} |s_ij| is taken as a warm-start for the glasso algorithm with λ′ = λ × 0.1, the algorithm fails to converge. Like the previous example, after the first row/column update, the working covariance matrix has negative eigen-values.

B Further Experiments and Numerical Studies

This section is a continuation of Section 9, in that it provides further examples comparing the performance of the algorithms glasso and dp-glasso. The experimental data is generated as follows. For a fixed value of p, we generate a matrix A_{p×p} with random Gaussian entries. The matrix is symmetrized by A ← (A + A′)/2. Approximately half of the off-diagonal entries of the matrix are set to zero, uniformly at random. All the eigen-values of the matrix A are lifted so that the smallest eigen-value is zero. The noiseless version of the precision matrix is given by Θ = A + τI_{p×p}. We generated the sample covariance matrix S by adding symmetric positive semi-definite random noise N to Θ⁻¹, i.e., S = Θ⁻¹ + N, where this noise is generated in the same manner as A. We considered four different values of p ∈ {300, 500, 800, 1000} and two different values of τ ∈ {1, 4}.

For every (p, τ) combination we considered a path of twenty λ values on the geometric scale. For every such case, four experiments were performed: Primal-Cold, Primal-Warm, Dual-Cold and Dual-Warm (as described in Section 9). Each combination was run 5 times, and the results averaged, to avoid dependencies on machine loads. Figure 4 shows the results. Overall, dp-glasso with warm starts performs the best, especially at the extremes of the path. We gave some explanation for this in Section 6. For the largest problems (p = 1000) their performances are comparable in the central part of the path (though dp-glasso dominates), but at the extremes dp-glasso dominates by a large margin.
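A hedged R sketch of the data-generating process just described; the function names and the symmetric-sparsification details are our reading of the text, not the authors' exact code.

```r
# Random symmetric matrix with ~half of the off-diagonal entries zeroed
# (symmetrically, uniformly at random), eigenvalues lifted so the smallest is zero.
random_lifted_sym <- function(p, sparsity = 0.5) {
  A <- matrix(rnorm(p * p), p, p)
  A <- (A + t(A)) / 2
  Z <- upper.tri(A) & (matrix(runif(p * p), p, p) < sparsity)
  A[Z] <- 0
  A[lower.tri(A)] <- t(A)[lower.tri(A)]      # keep the zero pattern symmetric
  ev_min <- min(eigen(A, symmetric = TRUE, only.values = TRUE)$values)
  A - ev_min * diag(p)                       # smallest eigenvalue is now zero
}

make_appendix_problem <- function(p, tau) {
  Theta <- random_lifted_sym(p) + tau * diag(p)   # noiseless precision matrix
  N <- random_lifted_sym(p)                       # PSD noise, generated like A
  list(Theta = Theta, S = solve(Theta) + N)       # S is the input to problem (1)
}
```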
Figure 2 Figure 2 : Figure2: Figure illustrating some negative properties of glasso using a typical numerical example.[Left Panel] The precision matrix produced after every row/column update need not be the exact inverse of the working covariance matrix -the squared Frobenius norm of the error is being plotted across iterations.[Right Panel] The estimated precision matrix Θ produced by glasso need not be positive definite along iterations; plot shows minimal eigen-value. Figure 3 : Figure 3: The timings in seconds for the four different algorithmic versions: glasso (with and without warmstarts) and dp-glasso (with and without warm-starts) for a grid of λ values on the log-scale.[Left Panel] Covariance model for Type-1, [Right Panel] Covariance model for Type-2.The horizontal axis is indexed by the proportion of zeros in the solution.The vertical dashed lines correspond to the optimal λ values for which the estimated errors Θ λ − Θ 1 (green) and Θ λ − Θ F (blue) are minimum. Table 2 : Table showing comparative timings of the four algorithmic variants of glasso and dp-glasso for the covariance model in Type-2.This table is similar to Table1, displaying results for Type-1.dp-glasso with warm-starts consistently outperforms all its competitors. Table 3 : Comparisons among algorithms for a microarray dataset with n = 62 and p = 725, for different .1 Examples: Non-Convergence of glasso with warm-starts This section illustrates with examples that warm-starts for the glasso need not converge.This is a continuation of examples presented in Section 8. We took (n, p) = (2, 5) and setting the seed of the random number generator in R as set.seed(2008)we generated a data-matrix X n×p with iid standard Gaussian entries.The sample covariance matrix S is given below:
A boundary projection for the dilation order

Motivated by Arveson's conjecture, we introduce a notion of hyperrigidity for a partial order on the state space of a $C^*$-algebra $B$. We show how this property is equivalent to the existence of a boundary: a subset of the pure states which completely encodes maximality in the given order. In the classical case where $B$ is commutative, such boundaries are known to exist when the partial order is induced by some well-behaved cone. However, the relevant order for the purposes of Arveson's conjecture is the dilation order, which is not known to fit into this framework. Our main result addresses this difficulty by showing that the dilation maximal states are stable under absolute continuity. Consequently, we obtain the existence of a boundary projection in the bidual $B^{**}$, on which all dilation maximal states must be concentrated. The topological regularity of this boundary projection is shown to lie at the heart of Arveson's conjecture. Our techniques do not require $B$ to be commutative.

Introduction

Let B be a unital C*-algebra generated by an operator system S ⊂ B. A unital *-representation π : B → B(H) is said to have the unique extension property with respect to S if any unital completely positive map ψ : B → B(H) agreeing with π on S must in fact agree with π everywhere on B. This special class of *-representations, and especially its irreducible elements called boundary representations, first rose to prominence as the centrepiece of Arveson's program for constructing the C*-envelope of S [4], [5], which eventually came to fruition in [17], [6], [14]. Beyond this important application, *-representations with the unique extension property have since become a mainstay in modern non-selfadjoint operator algebra theory, where they can be meaningfully interpreted as forming the non-commutative Choquet boundary of S; see [15], [12] and the references therein.

In [7], Arveson showed that a certain rigidity property of S inside of B, inspired by a classical phenomenon observed by Korovkin in approximation theory, could be reformulated using the unique extension property. More precisely, we say that S is hyperrigid if every unital *-representation of B has the unique extension property with respect to S. Guided by Šaškin's explanation of the Korovkin phenomenon [34], Arveson ventured the following.

Hyperrigidity conjecture. The operator system S is hyperrigid in B whenever all irreducible *-representations of B are boundary representations for S.

In essence, Arveson conjectured that to verify that the unique extension property holds for all *-representations, it suffices to determine whether it holds for all irreducible *-representations. This conjecture is still wide open, despite attracting attention for more than a decade [25], [22], [10], [9], [31], [21], [23]. Notably, even the case where B is commutative has not been settled; see [16] for some recent work in this direction. Our aim in this paper is to shed new light on the conjecture, with no requirement of commutativity.

Date: October 27, 2023. R.C. was partially supported by an NSERC Discovery Grant.

Basic theory of C*-algebras guarantees that any *-representation is a direct sum of cyclic ones. In addition, the unique extension property passes both to direct sums and to subrepresentations [11, Lemma 2.8], so that S is hyperrigid in B whenever all cyclic *-representations of B have the unique extension property with respect to S.
Recent work of Davidson and Kennedy [15], [16] exhibited a mechanism for tackling this issue. Let ϕ be a state on B. Let π : B → B(H) be a unital *-representation and let ξ ∈ H be a unit vector. We say that the triple (π, H, ξ) is a representation of ϕ if ϕ(b) = ⟨π(b)ξ, ξ⟩ for every b ∈ B. Via the GNS construction, there is a one-to-one correspondence between states and cyclic *-representations. As shown in [15], [16], the unique extension property for a cyclic *-representation π is entirely encoded in the corresponding state ϕ. More precisely, it follows from [15, Theorem 8.3.7] that π has the unique extension property with respect to S precisely when the state ϕ is maximal in a certain partial order on the state space of B, called the dilation order (the precise definition of this order will be given in Section 3). Consequently, a deeper understanding of the dilation maximal states is highly relevant for the purposes of Arveson's conjecture, and this is what our work aims to provide.

In the special case where B is commutative, the authors of [16] capitalize on this alternative characterization of the unique extension property by relating the dilation order to another order, much more classical and transparent, called the Choquet order. Crucially, the maximal elements in the Choquet order are completely understood as those states on B concentrated on a certain subset, called the Choquet boundary. As explained in [16, Question 8.4], Arveson's conjecture, at least in the commutative setting, then becomes equivalent to determining whether the dilation and Choquet orders share the same maximal elements. We explore this theme further, and examine abstract partial orders on state spaces of arbitrary C*-algebras.

For a unital C*-algebra B, we let E(B) denote its state space, and E_p(B) denote the pure states. Given a state ϕ on B, we let R_ϕ denote the set of Borel probability measures µ on E_p(B) satisfying ϕ = ∫ ω dµ(ω). Such measures always exist, at least in the separable setting [8, Theorem 4.2].

Given a Borel measurable subset X ⊂ E_p(B), we let Σ_X denote the set of those states ϕ for which there exists µ ∈ R_ϕ concentrated on X. Furthermore, we let Σ̄_X denote the set of those states ϕ for which every µ ∈ R_ϕ is concentrated on X.

Let ∆ ⊂ E(B) × E(B) be a partial order, and denote its maximal elements by max(∆). A Borel measurable subset X ⊂ E_p(B) will be called a ∆-boundary if [defining display not recovered]. We say that the order ∆ is hyperrigid if Σ_Ω ⊂ max(∆), where Ω = E_p(B) ∩ max(∆) is the set of pure ∆-maximal states. In other words, ∆ is hyperrigid precisely when states of the form ∫ ω dµ(ω) are ∆-maximal, where µ is a Borel probability measure concentrated on the pure ∆-maximal states.

In Section 2, the relationships between the above notions are studied. In Corollary 2.4, we show that hyperrigidity of ∆ is equivalent to the existence of a ∆-boundary, at least when B is separable and ∆ is weak-* closed and convex.

In Section 3, we define the dilation order D(S, B) ⊂ E(B) × E(B) relative to an operator system S generating B. Using the terminology introduced above, in Corollary 3.2 we show that Arveson's conjecture can be reformulated as saying that D(S, B) is hyperrigid whenever all pure states are D(S, B)-maximal. In turn, this is equivalent to the existence of a D(S, B)-boundary.
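For ease of reference, the notation just introduced can be collected in display form. The following merely restates the definitions above, with the barycentre integrals written out (the integral signs did not survive extraction elsewhere in the text); the ∆-boundary condition itself is not restated, since its defining display was lost:

\[
  R_\varphi = \Big\{ \mu \in \mathrm{Prob}\big(E_p(B)\big) :
      \varphi(b) = \int_{E_p(B)} \omega(b)\, d\mu(\omega)
      \ \text{for all } b \in B \Big\},
\]
\[
  \Sigma_X = \{ \varphi \in E(B) : \text{there exists } \mu \in R_\varphi
      \text{ concentrated on } X \}, \qquad
  \overline{\Sigma}_X = \{ \varphi \in E(B) : \text{every } \mu \in R_\varphi
      \text{ is concentrated on } X \},
\]
\[
  \Delta \text{ is hyperrigid} \iff \Sigma_\Omega \subset \max(\Delta),
  \qquad \text{where } \Omega = E_p(B) \cap \max(\Delta).
\]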
Classically, it is known that certain partial orders on the state space of a commutative C*-algebra always admit boundaries. For instance, in [3, Corollary I.5.18] the condition that the order be induced by some cone in B that is stable under taking maxima is shown to be sufficient. We thus examine the dilation order from this perspective, so as to determine whether the aforementioned machinery can be employed to produce the boundary that we seek. To do this, we rely rather heavily on the developments in non-commutative Choquet theory and function theory obtained recently by Davidson and Kennedy [15].

When B is chosen to be the so-called maximal C*-cover of S, we show in Proposition 3.6 that the dilation order is indeed induced by some cone Ξ, whose closure is unique with this property (Corollary 3.10). For a general representation of S, these facts still allow us to identify the dilation maximal states in terms of an order induced by a cone (Theorem 3.8). Unfortunately, even when S can be represented in a commutative C*-algebra, the corresponding cone is not stable under maxima (Proposition 3.5), so that the aforementioned result from [3] cannot be used to manufacture a boundary and thus resolve Arveson's conjecture.

In Section 4, we will see how this difficulty in exhibiting a boundary can be somewhat circumvented, provided that one is willing to look for a "non-classical" boundary. The key observation that we make is that D(S, B)-maximality is preserved under absolute continuity (Theorem 4.4), thus enabling the use of some results from [12] on non-commutative measure theory. The main result of the paper is then the following (see Theorem 4.5).

Theorem A. Let B be a unital C*-algebra and let S ⊂ B be an operator system such that B = C*(S). Then, there exists a projection d ∈ B** with the property that a state ϕ on B is D(S, B)-maximal precisely when ϕ(d) = 1.

We refer to the projection d above as the boundary projection. We examine the non-commutative topological properties of d, in the sense of Akemann [1], [2]. As an application, our second main result reformulates Arveson's conjecture in terms of regularity properties of d; see Corollary 4.7.

Theorem B. Assume that B is separable and that every pure state on B is D(S, B)-maximal. Then, the following statements are equivalent. (i) The operator system S is hyperrigid in B. (ii) The boundary projection d is closed. (iii) The boundary projection d is the infimum of a collection of open projections in B**.

Boundaries and hyperrigidity for pre-orders

Let B be a unital C*-algebra, let E(B) denote its state space, and let E_p(B) denote the pure states. In this section, we aim to understand the structure of the maximal elements in some pre-orders defined on E(B). In the next section, our findings will be applied to a specific example of a partial order, but we proceed here in greater generality.

We begin with a technical fact. Fix a dense sequence (a_n) of self-adjoint elements of B (with B assumed separable here), and for integers m, n ≥ 1 let K_{n,m} denote the set of states ϕ on B for which there is a state ψ with (ϕ, ψ) ∈ ∆ and ψ(a_n) − ϕ(a_n) ≥ 1/m; this definition is recovered from a displaced proof fragment later in the text. Fix integers m, n ≥ 1, and let (ϕ_i) be a net in K_{n,m} converging to some state ϕ ∈ E(B) in the weak-* topology. By definition, this means that there is another net of states (ψ_i) such that (ϕ_i, ψ_i) ∈ ∆ and ψ_i(a_n) − ϕ_i(a_n) ≥ 1/m. Upon passing to a cofinal subnet, we may assume that (ψ_i) also converges to some state ψ ∈ E(B) in the weak-* topology. Clearly, we then have ψ(a_n) − ϕ(a_n) ≥ 1/m, while (ϕ, ψ) ∈ ∆ since ∆ is assumed to be weak-* closed. This shows that ϕ ∈ K_{n,m}, so indeed K_{n,m} is closed in the weak-* topology.

Finally, the previous paragraph implies that the ∆-maximal states form a G_δ-set, and hence a Borel measurable set.
Given a state ϕ on B, we let R ϕ denote the set of Borel probability measures µ on E(B) concentrated on E p (B) and satisfying When B is separable, such measures always exist [8,Theorem 4.2]. The following is inspired by the proof of [8, Corollary 3.3], and it generalizes the separable version of [15, Proposition 9.2.5] to a large class of pre-orders. Theorem 2.2.Let B be a separable unital C * -algebra.Let ∆ ⊂ E(B) × E(B) be a weak- * closed, convex pre-order.Let ϕ be a ∆-maximal state on B and let µ be a measure in R ϕ .Then, µ is concentrated on the pure ∆-maximal states. Proof.First note that because B is separable, the set E p (B) is Borel measurable [8, Corollary 3.3 and Lemma 4.1].Let N ⊂ E p (B) denote the set of pure states that are not ∆-maximal.Then, N is Borel measurable by Lemma 2.1.Our goal is to show that µ(N ) = 0. Assume for the sake of contradiction that µ(N ) > 0. By Lemma 2.1, there is a self-adjoint element a ∈ B and ε > 0 such that µ(K) > 0, where K is the set of pure states ω on B for which there is a state ψ with (ω, ψ) ∈ ∆ and ω(a)−ψ(a) ≥ ε. Define a non-zero positive linear functional ϕ As is well known, we can find a net (α i ) of finitely supported Borel probability measures on K such that lim i K Correspondingly, for each i we may define a state β i on B as Note then that (β i ) converges to 1 µ(K) ϕ ′ in the weak- * topology of B * .By definition of K, since each α i is a finite convex combination of point masses on K and ∆ is convex, for each i there is a state be a weak- * cluster point of (ψ i ).Using that ∆ is weak- * closed, taking the weak- * limit of a cofinal subnet of (β i , ψ i ), we find ( Then, θ is a convex combination of the state ψ and the state ϕ ′′ = 1 1−µ(K) (ϕ − ϕ ′ ), and hence is state itself.We note that so that ϕ = θ, which contradicts the fact that ϕ is ∆-maximal.Consequently, µ(N ) = 0 as desired. Next, we examine a converse to Theorem 2.2.For a Borel measurable subset X ⊂ E p (B), we let Σ X denote the set of those states ϕ on B for which there exists µ ∈ R ϕ concentrated on X.Furthermore, we let Σ X denote the set of those states ϕ on B for which every µ ∈ R ϕ is concentrated on X. Plainly, we have Σ X ⊂ Σ X as long as R ϕ is non-empty.It follows immediately from [8,Lemma 4.1] that if ω is a pure state on B, then R ω consists only of the point mass at ω. Hence Let ∆ ⊂ E(B)×E(B) be pre-order, and denote its maximal elements by max(∆).Under natural conditions, we can show that maximal elements are always plentiful, at least for partial orders. Proof.Let Z denote the set of states ψ on B such that (ϕ, ψ) ∈ ∆.Let C ⊂ Z be a chain.There is a cofinal subnet Λ of C that converges to some τ in the weak- * topology.Since ∆ is assumed to be weak- * closed, we see that (ϕ, τ ) ∈ ∆, whence τ ∈ Z. We claim that θ is in fact ∆-maximal.To see this, assume that γ is a state on B such that (θ, γ) ∈ ∆.Then, (ϕ, γ) ∈ ∆ so that γ ∈ Z. Maximality of θ in Z then forces θ = γ, thereby completing the proof. A Borel measurable subset X ⊂ E p (B) will be called a ∆-boundary if We also set Ω = max(∆)∩E p (B), that is, Ω is the set of pure ∆-maximal states.Recall that by [8, Corollary 3.3 and Lemma 4.1] and Lemma 2.1, Ω is Borel measurable whenever B is separable.We say that the order ∆ is hyperrigid if Σ Ω ⊂ max(∆).In other words, this says that a state of the form ωdµ(ω) is ∆-maximal if µ is a Borel probability measure concentrated on Ω.This condition is vacuously satisfied if ∆ has simply no maximal elements. 
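Several displays in the proof of Theorem 2.2 above did not survive extraction. The following reconstruction is inferred from the surrounding prose (the normalization by µ(K) and the final convex decomposition pin down the formulas); it should be read as our best guess rather than the authors' text:

\[
  \varphi'(b) = \int_K \omega(b)\, d\mu(\omega), \qquad
  \beta_i(b) = \int_K \omega(b)\, d\alpha_i(\omega), \qquad
  \lim_i \beta_i(b) = \tfrac{1}{\mu(K)}\varphi'(b), \quad b \in B,
\]
\[
  (\beta_i, \psi_i) \in \Delta, \quad \beta_i(a) - \psi_i(a) \geq \varepsilon,
  \qquad\text{and in the limit}\qquad
  \Big(\tfrac{1}{\mu(K)}\varphi', \psi\Big) \in \Delta, \quad
  \tfrac{1}{\mu(K)}\varphi'(a) - \psi(a) \geq \varepsilon,
\]
\[
  \theta = \mu(K)\,\psi + (1 - \mu(K))\,\varphi'', \qquad
  \varphi'' = \tfrac{1}{1-\mu(K)}(\varphi - \varphi'),
\]
so that $(\varphi, \theta) \in \Delta$ by convexity of $\Delta$, while
$\varphi(a) - \theta(a) \geq \varepsilon\,\mu(K) > 0$, forcing
$\theta \neq \varphi$ and contradicting the $\Delta$-maximality of $\varphi$.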
The following is the main result of this section, and it shows that hyperrigidity and boundaries are closely related. Corollary 2.4.Let B be a separable unital C * -algebra.Let ∆ ⊂ E(B) × E(B) be a weak- * closed, convex pre-order.Then, the following statements are equivalent. (i) There is a Borel measurable subset X ⊂ E p (B) such that max(∆) = Σ X . (ii) The set Ω of pure ∆-maximal states is a ∆-boundary. The dilation order The main driving force of this paper is to ascertain the hyperrigidity (in the sense of Section 2) of the so-called dilation order relative to an operator system, as introduced in [16], [15].The goal of this section is to give a precise definition of this order, and to establish some of its properties.The results therein all rely heavily on the technical machinery of non-commutative function theory developed in [15]. In an effort to keep the exposition light, we only recall the bare minimum from that paper.The interested reader should consult [15] to fill in the gaps as needed. 3.1.Definition.Let B be a unital C * -algebra and let S ⊂ B be an operator system such that C * (S) = B. Let E(B) denote the state space of B. Let ϕ ∈ E(B).Recall that by a representation of ϕ we mean a triple (π, H, ξ) consisting of a Hilbert space H, a unital * -representation π : B → B(H) and a unit vector ξ ∈ H that satisfies When ξ happens to be a cyclic vector for π, then the representation (π, H, ξ) is unitarily equivalent to the GNS representation of ϕ.In general, we will need to consider non-cyclic representations as well. 3.2. The maximal C * -cover.The dilation order does not depend only on the operator system, but also on the ambient C * -algebra.For the "maximal" representation of the operator system, the dilation order admits an alternative description, more in line with the classical Choquet order alluded to in the introduction and examined in [16].We recall some of the details underlying this non-trivial fact. Operator systems can be defined abstractly with no mention of an ambient C *algebra or concrete representation on Hilbert space by means of the Choi-Effros theorem [26,Theorem 13.1].The following concept is useful in studying the flexibility afforded by this coordinate-free approach. Let S be an operator system.A C * -cover of S is a pair (B, θ) consisting of a unital C * -algebra B and a unital completely isometric map θ : S → B such that B = C * (θ(S)).It is known [24] that there exists a maximal C * -cover (A, j).More precisely, this C * -cover has the property that, given any other C * -cover (B, θ), there is a surjective unital * -homomorphism π : A → B such that π • j = θ.It is customary to use the notation A = C * max (S) for this maximal C * -cover.Further, it is well known that C * max (S) satisfies a formally stronger condition, which we record next for later reference. 3.3. Non-commutative functions.Our next goal is to give an alternative description of the dilation order.Let us first set some notation regarding matrices indexed by sets with infinite cardinality. Let H be a Hilbert space and let S ⊂ B(H) be an operator system.Let m be a cardinal number.Let H (m) = n<m H, where the direct sum is taken over all cardinal numbers n < m.In standard fashion, a bounded linear operator T on H (m) corresponds to a matrix [t i,j ] i,j<m with t i,j : H → H such that sup{ [t i,j ] i,j∈I : I finite subset of {n < m}} is finite.Here, given a finite subset I of {n < m}, we identify [t i,j ] i,j∈I with an operator on the Hilbert space n∈I H. 
Accordingly, we may view M m (S) ⊂ B(H (m) ) as those matrices with entries in S for which the corresponding collection of finite submatrices is bounded.In particular, M m (S) is another operator system. Let κ be an infinite cardinal greater than the linear dimension of S. Given a cardinal number n ≤ κ, we fix a Hilbert space H n of dimension n.We simply write C for H 1 and B(H 1 ). We let K n denote the set of all unital completely positive maps from S into B(H n ).Thus, K n is a convex subset of the unit ball of the space of all completely bounded maps from S into B(H n ).In addition, each K n compact in the topology of pointwise weak- * convergence. The collection K = (K n ) n≤κ enjoys some additional compatibility relations betweens each of its levels K n , which makes it an nc convex set ; see [ If F is self-adjoint, we say that it is convex if, for each cardinal n ≤ κ, each pair of elements ϕ, ψ ∈ K n and each number 0 ≤ t ≤ 1, we have The set of all continuous nc functions from K into C is a unital C * -algebra, which we will denote by A. By virtue of [15,Theorem 4.4.3]there is a * -isomorphism for each element b ∈ C * max (S), each map ϕ ∈ K n and each cardinal n ≤ κ.Therefore, any continuous nc function F = (F (i,j) ) i,j<m : K → M m (C) gives rise to a matrix [Φ −1 (F (i,j) )] i,j<m with entries in C * max (S). 3.4.The cone Ξ.We define Ξ ⊂ C * max (S) to be the set consisting of elements of the form i,j∈I c j c i Φ −1 (F (i,j) ) where F = (F (i,j) ) i,j<m : K → M m (C) is a convex continuous nc function for some cardinal m ≤ κ, I is some finite subset of {n < m}, and {c i : i ∈ I} is a set of complex numbers.Proof.Let m and n be two cardinal numbers at most equal to κ.Let F : K → M m (C) and G : K → M n (C) be two convex continuous nc functions.Define a function H : K → M m+n (C) as It is easily verified that H is still a convex continuous nc function. Next, let I ⊂ {r < m} and J ⊂ {r < n} be two finite subsets, and correspondingly let {c i : i ∈ I} and {d j : j ∈ J} be two finite subsets of complex numbers.Let s, t ≥ 0. It is easily verified that there exists a finite subset Λ ⊂ {r < n + m} and finitely many complex numbers {α λ : λ ∈ Λ} such that Hence, Ξ is indeed a cone. Next, we wish to gain a more concrete understanding of the elements of Ξ.For this purpose, we will consider restrictions of nc functions on K to the first level K 1 , which is simply the state space of S. More precisely, consider the unital * -homomorphism ρ : A → C(K 1 ) defined as for each continuous nc function f = (f n ) : K → C and each ϕ ∈ K 1 .Define also the natural evaluation map ε : for each s ∈ S and ϕ ∈ K 1 .This is a unital completely contractive map, so by Lemma 3.3, there is a unique unital surjective * -homomorphism q : C * max (S) → C(K 1 ) such that q • j = ε on S. For s ∈ S and ϕ ∈ K 1 , using (2) we see that We can now give a fairly concrete description of the restrictions to K 1 of the elements in Ξ. Proposition 3.5.Let S be a operator system with state space L. Let Γ ⊂ C(L) denote the closed cone of continuous convex functions on L. Let ε : S → C(L) be the evaluation map, and let q : C * max (S) → C(L) denote the surjective unital * -homomorphism satisfying q • j = ε on S.Then, q(Ξ) ⊂ Γ and q(Ξ) contains all restrictions to L of affine weak- * continuous functions on S * .Furthermore, Γ is the smallest closed cone of C(L) stable under maxima and containing q(Ξ). Proof.Recall that for each cardinal n ≤ κ, K n is the set of all completely positive maps from S into B(H n ).Hence, K 1 = L. 
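The displayed inequality defining convexity of a self-adjoint nc function did not survive extraction above. The standard condition, which the proof of Proposition 3.5 below uses, presumably reads as follows (in the operator order on M_m(B(H_n))):

\[
  F_n\big(t\varphi + (1-t)\psi\big) \;\leq\;
  t\,F_n(\varphi) + (1-t)\,F_n(\psi),
  \qquad \varphi, \psi \in K_n, \quad 0 \leq t \leq 1, \quad n \leq \kappa.
\]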
Fix s ∈ S. The evaluation function s : K → C defines a continuous nc function, which is readily seen to be convex.By (2) we see that Φ −1 ( s) = j(s), so that j(s) ∈ Ξ. Hence q(Ξ) contains q(j(s)) = ε(s) for every s ∈ S, and hence contains all restrictions to L of affine weak- * continuous functions on S * . Next, let ξ ∈ Ξ.By definition, this means that there is a cardinal m ≤ κ, a convex continuous nc function F = (F (i,j) ) i,j<m : K → M m (C), a finite subset I ⊂ {n < m}, and a subset {c i : i ∈ I} complex numbers such that Let h = (h n ) be the vector in n<m C such that h n = c n if n ∈ I, and h n = 0 otherwise.Then, using that F is convex and applying (3), we obtain for each ϕ, ψ ∈ K 1 and 0 We infer that q(ξ) is convex on L. We have thus proved that q(Ξ) contains all restrictions to L of affine weak- * continuous functions on S * , and it is contained in Γ.The second conclusion follows directly from this, in light of [3,Corollary I.1.3]. 3.5. The pre-order induced by Ξ.The motivation for introducing Ξ is the next development.We define ) to be the pre-order consisting of those pairs of states (ϕ, ψ) satisfying ϕ(ξ) ≤ ψ(ξ) for every ξ ∈ Ξ. Proposition 3.6.Let S be an operator system with maximal C * -cover (C * max (S), j).Then, D(j(S), C * max (S)) = Order(Ξ).In particular, D(j(S), C * max (S)) is a convex, weak- * closed partial order on the state space of C * max (S).Proof.By [15,Theorem 8.5.1], we see that (ϕ, ψ) ∈ D(j(S), for every convex nc continuous function F = [F (i,i) ] i,j<m : K → M m (C) and every cardinal number m ≤ κ.For such a function F , the required inequality is equivalent to for every finitely supported vector h in n<m C. Given such finitely supported vector h, there is a finite subset I ⊂ {n < m} and complex numbers In other words, D(j(S), C * max (S)) = Order(Ξ).It is routine to check that this implies that D(j(S), C * max (S)) is convex and weak- * closed.The previous result applies only to operator systems that are represented in their maximal C * -covers.Our next aim is to show that Proposition 3.6 still contains relevant information about dilation maximal states for any representation of S. For this purpose, we need the following.Lemma 3.7.Let B be a unital C * -algebra generated by an operator system S ⊂ B. Let q : C * max (S) → B be the surjective unital * -homomorphism such that q • j = id on S. Let ϕ be a state on B. Then, ϕ is D(S, B)-maximal if and only if ϕ • q is D(j(S), C * max (S))-maximal.Proof.Let π : B → B(H) be the GNS representation of ϕ.It is then easily verified that π • q is the GNS representation of ϕ • q.Since q is completely isometric on S, we infer that π has the unique extension property with respect to S if and only if π • q has the unique extension property with respect to j(S); see for instance [4, Theorem 2.1.2],the proof of which easily adapts outside the irreducible setting.The desired conclusion then follows from Theorem 3.1. Retaining the notation from above, we can now state the main result of this section. Theorem 3.8.Let B be a unital C * -algebra generated by an operator system S ⊂ B. Let q : C * max (S) → B be the surjective unital * -homomorphism such that q • j = id on S.Then, max(D(S, B)) = max(Order(q(Ξ))). Let us explore some of the ramifications of the previous result in relation to Arveson's conjecture in the commutative setting. Let B be a separable commutative unital C * -algebra and let S ⊂ B be an operator system with C * (S) = B. Let L denote the state space of S. 
Because B is commutative, the evaluation map ε : S → C(L) is completely isometric.Let q : C * max (S) → C(L) denote the unique surjective unital * -homomorphism such that q • j = ε on S. Under the assumption that all pure states on C(L) are D(ε(S), C(L))-maximal, to establish the conjecture, we need to prove that D(ε(S), C(L)) is hyperrigid (see Corollary 3.2).On the other hand, the property of being hyperrigid only depends on the set of maximal elements, so we may replace the dilation order by any preorder with the same maximal elements.In light of Theorem 3.8, this means that can just as well try to show that Order(q(Ξ)) is hyperrigid. In turn, by Corollary 2.4, this is equivalent to the existence of a boundary for Order(q(Ξ)).In this context, we may thus hope to apply the classical machinery of [3,Corollary I.5.18] to construct such a boundary.This strategy essentially reduces to the one employed in [16].Indeed, in order for [3,Corollary I.5.18] to be applicable, the cone q(Ξ) would need to be stable under taking maxima.If this were the case, then by virtue of Proposition 3.5, we would know that the closure of q(Ξ) coincides with the cone of all continuous convex functions on L. In turn, Theorem 3.8 would then imply that the dilation maximal elements coincide with so-called Choquet maximal elements, which are at the heart of [16]. 3.6.Uniqueness of Ξ.It is natural now to wonder whether Ξ is the unique cone in C * max (S) that satisfies Proposition 3.6.Before we can address this question, we introduce some notation and terminology. Let B be a unital C * -algebra.Given a pre-order ∆ ⊂ E(B) × E(B), we define the induced cone of ∆ to be the set Cone(∆) ⊂ B of self-adjoint elements b with the property that ϕ(b) ≤ ψ(b) whenever (ϕ, ψ) ∈ ∆.If, conversely, we are given a cone Γ ⊂ B of self-adjoint elements, we define the induced order of Γ to be the pre-order Order(Γ) ⊂ E(B) × E(B) consisting of those pairs of states (ϕ, ψ) satisfying There exists a certain duality between these objects, as we show next.Theorem 3.9.Let B be a unital C * -algebra and let Γ ⊂ B be a cone of self-adjoint elements containing both 1 and −1.Then, Cone(Order(Γ)) is the norm closure of Γ. Proof.By continuity, it follows from the definitions that the norm closure of Γ is contained in Cone(Order(Γ)).Assume that there is a self-adjoint element b ∈ Cone(Order(Γ)) outside the norm closure of Γ.By the convex separation theorem, we can find a bounded linear functional θ on B such that Here, we let Re θ = (θ + θ * )/2; this is a self-adjoint bounded linear functional on B. One may wonder if the "duality" uncovered above between cones and pre-orders goes in the other direction, namely whether Order(Cone(∆)) = ∆ for any pre-order ∆ on the state space of B. It is readily seen that ∆ must be convex and weak- * closed for this to hold, but at the time of this writing we do not know if these necessary conditions are also sufficient. We can now address the uniqueness question raised earlier. 
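The defining condition of the induced order was dropped at the end of the relevant sentence in Section 3.6 above. Written side by side with the induced cone, the intended definitions are the following displays, dual to one another in the sense of Theorem 3.9:

\[
  \mathrm{Cone}(\Delta) = \{\, b = b^* \in B :
     \varphi(b) \leq \psi(b) \ \text{for all } (\varphi, \psi) \in \Delta \,\},
\]
\[
  \mathrm{Order}(\Gamma) = \{\, (\varphi, \psi) \in E(B) \times E(B) :
     \varphi(b) \leq \psi(b) \ \text{for all } b \in \Gamma \,\}.
\]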
Detecting hyperrigidity with the boundary projection In the discussion following Theorem 3.8, we saw that the existence of a boundary for the dilation order cannot simply be inferred from the known classical techniques of [3], even in the commutative setting.In this section, we exploit non-commutative machinery to exhibit a certain "non-classical" boundary.We also illustrate how its regularity properties are deeply intertwined with Arveson's conjecture.The crucial idea is to consider absolute continuity for states, and how it interacts with maximality in the dilation order. Let B be a unital C * -algebra.The bidual B * * is then a von Neumann algebra.If π : B → B(H) is a * -representation, then it admits a unique weak- * continuous extension π : B * * → B(H), which is also a * -representation.Given a state ϕ on B, we consider its left kernel Let ψ be another state on B. We say that ϕ is absolutely continuous with respect to ψ if L ψ ⊂ L ϕ . As shown in [13,Lemma 2.6], this definition coincides with the usual one when B is commutative.Unlike in the classical setting however, the existence of some form of a Radon-Nikodym theorem in the general case is a rather subtle issue, and no perfect analogue exists as far as we know; see [29], [28], [18], [33], [19] and the references therein.Fortunately, this difficulty can be circumvented via the following fact. Lemma 4.1.Let B be a unital C * -algebra.Let ϕ, ψ be states on B with respective GNS representations (π, H, ξ) and (σ, K, η).Assume that ϕ is absolutely continuous with respect to ψ.Then, the following statements hold.(ii) The GNS representation of ϕ is unitarily equivalent to a subrepresentation of σ (∞) . Proof.(i) Let θ be a D(S, B)-maximal state on B. Assume that there is 0 < t < 1 and states ϕ, ψ on B such that θ = tϕ + (1 − t)ψ.It is easily verified that both ϕ and ψ are absolutely continuous with respect to θ, so that ϕ, ψ are also D(S, B)maximal by Theorem 4.4.This shows that the D(S, B)-maximal elements form a face of the state space of B. Next, let (ϕ n ) be a sequence of D(S, B)-maximal states converging in norm to some state ϕ on B. For each n, let σ n : B → B(H n ) be the GNS representation of ϕ n , which has the unique extension property with respect to S by Theorem 3.1.Arguing as in the proof of Proposition 4.2(ii), we see that the GNS representation of A standard multiplicative domain argument reveals that this is simply equivalent to ϕ(d) = 1.Another application of Theorem 4.4 completes the proof. The projection d in the previous result completely determines the set of D(S, B)maximal elements: these are the states that are "concentrated" on d.This phenomenon is reminiscent of the notion of a D(S, B)-boundary from Section 2. For this reason, we call d the boundary projection of the dilation order.4.1.Hyperrigidity and non-commutative topology.We saw in Corollary 3.2 that the existence of a D(S, B)-boundary would yield a positive solution to Arveson's conjecture.For the remainder of the paper, we examine whether there is a similar relationship between the boundary projection d and the conjecture. By virtue of Theorem 3.8, we know that there is a weak- * closed convex pre-order on E(B) with the same maximal elements as D(S, B).If B is separable, the set Ω of pure D(S, B)-maximal states is therefore Borel measurable by Lemma 2.1.Recall that we denote by Σ Ω those states ϕ on B for which there is a Borel probability measure concentrated on Ω such that ϕ = Ω ωdµ(ω). 
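The display defining the left kernel was lost above. The standard definition, consistent with the usage of absolute continuity in [13] cited in this section, is:

\[
  L_\varphi = \{\, b \in B : \varphi(b^* b) = 0 \,\},
\]
and $\varphi$ is absolutely continuous with respect to $\psi$ precisely when
$L_\psi \subset L_\varphi$.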
A projection q ∈ B** is said to be closed if it is the decreasing weak-* limit of a net of contractions in B [1], [2]. When B is separable, the net can be chosen to be a sequence. The projection q is open if I − q is closed.

We now give a generalization of [8, Theorem 3.2].

Corollary 4.6. Let B be a separable unital C*-algebra and let S ⊂ B be an operator system such that B = C*(S). Any state ϕ ∈ Σ_Ω satisfies ϕ(q) = 0 if q ∈ B** is a closed projection orthogonal to the boundary projection d.

Proof. Fix a closed projection q ∈ B** orthogonal to d. Let ϕ be a state in Σ_Ω. We can find a Borel probability measure µ concentrated on the set Ω such that [the remainder of the proof was not recovered].

(ii) The boundary projection d is closed. (iii) The boundary projection d is the infimum of a collection of open projections in B**. [These are the remaining items of the equivalence in Theorem B and Corollary 4.7, displaced here by extraction.]

[Further displaced passages follow.]

..., we call D(S, B) the dilation order relative to S in B. It is readily seen that D(S, B) is a pre-order on E(B). It is in fact a partial order; this follows from [15, Proposition 7.2.8 and Theorem 8.5.1]. For the purposes of Arveson's hyperrigidity conjecture, the importance of the dilation order is made manifest in the following fact, which follows from [15, Proposition 5.2.3 and Theorem 8.3.7].

Theorem 3.1. Let B be a unital C*-algebra and let S ⊂ B be an operator system such that C*(S) = B. Let π : B → B(H) be a unital *-representation with unit cyclic vector ξ. Then, the following statements are equivalent. (i) π has the unique extension property with respect to S. (ii) The state b ↦ ⟨π(b)ξ, ξ⟩ is D(S, B)-maximal.

Using the terminology from Section 2, we can now reformulate Arveson's conjecture.

Corollary 3.2. Let B be a separable unital C*-algebra and let S ⊂ B be an operator system such that C*(S) = B. Assume that every pure state on B is D(S, B)-maximal. Then, the following statements are equivalent. (i) The operator system S is hyperrigid in B. (ii) The pre-order D(S, B) is hyperrigid. (iii) There exists a D(S, B)-boundary.

Proof. (i)⇒(ii): By assumption, every state on B is D(S, B)-maximal, so trivially D(S, B) is hyperrigid. (ii)⇒(i): By assumption, the set Ω of pure D(S, B)-maximal states coincides with the set of pure states on B. Thus, Σ_Ω is the entire state space by [8, Theorem 4.2]. Assuming that D(S, B) is hyperrigid, we thus conclude that every state on B is D(S, B)-maximal.

Lemma 3.4. The set Ξ is a cone in C*_max(S).

[Continuation of the proof of Theorem 4.5(i):] ... the GNS representation of ∑_{n=1}^∞ 2^{-n} ϕ_n is a subrepresentation of ⊕_{n=1}^∞ σ_n, and hence also has the unique extension property with respect to S [11, Lemma 2.8]. Therefore, ∑_{n=1}^∞ 2^{-n} ϕ_n is D(S, B)-maximal by Theorem 3.1. A routine calculation reveals that ϕ is absolutely continuous with respect to ∑_{n=1}^∞ 2^{-n} ϕ_n, whence ϕ is D(S, B)-maximal by Theorem 4.4. (ii) By (i), we may apply [13, Theorem 3.5] to find a projection d ∈ B** with the property that a state ϕ on B is absolutely continuous with respect to some D(S, B)-maximal state precisely when ϕ(b) = ϕ(db), b ∈ B.

[Fragment of the proof of Lemma 2.1:] The set K_{n,m} is contained in the set of states on B that are not ∆-maximal. Conversely, if ϕ is a state on B which is not ∆-maximal, then there is another state ψ ≠ ϕ such that (ϕ, ψ) ∈ ∆. This implies that there must be a self-adjoint element b ∈ B such that ϕ(b) ≠ ψ(b). Upon replacing b with −b if necessary, we may assume that ψ(b) − ϕ(b) > 0.
The density of the set {a_n} then easily implies that ϕ lies in the union of the sets K_{n,m}, completing this fragment of the proof of Lemma 2.1.

[A second displaced fragment, on nc convex sets and nc functions:] ...; see [15, Definition 2.2.1 and Example 2.2.6] for details. Fix a cardinal m ≤ κ. An nc function F : K → M_m(C) is a collection of functions F_n : K_n → M_m(B(H_n)), n ≤ κ, satisfying some natural compatibility and equivariance conditions; see [15, Definition 4.2.1]. For each n ≤ κ, we can write F_n as a matrix [f
The 144Ce source for SOX

The SOX (Short distance neutrino Oscillations with BoreXino) project aims at testing the light sterile neutrino hypothesis. To do so, two artificial sources, of antineutrinos and neutrinos respectively, will be consecutively deployed at the Laboratori Nazionali del Gran Sasso (LNGS) in close vicinity to Borexino, a large liquid scintillator detector. This document reports on the source production and transportation. The source should exhibit a long lifetime and a high decay energy, a requirement fulfilled by the 144Ce-144Pr pair at secular equilibrium. It will be produced at FSUE "Mayak" PA using spent nuclear fuel. It will then be shielded and packed according to international regulations and shipped to LNGS across Europe. Knowledge of the Cerium antineutrino generator (CeANG) parameters is crucial for SOX, as it can strongly impact the experiment sensitivity. Several apparatuses are being used or designed to characterize the CeANG activity, radioactive emission and content. An overview of the measurements performed so far is presented here.

Introduction

The neutrino sector still shows unexplained discrepancies between observations and theoretical predictions. One of them is an electron antineutrino deficit in short-distance nuclear-reactor-based experiments, the so-called reactor antineutrino anomaly (RAA) [1]. A similar problem occurs in the source experiments GALLEX and SAGE [2]. This anomaly could be explained by a short-distance oscillation between the three known neutrino states and one hypothetical new state with a squared mass difference of ∆m²_new ≈ 1 eV² with respect to the known neutrinos [3]. Due to constraints on the Z boson coupling measured at LEP [4], this state would be sterile. The SOX experiment (Short baseline Oscillation in BoreXino) is designed to observe this oscillation, which has an expected length of order 1 m, using a large liquid scintillator detector, in addition to the possible interaction rate deficit. Borexino is a spherical liquid scintillator detector, with a fiducial mass of 280 tons, which provides vertex reconstruction with 15 cm precision and 5% energy resolution [5]. With a diameter of 8.5 m, the scintillator vessel allows the interaction rate to be probed as a function of the particle travel distance from the source. The ν̄_e are detected through the inverse beta decay (IBD) process. It has a relatively high cross-section and allows very good background rejection thanks to its distinctive signature: a time-correlated energy deposit from both the emitted positron and the neutron. The study of the flux from a 144Ce-144Pr ν̄_e generator (CeANG), deployed in a tunnel right under the center of the detector base, is the first phase of the project.

CeANG design

2.1. Source selection

The source design of a β decay antineutrino generator must comply with some challenging constraints. First, the source lifetime should be long enough to obtain statistically significant results on a 2-year schedule and to make the source manufacturing and transportation possible in a realistic amount of time. Second, the IBD process has a 1.806 MeV threshold, so the source ν̄_e spectrum must significantly extend beyond this threshold. This requirement contradicts the previous one, as it leads to a nucleus half-life shorter than a day. To circumvent this difficulty, the retained solution is a pair of nuclei, where the mother nucleus satisfies the first requirement and decays with a long half-life into a daughter nucleus having a high-energy decay.
Four pairs of nuclei fit this description: 42Ar-42K, 90Sr-90Y, 106Ru-106Rh and 144Ce-144Pr. The choice of the best-suited pair is based on the following considerations: first, ease of production at the necessary scale; then, the available energy of the daughter nucleus decay, to optimize the event rate. The 144Ce-144Pr pair is the best candidate, as 144Ce is a relatively abundant fission product from irradiated nuclear fuel and 144Pr exhibits an endpoint energy Q_β = 2.997 MeV, well beyond the IBD threshold [6]. With a 144Ce half-life of 285 days, the minimal activity for 18 months of SOX data taking is 3.7 PBq.

Source production

The production of the CeANG requires industrial-scale spent nuclear fuel reprocessing. Despite the ~5% cumulative fission yield of 144Ce in commercial reactor fuel, gathering enough material to manufacture the source requires reprocessing tons of spent fuel, depending on the irradiation history and cooling times. FSUE "Mayak" PA has demonstrated the ability to handle such radioactive materials and will be in charge of the whole production process. The Cerium extraction begins with the standard PUREX (Plutonium Uranium Redox EXtraction) radiochemical process, which separates Uranium and Plutonium from the lighter fission products by liquid-liquid extraction. Then, complex displacement chromatography of rare-earth elements allows Cerium to be isolated from the other fission products. The obtained Cerium residue undergoes calcination to form CeO2, which will be pressed and inserted into a specially designed stainless steel capsule. To satisfy the high activity requirement, the source will contain up to 5 kg of CeO2. The expected delivery date of the CeANG is the end of 2016.

Shielding

The 144Pr decay produces γ rays. Among them, the most difficult to handle is a 2.2 MeV ray with 0.7% intensity. Given the PBq activity of the source, shielding is required both for biological safety and to avoid source-induced backgrounds in the detector. The flux must be suppressed by a factor of 10^12. The CeANG must also fit into the 1 m wide Borexino tunnel, putting constraints on the shielding compactness. The shielding should therefore have the highest possible density. For these reasons, it is built from a tungsten-iron-nickel alloy with a density of ~18 g/cm³. The shielding thickness is 19 cm, leading to a hollow cylinder of diameter φ = 54 cm and height H = 60 cm, in which the stainless steel source capsule will be placed. The total mass of the shielding is 2.4 tons, making it the biggest tungsten piece ever built. It is currently being manufactured at Xiamen Honglu Inc. (China). It will be delivered by the end of the year, for testing the different mechanical interfaces (calorimeter, transportation cask) and for some handling training.

Transportation

Once the capsule has been inserted in the shielding, the transportation will take place from Mayak to LNGS. The route has been planned and managed according to national and international regulations by Areva TN (France). The transportation of the CeANG must use a specific certified cask. The Areva TN-MTR model, generally used for commercial nuclear fuel transportation, has been selected because of its capacity to handle the weight of the shielding. A custom basket has been designed to fit the shielding into the cask. The accreditations of the cask and basket as a source container for transport have been obtained, as well as the transportation authorizations.
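As a back-of-the-envelope check on the shielding requirement quoted above, the effective attenuation the 19 cm alloy wall must provide can be derived from the stated 10^12 suppression alone. The sketch below is only illustrative: it assumes simple exponential attenuation and ignores build-up factors, the 0.7% branching ratio and solid-angle effects, all of which contribute in a real design:

    import math

    suppression = 1e12     # required gamma-flux suppression (from the text)
    thickness_cm = 19.0    # W-Fe-Ni alloy wall thickness (from the text)
    density = 18.0         # alloy density in g/cm^3 (from the text)

    # exp(-mu * x) = 1/suppression gives the minimum effective mu,
    # if the whole factor came from attenuation in the wall alone:
    mu_required = math.log(suppression) / thickness_cm
    print(f"required effective mu: {mu_required:.2f} cm^-1")        # ~1.45 cm^-1
    print(f"equivalent mu/rho: {mu_required / density:.3f} cm^2/g")  # ~0.081 cm^2/g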
The selected route aims at minimizing border crossings, which are lengthy and tedious to organize for nuclear material transportation. The cask will be delivered by train to Saint Petersburg, then a dedicated boat will bring it to Le Havre. The final step will be done by truck through France and Italy. The whole transportation is expected to take, in any case, less than three weeks, allowing for less than 5% activity loss. The delivery should take place at LNGS in December 2016.

Measurements for CeANG characterization

4.1. Activity

To guarantee the sensitivity of the SOX experiment over a wide range of sterile ν oscillation parameters, one of the key points is to be able to predict the interaction rate in the detector as a function of the ν̄_e energy and distance to the source. The source activity is a key input for predicting this global interaction rate. In order to measure the activity, two calorimeters are being built within the SOX collaboration. They aim at two independent sub-percent activity measurements. The calorimeters are detailed in dedicated talks [7], [8].

β spectroscopy

To convert the thermal power, which is the raw result of the calorimeter measurements, into an activity, one should know the mean decay energy of the 144Ce-144Pr pair. It depends on the β spectrum shape. This shape also directly influences the ν̄_e interaction rate prediction, as the ν̄_e spectrum is estimated by converting the β spectrum. To test the parameter space favored by the RAA, the β spectrum shape should be determined at a 0.5% precision. This corresponds to a ν̄_e interaction rate precision ≤ 1%. The 144Ce and 144Pr spectra contain non-unique forbidden β transitions, for which the theoretical spectral shape is not known at this precision level. Furthermore, past measurements of these isotopes show huge discrepancies, at the scale of 10 to 15% [9] [10]. It is then necessary to make a new β spectrum measurement for the CeANG. Two β spectrometers using plastic scintillator are under development for this task. The first has a multiwire chamber to tag β rays by comparison to γ rays. It is developed by the Technische Universität München and has already been used for the spectrometry of 238U fission products [11]. The second is designed specifically for SOX by CEA. It aims both at maximizing the light collection and at performing fast and repeatable measurements. FSUE "Mayak" PA has provided a first batch of radioactive samples, which are used to check and tune the various apparatuses. Even if the source production process has evolved and the first samples are not representative of the final CeANG, other samples will be sent during the year of production. Due to the large lifetime difference, 144Ce and 144Pr are in secular equilibrium in the source samples. As the 144Pr ν̄_e spectrum is the only one contributing to the IBD interactions in Borexino, a chemical extraction will be performed to isolate 144Pr. The second spectrometer will then be used to make repeated hour-scale measurements of 144Pr to achieve the best precision on its shape.

Source impurity assessment

In addition to calorimetry and β spectroscopy, a whole batch of measurements will help to characterize the content of the CeANG, as the production and handling environments could leave traces of radioactive leftovers in the final source. γ spectroscopy is essential to assert the low background level needed by SOX. The CeANG must not show any gamma-emitting impurities with an activity higher than 10^-3 Bq/Bq(β).
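The transport-time budget quoted earlier in this section (three weeks, for less than 5% activity loss) follows directly from the 285-day 144Ce half-life; a two-line check:

    import math

    T_HALF_DAYS = 285.0          # 144Ce half-life quoted above

    def remaining_fraction(days):
        """Fraction of 144Ce activity left after `days` of decay."""
        return math.exp(-math.log(2.0) * days / T_HALF_DAYS)

    loss = 1.0 - remaining_fraction(21.0)   # three weeks of transport
    print(f"activity lost in 3 weeks: {100 * loss:.1f}%")   # ~5.0%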
This measurement is also performed on the source samples provided by FSUE "Mayak" PA, using a high-purity Germanium crystal detector. The first sample batch has been shown to be within this specification. To check the relative abundance of Cerium and to quantify the presence of long-lifetime impurities and possible neutron-emitting impurities, we also rely on α spectroscopy and inductively coupled plasma mass spectrometry (ICP-MS). The α spectroscopy tests with a gridded ionization chamber focused in particular on 244Cm and 241Am, and showed a neutron activity level below 10^-6 Bq/Bq(β), in agreement with the CeANG specifications. Both these nuclei could generate a problematic background, as the neutrons they emit can mimic an IBD event in the detector. With ICP-MS, again, no significant impurities were seen in the first tests, and the relative abundances of the Cerium isotopes 140, 142 and 144 were consistent with the sample origin.

Conclusion

The SOX project relies heavily on the feasibility of producing the radioactive sources and characterizing them. The CeANG is in many respects an atypical object, which has required a thorough process of design and validation for the associated tools. These early tasks are now giving way to the final production phases of the experiment preparation. During 2016, several characterization measurements will take place on radioactive samples and will be cross-checked to ensure a deployment with minimal trouble at the end of the year.
PP4R1 Accelerates Cell Growth and Proliferation in HepG2 Hepatocellular Carcinoma

Hepatocellular carcinoma (HCC), as the fifth most common cancer worldwide, has become the third leading cause of cancer-related deaths. It is reported that protein phosphatase 4 (PP4) is an essential protein for nucleation, growth, and stabilization of microtubules in centrosomes/spindle bodies during cell division. Besides, previous studies have identified protein phosphatase 4 regulatory subunit 1 (PP4R1) as a constitutive interaction partner of the PP4 catalytic subunit PP4C. The PP4C-PP4R1 PP4 complex plays a role in dephosphorylation, regulation of histone acetylation, and NF-κB activation. However, little is known about the pathological functions of PP4R1 in human cancers. Thus, in order to investigate how PP4R1 functions in human HCC, two common HCC cell lines, HepG2 and SMMC-7721, were employed and transduced with recombinant lentivirus expressing PP4R1 short hairpin RNA. Compared with the controls, the cells treated with Lv-shPP4R1 showed a significant decrease in cell proliferation and colony formation. The results of flow cytometry showed that the knockdown of PP4R1 caused HepG2 cell arrest at the G2/M phase of the cell cycle. Furthermore, the transduction of Lv-shPP4R1 into HepG2 cells led to the inactivation of two major mitogen-activated protein kinase signaling cascades, p38 and c-Jun N-terminal kinase (JNK), indicating that PP4R1 could promote cell proliferation, possibly through the p38 and JNK pathways. In summary, this study highlights the crucial role of PP4R1 in promoting HCC cell growth, which might help to elucidate the pathological mechanism of HCC.

Introduction

Hepatocellular carcinoma (HCC), a common primary liver cancer which is the fifth most common cancer worldwide, has become the third leading cause of cancer-related deaths. 1,2 HCC is a complex and heterogeneous malignancy that arises in the context of progressive underlying liver dysfunction. Therefore, no single dominant or pathognomonic molecular mechanism exists in HCC. Given the asymptomatic nature of early disease and the limited use of surveillance, the majority of HCC cases present at advanced or incurable stages. Even among patients with liver cancer detected earlier, there are still very few candidates for surgery because of coexisting disease. In addition, the prognosis of advanced-stage HCC is poor, with an overall survival rate of <5%. 3 Besides, 5-year recurrence rates of over 70% have been reported despite surgical or locoregional therapies in earlier stages. 4 Thus, new treatment modalities must be pursued. It has been suggested that many physiopathologic processes are controlled through the balance between protein phosphorylation and dephosphorylation. Protein phosphatases (PPs) are the primary effectors of dephosphorylation, and can be grouped into three main classes based on sequence, structure, and catalytic function. The largest class of PPs is the phosphoprotein phosphatase family, comprising PP1, PP2A, PP2B, and PP4, among others. PP4, which belongs to the PP2A family, is a protein complex comprised of a catalytic subunit PP4C plus regulatory subunits.
5 PP4 has been reported to be involved in many processes such as microtubule organization at centrosomes, resistance to apoptosis induced by ultraviolet irradiation and cisplatin, and recovery from the DNA damage checkpoint, [5][6][7][8] maturation of spliceosomal small nuclear ribonucleic proteins (snRNPs), 9 DNA repair, 10 tumor necrosis factor-α signaling, activation of c-Jun N-terminal kinase (JNK) mitogenactivated protein kinase (MAPK) 8, 11 regulation of histone acetylation, 12 NF-κB activation, 13 and division. A major form of PP4 in organisms from yeast to humans comprises PP4C in complex with a core regulatory subunit R2, and a variable regulatory subunit R3. 14 Moreover, serine/threonine-protein phosphatase 4 regulatory subunit 1 (PP4R1), a unique noncatalytic regulatory phosphatase subunit, was first identified as a constitutive interaction partner of PP4C. 15 The PP4C-PP4R1 PP4 complex plays a role in the dephosphorylation and regulation of HDAC3. 12 Furthermore, PP4R1 was also identified as a negative regulator of NF-κB activity in T lymphocytes. 16 It was demonstrated that PP4R1 formed part of a distinct PP4 holoenzyme and directed PP4C activity to dephosphorylate and inactivate NF-κB signaling. At last, a recent study identified a novel mechanism that PP4R1 targeted TRAF2 and TRAF6 to mediate the inhibition of the NF-κB pathway. 17 However, the functional role of PP4R1 in human cancers remains unclear yet. In this study, a lentivirus vector was successfully constructed to introduce PP4R1 short hairpin RNA (shRNA) into two human HCC cell lines HepG2 and SMMC-7721. Lentivirus-mediated silencing of PP4R1 inhibited the proliferation and colony-forming ability of HepG2 cells, and induced cell cycle arrest in the G 2 /M phase, which might elucidate the pathological mechanism of HCC. lentivirus-mediated shrna knockdown of PP4r1 expression The shRNA sequence 5′-GCTTGAATCTCGGTGTC TTTCCTCGAGGAAAGACACCGAGATTCAAGCT TTTTT-3′ was designed targeting human PP4R1 gene (NM_001042388.2). The negative control shRNA was 5′-GCGGAGGGTTTGAAAGAATATCTCGAGAT ATTCTTTCAAACCCTCCGCTTTTTT-3′. The doublestranded DNA fragments were formed in the annealing reaction system. The pFH-L vector (Shanghai Hollybio, Shanghai, People's Republic of China) was linearized by NheI and PacI restriction enzyme digestion. Pure linearized vector fragments and double-stranded DNA fragments were collected and combined together during a 16-hour reaction. Each DNA was used to transform the Escherichia coli strain DH5α and was purified with a plasmid purification kit (Qiagen, Valencia, CA, USA). The ligation product was confirmed by polymerase chain reaction (PCR) and sequencing. The generated plasmids were named pFH-Lv-shPP4R1 or Lv-shCon. Recombinant lentiviral vectors and packaging pHelper plasmids (pVSVG-I and pCMV∆R8.92) (Shanghai Hollybio) were cotransfected into 293T cells. Supernatants containing lentivirus expressing PP4R1 shRNA (Lv-shPP4R1) or control shRNA (Lv-shCon) were harvested 48 hours after transfection. The lentiviruses were then purified via ultracentrifugation, and the viral titer was determined by counting green fluorescent protein (GFP)-positive cells. The viral titer was determined by the method of end point dilution through counting the numbers of infected GFP-positive cells at 100× magnification under a fluorescence microscope (Olympus, Tokyo, Japan). Titer in IU/mL = (the numbers of green fluorescent cells) × (dilution factor)/ (volume of virus solution). 
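The end-point dilution titer formula quoted above is simple enough to encode directly. The sketch below applies it, together with the MOI used for infection later in the Methods; the numerical values and helper names are illustrative, not taken from the paper:

    def lentiviral_titer_iu_per_ml(gfp_positive_cells, dilution_factor, volume_ml):
        """End-point dilution titer, as in the text:
        Titer (IU/mL) = GFP+ cells x dilution factor / volume of virus solution."""
        return gfp_positive_cells * dilution_factor / volume_ml

    def volume_for_moi(moi, n_cells, titer_iu_per_ml):
        """Volume of viral stock needed to infect n_cells at a given MOI."""
        return moi * n_cells / titer_iu_per_ml

    # Hypothetical counts for illustration: 150 GFP+ cells at a 1e5 dilution,
    # scored in 0.1 mL of diluted virus solution.
    titer = lentiviral_titer_iu_per_ml(150, 1e5, 0.1)
    print(f"titer: {titer:.2e} IU/mL")                       # 1.50e+08 IU/mL
    # Infecting 5e4 HepG2 cells per well at MOI = 10, as in the text:
    print(f"stock needed: {volume_for_moi(10, 5e4, titer) * 1000:.1f} uL")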
For lentivirus infection, HepG2 cells (50,000 cells/well) were seeded in six-well plates and transduced with Lv-shPP4R1 or Lv-shCon at a multiplicity of infection of 10. Infection efficiency was determined by counting GFP-positive cells under a fluorescence microscope 96 hours after infection, and the knockdown efficiency of PP4R1 was evaluated by real-time quantitative PCR (qPCR) and Western blot analysis.

MTT assay

The effect of PP4R1 on cell viability was analyzed using the MTT assay, based on growth curves of HepG2 cells in vitro. Briefly, cells were reseeded in 96-well plates at a concentration of 2.25x10^4/mL in 200 μL/well after 4 days of lentivirus infection. Cells were then further cultured in this manner for 1-5 days. Four hours before the termination of culture, MTT (5 mg/mL; Sigma) was added at a volume of 20 μL/well. Afterwards, the entire supernatant was discarded, and solubilization solution (0.01 M HCl, 10% SDS, 5% isopropanol) was added at a volume of 100 μL/well and incubated in an air bath shaker at 37°C for 10 minutes. The absorbance at 595 nm of each well was determined using the Epoch Microplate Spectrophotometer (BioTek, CA, USA). Statistical analysis was performed using Prism 5 for Windows software (GraphPad Software, San Diego, CA, USA). Statistically significant differences between groups were evaluated using the Student's t-test. The results were considered statistically significant when P<0.05 was obtained.

Suppression of PP4R1 by shRNA in HepG2 cells and SMMC-7721 cells

To explore the role of PP4R1 in human HCC, lentivirus-mediated shRNA targeting PP4R1 was used to silence its endogenous expression in HepG2 cells and SMMC-7721 cells. In parallel, a negative control (Lv-shCon) and wild-type (Con) cells were run. The infection efficiency of the recombinant lentivirus was above 80%, as revealed by fluorescence microscopy in both HepG2 cells and SMMC-7721 cells (Figure 1A). RT-qPCR assay showed that Lv-shPP4R1 could significantly downregulate PP4R1 gene expression in HepG2 cells, while no knockdown effect was observed following Lv-shCon infection (Figure 1B). Western blot analysis also confirmed the silencing of PP4R1 protein expression by Lv-shPP4R1 (Figure 1C). These results revealed that the recombinant lentivirus could be efficiently transduced into HepG2 cells and SMMC-7721 cells, and silenced PP4R1 expression.

PP4R1 silencing inhibited the proliferation of HepG2 cells and SMMC-7721 cells

The effects of PP4R1 knockdown on HCC cell viability and proliferation were further investigated. MTT assay showed that PP4R1 silencing markedly inhibited the viability of HepG2 cells and SMMC-7721 cells (Figure 2A). The proliferative rate of the Lv-shPP4R1 group started to drop on day 3 as compared with the Lv-shCon and control groups, and the gap reached its maximum on day 5 (P<0.001). A colony formation assay was conducted to gain insight into the long-term effect of PP4R1 on cell proliferation. The colony-forming efficiency of the Lv-shPP4R1 group was much lower than that of the Lv-shCon and control groups (Figure 2B). We can see from Figure 2C that the size of single colonies was reduced and the number of colonies was diminished in HepG2 cells following Lv-shPP4R1 infection.

PP4R1 silencing blocked the cell cycle progression of HepG2 cells

Next, we performed flow cytometry assay to determine whether the pro-proliferative effect of PP4R1 in the HepG2 cell line is mediated via cell cycle control.
Suppression of PP4R1 by shRNA in HepG2 and SMMC-7721 cells
To explore the role of PP4R1 in human HCC, lentivirus-mediated shRNA targeting PP4R1 was used to silence its endogenous expression in HepG2 and SMMC-7721 cells. In parallel, negative control (Lv-shCon) and wild-type (Con) cells were run. The infection efficiency of the recombinant lentivirus was above 80%, as revealed by fluorescence microscopy, in both HepG2 and SMMC-7721 cells (Figure 1A). RT-qPCR showed that Lv-shPP4R1 significantly downregulated PP4R1 gene expression in HepG2 cells, while no knockdown effect was observed following Lv-shCon infection (Figure 1B). Western blot analysis also confirmed the silencing of PP4R1 protein expression by Lv-shPP4R1 (Figure 1C). These results revealed that the recombinant lentivirus could be efficiently transduced into HepG2 and SMMC-7721 cells and silenced PP4R1 expression.

PP4R1 silencing inhibited the proliferation of HepG2 and SMMC-7721 cells
The effects of PP4R1 knockdown on HCC cell viability and proliferation were further investigated. MTT assay showed that PP4R1 silencing markedly inhibited the viability of HepG2 and SMMC-7721 cells (Figure 2A). The proliferative rate of the Lv-shPP4R1 group began to drop on day 3 compared with the Lv-shCon and control groups, and the gap reached its maximum on day 5 (P<0.001). A colony formation assay was conducted to gain insight into the long-term effect of PP4R1 on cell proliferation. The colony-forming efficiency of the Lv-shPP4R1 group was much lower than that of the Lv-shCon and control groups (Figure 2B). As shown in Figure 2C, the size of individual colonies shrank and the number of colonies diminished in HepG2 cells following Lv-shPP4R1 infection.

PP4R1 silencing blocked the cell cycle progression of HepG2 cells
Next, we performed flow cytometry to determine whether the pro-proliferative effect of PP4R1 in the HepG2 cell line is mediated via cell cycle control. As shown in Figure 3A, the cell cycle distribution of HepG2 cells was visibly changed after PP4R1 knockdown. Compared with the Lv-shCon and control groups, the percentage of cells in the G0/G1 phase declined, while the cell population in the G2/M phase was elevated, in the Lv-shPP4R1 group (Figure 3B). Thus, PP4R1 might regulate cell cycle progression to control cell growth.

PP4R1 knockdown inhibited the activity of the SAPK/JNK and p38 pathways
To gain insight into how PP4R1 knockdown alters the phenotype of HCC cells, we examined the expression of intracellular signaling molecules using a PathScan® Intracellular Signaling Array Kit (Cell Signaling Technology). As shown in Figure 4A, compared with the Lv-shCon group, PP4R1 knockdown markedly downregulated the phosphorylated levels of p38 (Thr180/Tyr182) and SAPK/JNK (Thr183/Tyr185). To verify these results further, Western blot analysis confirmed that, compared with the Lv-shCon group, PP4R1 knockdown downregulated the phosphorylated level of p38 (Figure 4B). All these results revealed that PP4R1 silencing could suppress the p38 and SAPK/JNK pathways.

PP4R1 knockdown had no effect on the activity of the NF-κB pathway
To investigate whether PP4R1 knockdown affects the activity of NF-κB in HCC cells, we separated the nucleus from the cytoplasm and detected the expression of p65 in each fraction separately by Western blot. The results show that p65 was not expressed in the nucleus, and that PP4R1 knockdown downregulated the expression of p65 (Figure 5).

Discussion
Hepatocarcinogenesis is considered to be a process originating from hepatic stem cells (although the role of liver stem cells as the HCC cells of origin is debated) 18 or mature hepatocytes, evolving from chronic liver disease driven by oxidative stress, chronic inflammation, and cell death followed by unrestricted proliferation/restricted regeneration and permanent liver remodeling. In recent years, improved knowledge of the oncogenic processes and signaling pathways that regulate tumor cell proliferation, differentiation, angiogenesis, invasion, and metastasis has contributed to the identification of several potential therapeutic targets, which has driven the development of molecularly targeted therapies. Herein, we identified PP4R1 as an essential player in HCC cell growth in vitro, which might serve as a potential therapeutic target in HCC. PP4 is an essential protein for nucleation, growth, and stabilization of microtubules at centrosomes/spindle bodies during cell division. 19 In this study, we found that knockdown of PP4R1 by lentivirus-mediated stable gene silencing in HepG2 and SMMC-7721 cells caused a significant reduction in cell viability and proliferation. A previous study showed that silencing of PP4C in HEK293 cells led to cell cycle arrest at the M phase, with some cells displaying aberrant chromosomal organization and loss of microtubules near the centrosomes. 20 Thus, to examine whether PP4R1 knockdown affects cell cycle progression, we conducted flow cytometry in HepG2 and SMMC-7721 cells, and found that PP4R1 silencing caused a block at the G2/M phase of the cell cycle, similar to the outcome of PP4C deficiency.
Furthermore, transduction of Lv-shPP4R1 into HepG2 cells led to inactivation of two major MAPK signaling cascades, p38 and JNK, indicating that the p38 and JNK pathways might be involved in the inhibition of cell growth by PP4R1. Our ongoing study should further validate the antiapoptotic role of PP4R1 in HCC cells. In addition, it has been suggested that PP4R1 targets TRAF2 and TRAF6 to mediate inhibition of the NF-κB pathway. 17 In our study, we found that p65 was not expressed in the nucleus and that PP4R1 knockdown downregulated the expression of p65, suggesting that the NF-κB pathway might not contribute to the cell growth inhibition caused by PP4R1 silencing in HCC cells. In conclusion, this is the first report on the involvement of PP4R1 in HCC. Moreover, the inhibition of HCC cell growth by PP4R1 silencing could be linked to the induction of cell cycle arrest as well as apoptosis. Our findings have led us to believe that further research on PP4R1 will increase the understanding of the pathological mechanism of HCC.

Disclosure
The authors report no conflicts of interest in this work.
2017-06-21T08:10:36.677Z
0001-01-01T00:00:00.000
{ "year": 2015, "sha1": "edba9cfc9cddc5419728b6c1727c7b569adf4b9f", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=26393", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fa7ac2e823663fecfe4201180ee5e2f31225538b", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
55985867
pes2o/s2orc
v3-fos-license
A Single Sided Edge Marking Method for Detecting Pectoral Muscle in Digital Mammograms In the computer-assisted diagnosis of breast cancer, the removal of the pectoral muscle from mammograms is very important. In this study, a new method, called the Single-Sided Edge Marking (SSEM) technique, is proposed for the identification of the pectoral muscle border in mammograms. 60 mammograms from the INbreast database were used to test the proposed method. The results obtained were compared in terms of false positive rate, false negative rate, and sensitivity using the ground truth values pre-determined by radiologists for the same images. Accordingly, it has been shown that the proposed method can detect the pectoral muscle border with an average sensitivity of 95.6%. Keywords: INbreast database; pectoral muscle extraction; segmentation; mammogram

I. INTRODUCTION
Mammography is one of the basic and effective imaging techniques used for the detection and diagnosis of breast cancer. Mammograms are usually obtained in two forms, the mediolateral oblique (MLO) view and the craniocaudal (CC) view [1]. These two views are used to detect abnormal structures within the breast. However, because of the heterogeneity of malignancies and the overlapping of dense fibroglandular tissue, it may be difficult to read and interpret mammograms in two-dimensional projection images [2]. Human factors such as fatigue and the limitations of the human eye may also cause misinterpretations. In addition, mammography alone is not sufficient for a final diagnosis, and a biopsy may also be needed. However, only a small percentage of breast biopsies are shown to be cancerous (only 15-30% in the case of [3]). Therefore, a variety of computer-aided diagnostic systems are employed as a second reader for radiologists. Clinical trials indicate that the number of cancers detected by computer-aided diagnosis (CAD) systems is increased by about 10%, which is comparable to double reading by two radiologists [4]. MLO-view mammograms often contain the pectoral muscle. Since the pectoral muscle shows similar features to abnormal structures such as masses, it is confused with suspicious regions in CAD studies and makes accurate diagnosis difficult [5]. For this reason, the removal of the pectoral muscle from mammograms is important for accurate diagnosis.
The automatic removal of the pectoral muscle from MLO-view mammograms is a necessary step. At the same time, this procedure is quite difficult because properties such as the shape, size, and density of the pectoral muscle differ from mammogram to mammogram. Studies on the detection of the pectoral muscle in digital mammograms can be categorized into four main groups: density-based approaches [6,7], line-based detection approaches [8,9], wavelet-based segmentation [10,11] and statistical methods [10,12]. Authors in [7] proposed a new adaptive method for the detection of the pectoral muscle. In this method, the pectoral margin position and orientation are first estimated with a suitable straight line. This line is then smoothed using the repeated "cliff detection" algorithm to draw the pectoral boundary more accurately. In [13], the authors proposed an approach for segmentation at the pectoral muscle boundary based on the structure tensor. Experimental results indicate that the proposed method distinguishes the pectoral muscle exactly with the segments [13]. In [14], the authors conducted a study based on the positional characteristics of the pectoral muscle. They combined iterative Otsu thresholding and mathematical morphological processing to find the rough edge of the pectoral muscle, and applied multiple regression analysis to this rough border to obtain the correct segmentation of the pectoral muscle. In [15], the authors combined median filtering, a morphological erosion process, the Sobel edge detector and thresholding to find the rough breast boundary, and then used the GVF Snake algorithm with gradient map adjustment to obtain a sensitive breast border. Authors in [16] used thresholding to identify the pectoral muscle, connected component labeling to identify and remove the connected pixels outside the breast region, and edge detection to identify the edge of the full breast. They used global thresholding for segmentation and removal of the pectoral muscle. As a result, they noted that they effectively removed Gaussian and impulse noise and, in general, achieved 90.06% accuracy.

In this study, the pectoral muscle border is first determined by a new method, called Single-Sided Edge Marking (SSEM) by the authors, based on the geometrical properties of the pectoral muscle and neighborhood relations, in order to extract the pectoral muscle from mammograms. The points found are then reinforced with morphological operators and a rough pectoral muscle region is obtained. The boundary of the candidate pectoral muscle area is reconstructed, and missing points along this boundary are completed by the linear interpolation method. The results obtained were compared with the ground-truth data provided by radiology specialists for the same images in terms of false positive rate, false negative rate and sensitivity. Accordingly, it has been shown that the proposed method can detect the pectoral muscle border with a sensitivity of 95.6%.
II. MATERIAL
In this study, 60 mammograms from the INbreast [17] database were used to test the methodology. The INbreast database consists of 115 cases (410 images), of which 90 cases are from women with both breasts (4 images per case) and 25 cases are from mastectomy patients (2 images per case). Several types of lesions (masses, calcifications, asymmetries, and distortions) are included. Accurate contours made by specialists are also provided in XML format.

III. PREPROCESSING
Before performing pectoral muscle boundary detection, noise reduction and image enhancement [18][19][20][21][22] are necessary. The flowchart of all steps followed in the current work is presented in Figure 1. Firstly, the image is resized to 512×512 and the pixel intensity values are reduced to 256 levels to decrease processing complexity. All images are then arranged so that the pectoral muscle region is in the upper left corner. To achieve this, a mirror image of the mammogram is taken if the pectoral muscle region is located at the upper right corner; no additional procedure is performed on mammograms that already have the pectoral muscle region in the upper left corner (Figure 2(b)). The mammogram image is then converted to a binary image using an experimentally obtained threshold value σ = 0.09 (Figure 2(c)). Since the largest region in the binary image is the breast region, a filter is applied so that only this area remains in the image. The resulting image is eroded using a two-pixel structuring element and then subtracted from the image obtained in the previous step; thus, the rough border of the breast region is obtained, as seen in Figure 2(d). The resulting image is multiplied by the original image, so that the result contains only the breast region. Then, a 3×3 median filter and the biorthogonal wavelet transform are used to remove noise and enhance the image. Finally, the image is enhanced with adaptive histogram equalization and the noise is reduced by the anisotropic diffusion method (Figure 2(e)). The images related to the preprocessing operations are shown in Figures 2(a)-(e). The biorthogonal wavelet and anisotropic diffusion methods used for preprocessing are briefly described below.

 Biorthogonal Wavelet Transform: Wavelets are used in many areas, including noise reduction in image processing, and there are many wavelet families [23]. The biorthogonal wavelet representation has many advantages compared to the orthogonal wavelet representation [24]. For example, sub-band images do not change and have no overlap under translation, and smooth symmetric and asymmetric wavelet functions can be used to reduce the reflection of signal extensions and border effects [24]. Because of these advantages, a biorthogonal wavelet (biorthogonal 3.1) has been used for noise reduction in this work.

 Anisotropic Diffusion Method: Anisotropic diffusion is a technique aiming at reducing image noise while preserving important image details, such as lines and edges, that are important for the interpretation of the image. The diffusion equation can be formulated as [14]

∂I/∂t = div(c(x, y, t) ∇I),

where c(x, y, t) is the diffusion coefficient, which controls the rate of diffusion and is usually chosen as a function of the image gradient so that edges in the image are preserved.
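The following Python sketch outlines the preprocessing pipeline described above, using scikit-image, scipy and PyWavelets as stand-in libraries (the paper does not name its implementation, so the library choices and the function name preprocess_mammogram are assumptions; the stated parameters, the 512×512 size, σ = 0.09 threshold, two-pixel erosion element, 3×3 median filter and biorthogonal 3.1 wavelet, are taken from the text, and the final anisotropic diffusion step is omitted for brevity).

```python
import numpy as np
import pywt
from scipy import ndimage
from skimage import exposure, measure, morphology, transform

def largest_region_mask(binary):
    """Keep only the largest connected component (assumed to be the breast)."""
    labels = measure.label(binary)
    if labels.max() == 0:
        return binary
    largest = max(measure.regionprops(labels), key=lambda r: r.area).label
    return labels == largest

def wavelet_denoise(img, wavelet="bior3.1", level=2):
    """Soft-threshold detail coefficients of a biorthogonal 3.1 wavelet."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745  # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(img.size))
    coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(coeffs, wavelet)

def preprocess_mammogram(img, muscle_on_right):
    """img: 2-D float array in [0, 1]; returns enhanced image and rough breast border."""
    img = transform.resize(img, (512, 512), anti_aliasing=True)
    if muscle_on_right:                       # mirror so the muscle sits top-left
        img = np.fliplr(img)
    breast = largest_region_mask(img > 0.09)  # experimental threshold sigma = 0.09
    # Rough breast border: mask minus its erosion with a two-pixel element.
    border = breast & ~morphology.binary_erosion(breast, np.ones((2, 2), bool))
    img = img * breast                        # keep only the breast region
    img = ndimage.median_filter(img, size=3)  # 3x3 median filter
    img = wavelet_denoise(img)
    img = exposure.equalize_adapthist(np.clip(img, 0, 1))
    return img, border
```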
IV. THE PROPOSED METHOD

A. Pectoral Muscle Characteristics
The method proposed in this study utilizes some characteristic properties of the pectoral muscle:
 The pectoral muscle region is a roughly triangular area.
 The pectoral muscle border is approximately a straight line.
 There is a certain density change between the pectoral muscle and the breast region.
 The pectoral muscle region is roughly homogeneous.

The SSEM method is based on the geometric characteristics of mammograms and the intensity difference between muscle and breast tissue. Regarding the geometrical properties of mammograms, the pectoral muscle is a roughly triangular region that narrows from top to bottom. Since the pectoral muscle region is brought to the upper left corner during preprocessing, the edge detection process is performed at an angle of about 30° to 45°, from right to left and from top to bottom.

B. SSEM Details
First, three different threshold values are defined for use in the edge marking process. The first is the threshold value (φ) used to exclude non-mammogram regions in the images; this value is taken as 5. The second (α) expresses how similar the intensity values of neighboring pixels must be to each other and is taken as 1 in this study. The third is the threshold value used for the density difference between the pectoral muscle and the breast region; this value is also set to 1 in this study. Keeping this value small ensures that the pectoral muscle border is marked with as many points as possible. Next, the mammogram image I is scanned pixel by pixel, I(i,j) (i = 2, 3, …, M; j = 2, 3, …, N-1), where M is the number of rows of the matrix I and N is the number of columns. Each selected pixel is evaluated together with its neighbors located at a distance of two pixels, using conditions (2) built from the three thresholds; when a pixel satisfies these conditions, the corresponding pixel in the output image I_new is set to 255. As a result of this evaluation, a new mammogram image I_new is obtained with the border of the pectoral muscle marked. The pixel intensity value 255 was chosen for marking the pectoral muscle border because this value is less common in the mammogram and easier to distinguish in binary images. A graphical representation of the SSEM method and a sample application are presented in Figure 3.
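A minimal Python sketch of this single-sided marking pass is given below. Because the exact marking conditions of (2) are garbled in the source, the neighborhood test shown here only approximates them from the description (homogeneity toward the two-pixel-distant left and upper neighbors under α, a density jump toward the breast side above the difference threshold, and exclusion of background below φ), so it should be read as an interpretation rather than the authors' exact rule.

```python
import numpy as np

PHI = 5      # background-exclusion threshold (phi)
ALPHA = 1    # intra-muscle similarity threshold (alpha)
DELTA = 1    # muscle-to-breast density-difference threshold

def ssem_mark(img: np.ndarray) -> np.ndarray:
    """Scan pixel by pixel and mark candidate pectoral-muscle border pixels as 255.

    img: 2-D uint8 mammogram with the pectoral muscle in the upper-left corner.
    The neighborhood conditions below are an assumed reading of the paper's Eq. (2).
    """
    m, n = img.shape
    out = img.copy()
    for i in range(2, m):
        for j in range(2, n - 1):
            p = int(img[i, j])
            if p <= PHI:                        # skip non-mammogram background
                continue
            left = int(img[i, j - 2])           # two-pixel-distant muscle-side neighbor
            up = int(img[i - 2, j])
            right = int(img[i, j + 1])          # breast-side neighbor
            homogeneous = abs(p - left) <= ALPHA and abs(p - up) <= ALPHA
            density_jump = (p - right) > DELTA  # intensity drops toward the breast
            if homogeneous and density_jump:
                out[i, j] = 255                 # mark the border pixel
    return out
```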
C. Determination of Candidate Pectoral Muscle and Border Area
In order to find the candidate pectoral muscle region from the new mammogram image marked with the pectoral muscle boundary by the SSEM method, two experimentally obtained variables, named Area_min and Area_max, were used in this study. Area_min is the smallest possible pectoral muscle area and Area_max is half of the largest possible mammogram region. Figure 4 shows sample mammogram images for Area_max and Area_min. When the SSEM method is used, the pixels at the border of the candidate pectoral muscle are marked as 255. Thus, the resulting SSEM image is converted to a binary image with a threshold value of 0.99 (Figure 5(b)) and the pectoral muscle border is strengthened by morphological operators (Figure 5(c)). The binarized image with the strengthened pectoral muscle border is then cleaned of regions smaller than Area_min. The cleaned image is added to the previously obtained rough breast region. Then, to create the pectoral muscle area, a line is drawn starting from the upper left corner and covering the pectoral muscle corner from top to bottom and left to right (Figure 5(c)). The area between these lines and the border of the candidate pectoral muscle region is filled. The two regions with the largest area are selected to detect the pectoral muscle region (Figure 5(d)). The region that is located at the upper left of the other region, bigger than Area_min and smaller than Area_max, is selected as the pectoral muscle region (Figure 5(e)). Sometimes residual defects remain at the border of the candidate pectoral muscle region after morphological processing (Figure 6(b)). To remove these defects, the boundary of the pectoral muscle region was traced, similarly to the SSEM method, using the pectoral muscle location information at an angle of about 30°-45° from top to bottom and left to right, and the muscle border was created. Finally, a linear interpolation method was used to complete the missing points on the obtained boundary, and the result was regarded as the final muscle border (Figure 6(c)). The final muscle border obtained for a sample mammogram and the muscle border plotted using the ground truth values given by the experts for the same mammogram are presented in Figure 7. The flowchart of all the procedures used for boundary detection and for determining the candidate pectoral muscle region is presented in Figure 8.

V. EXPERIMENTAL STUDIES
The proposed algorithm was tested on 60 mammograms from the INbreast database. The obtained results were evaluated in terms of false positive rate (FPR), false negative rate (FNR) and sensitivity against the ground-truth data given by the radiologists. A false positive (FP) is a pixel found by the algorithm as part of the pectoral muscle region but marked as a different tissue by the radiologist; a false negative (FN) is a pixel marked by the radiologist as pectoral muscle but detected as a different tissue by the algorithm; and a true positive (TP) is a pixel determined by the radiologist to be pectoral muscle that is also found as part of the pectoral muscle region by the algorithm. FPR, FNR and sensitivity are defined by (2)-(4) and are presented in Table I.
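Equations (2)-(4) themselves did not survive extraction; the sketch below therefore uses the standard definitions consistent with the text (FPR = FP/(FP+TN), FNR = FN/(FN+TP), sensitivity = TP/(TP+FN)) to compare a predicted muscle mask against a radiologist's ground-truth mask, which is a plausible but assumed reading of the paper's evaluation.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """pred/truth: boolean masks, True where a pixel is labeled pectoral muscle."""
    tp = np.count_nonzero(pred & truth)
    fp = np.count_nonzero(pred & ~truth)
    fn = np.count_nonzero(~pred & truth)
    tn = np.count_nonzero(~pred & ~truth)
    return {
        "FPR": fp / (fp + tn) if fp + tn else 0.0,          # Eq. (2), assumed
        "FNR": fn / (fn + tp) if fn + tp else 0.0,          # Eq. (3), assumed
        "Sensitivity": tp / (tp + fn) if tp + fn else 0.0,  # Eq. (4), assumed
    }

# Toy example with 4x4 masks (not data from the paper):
pred = np.array([[1, 1, 0, 0]] * 4, dtype=bool)
truth = np.array([[1, 0, 0, 0]] * 4, dtype=bool)
print(segmentation_metrics(pred, truth))
```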
Fig. 8. Flowchart of candidate pectoral muscle region and border determination.

As shown in Table I, the proposed method was able to detect the pectoral muscle region with an average sensitivity of 95.6% for the 60 mammogram images. Also, in order to show the performance of the proposed method on different mammogram images, pectoral muscle boundary detection results for four different mammogram images are presented in Figure 9.

VI. CONCLUSIONS
In this study, a new method has been proposed to determine the pectoral muscle and its borders in digital mammograms. In this method, the pectoral muscle border is determined using a technique based on the geometric features of the pectoral muscle and neighborhood relations. The boundaries of the rough pectoral muscle region were strengthened and adjusted, and the linear interpolation method was used to complete the missing points within this boundary and to establish border continuity. The proposed method was tested using 60 mammograms from the INbreast database. According to the results obtained, it has been shown that the proposed method can detect the pectoral muscle margin with an average sensitivity of 95.6%.

Fig. 1. Mammogram preprocessing flowchart.
Fig. 3. Graphical representation of the SSEM method and a sample application: (a) graphical representation of border detection by the SSEM method; (b) sample pixel intensity values of a 6×18 muscle and breast area; (c) the new mammogram image with the pectoral muscle border marked as the result of the SSEM method.
Fig. 5. (a) The original mammogram; (b) the binary image of the SSEM result; (c) the mammogram after morphological processing; (d) the mammogram with the two largest areas selected; (e) the mammogram with the rough pectoral muscle area determined.
Fig. 7. Muscle border (red) plotted using ground truth values given by experts on a sample mammogram, and the muscle border (green) found by the proposed method.
Fig. 9. (a, c, e, g) Original mammograms; (b, d, f, h) muscle boundaries (red) drawn using ground truth values given by experts and muscle boundaries (green) found by the proposed method.
2018-12-06T21:59:00.663Z
2018-02-20T00:00:00.000
{ "year": 2018, "sha1": "2bf47e1ed2fd494c270816c9e1cb7f13f6a126ad", "oa_license": "CCBY", "oa_url": "http://etasr.com/index.php/ETASR/article/download/1719/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2bf47e1ed2fd494c270816c9e1cb7f13f6a126ad", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Computer Science" ] }
213651027
pes2o/s2orc
v3-fos-license
PDGF-BB homodimer serum level – a good indicator of the severity of alcoholic liver cirrhosis Introduction. Liver cirrhosis is a chronic disease in which progressive fibrosis is noted. This process leads to changed architectonics of the liver parenchyma and the appearance of regenerative nodules, all of which are caused by pathological activation of the hepatic stellate cells. This process is enhanced on a molecular level by many cytokines, with platelet-derived growth factors (PDGFs) playing the key role. Objective. The aim of the study was to assess serum concentrations of the PDGF bioactive dimers (PDGF-AA, PDGF-BB and PDGF-AB) in patients with alcoholic liver cirrhosis, and to correlate them with the stage of the disease. Materials and method. 64 patients with alcoholic cirrhosis and a control group of 16 healthy individuals were analysed. Liver cirrhosis was determined based on the clinical picture, the patients' history of alcohol consumption, laboratory findings and abdominal ultrasonography. The serum PDGF-AA, PDGF-BB and PDGF-AB concentrations were determined using ELISA kits.

INTRODUCTION
In chronic liver cirrhosis, progressive fibrosis takes place, leading to impaired architectonics of the liver parenchyma and the appearance of regenerative nodules. Different diseases and toxins can be responsible for causing chronic liver damage, among which alcohol is one of the most important, followed by infections with hepatotropic viruses [1]. Europe is considered to be one of the geographical regions with the highest alcohol consumption; although consumption decreased in the 1990s, it increased again and finally stabilised at high levels at the beginning of the 21st century, depending on the country [2]. The key process in the progression of liver fibrosis, leading in time to liver cirrhosis, involves the pathological activation of hepatic stellate cells (HSCs) [3]. Activated HSCs change in morphology and behaviour, transforming from a neuronal-like to a myofibroblast-like morphology, and start to produce extracellular matrix (ECM). The activated stellate cells also produce, among other substances, tissue inhibitors of metalloproteinases (TIMPs), which in turn restrain matrix metalloproteinase (MMP) activity and further enhance ECM deposition in the liver. The ECM contains mainly collagen type I, the overproduction of which is a direct and basic cause of liver fibrosis [4]. This process is enhanced on a molecular level by many pro-inflammatory, proangiogenic and profibrotic cytokines, including platelet-derived growth factors (PDGFs), tumour necrosis factor α (TNF-α), transforming growth factor β (TGF-β), interleukin 6 (IL-6), fibroblast growth factors, leptin and many others, released by damaged liver cells [5]. The strongest cytokines causing activation of HSCs are PDGFs and TGF-β. HSCs can also be activated by reactive oxygen species (ROS) released during the metabolism of ethanol or other toxins, making oxidative stress a second line of HSC activation factors. Thus, direct damage of hepatic cells by a liver-damaging agent leads to the production of multiple cytokines activating HSCs, while activated HSCs are able to enhance their own activation by further producing growth factors such as PDGFs and TGF-β. PDGFs take part directly in liver fibrosis and cirrhosis and have been studied extensively in the literature [6][7][8][9][10][11][12][13][14].
There are 4 different PDGF isoforms or subunits (A, B, C and D) that act as bioactive dimers and, via a disulfide-bond linkage, create 5 homologous or heterologous bioactive dimers: PDGF-AA, -BB, -AB, -CC, and -DD [7,8]. Among these isoforms, PDGF-A has a molecular weight of 16 kDa, consists of 211 amino acids, and is located at the chromosomal site 7p22. PDGF-A has been found to be highly expressed in muscle, the aorta and the heart. The other isoform, PDGF-B, has a molecular weight of 14 kDa, is located at the chromosomal site 22q13 and exhibits high expression in the placenta and heart. PDGFs A and B link together to form 3 homologous or heterologous dimers, PDGF-AA, -BB and -AB, which in turn bind as homo- or heterodimers to 2 possible receptors, PDGFR-α and PDGFR-β. PDGF-AA primarily binds the platelet-derived growth factor receptor αα (PDGFR-αα) in order to control the proliferation and chemotaxis of cells. PDGF-AB, in turn, binds either PDGFR-αα or the platelet-derived growth factor receptor αβ (PDGFR-αβ), whereas PDGF-BB binds all receptor subunits (PDGFR-αα, -αβ and -ββ). Stimulated PDGFR-αβ and -ββ take part in boosting collagen synthesis and cellular adhesion [8]. The PDGF-B isoform is thought to be the most potent mitogen for HSCs [9,10]. The blood concentration of PDGF-BB has been found to correlate positively with the degree of liver fibrosis and inflammation in chronic hepatitis B patients [11]. Some authors, however, have shown that PDGF-A could also be treated as an important profibrogenic agent, exerting its action via TGF-β1 induction. Recently, platelet-derived growth factor A mRNA has been found to correlate with the liver fibrosis grade in chronic hepatitis C [12]. The other 2 PDGFs, PDGF-C and PDGF-D, seem to be implicated more in vascular pathologies, for example atherosclerosis or stroke. A panel of serum cytokines, including PDGF-BB, TIMP-1 and MMP, has been proposed for screening for liver fibrosis [12][13][14]. Although a great deal is known about the mechanisms responsible for liver fibrosis and cirrhosis and the role of PDGFs in them, the exact molecular presence of PDGF dimers, specifically in alcoholic cirrhotic disease, has not been fully investigated.

OBJECTIVE
The aim of the study was to examine the correlation between the stage of alcoholic liver cirrhosis and the subtypes of bioactive dimers of the profibrotic cytokines (PDGFs), in order to find the best serum indicator of the alcohol-induced liver fibrosis process and possible future treatment targets.

MATERIALS AND METHOD
64 patients from the Lublin Region in Eastern Poland were included, diagnosed with alcoholic liver cirrhosis due to over-consumption of alcohol for many years. Alcohol consumption was estimated on the basis of a self-reported survey. The cirrhotic patients had abused alcohol for a mean of 12.±7.8 years, with the quantity of alcohol consumed equal to a daily consumption of 5.1 ± 2.8 drinks in men and 4.5 ± 2.6 drinks in women. In the survey it was assumed that one drink equalled 10 g of pure alcohol. Patients with viral or autoimmune diseases and alcoholic hepatitis were excluded from the study. Liver cirrhosis was diagnosed on the basis of the clinical picture, the patient's history of heavy alcohol consumption, laboratory results and abdominal ultrasonography.
The stage of liver cirrhosis was assessed with the Child-Turcotte-Pugh criteria (Child-Pugh score), taking into account 5 features: the laboratory results for bilirubin, albumin and INR, the presence of ascites (determined by ultrasonography) and encephalopathy (graded by a neurologist) [15]. A score of 1, 2 or 3 was given for each measure, with 3 being the most severe. Class A of the Child-Pugh classification comprised patients with 5-6 points, class B 7-9 points, and class C 10-15 points. Based on these criteria, the patients were classified into 3 groups: 1st (P-Ch A), 17 patients with stage A; 2nd (P-Ch B), 26 patients with stage B; and 3rd (P-Ch C), 21 patients with stage C liver cirrhosis. The control group comprised 16 healthy individuals who did not abuse alcohol and had no liver disease. None of them, neither the cirrhotic patients nor the controls, received mineral supplements. Any underlying liver disease was excluded in the control group by both clinical assessment and laboratory tests. The subgroups of patients did not differ significantly in age or gender (Tab. 1). Tables 1 and 2 present detailed demographic, clinical and biochemical characteristics of the patients. The study was approved by the Local Ethics Committee of the Medical University of Lublin, Poland (KE-0254/349/2015), and all participants gave written informed consent for participation in the study.

Laboratory analysis. The serum concentrations of the 3 dimers of platelet-derived growth factors, PDGF-AA, PDGF-BB and PDGF-AB, were determined using the appropriate ELISA kits (Cloud-Clone Corporation, USA) in accordance with the manufacturer's instructions. The microplates provided in these kits were pre-coated with antibodies specific for a particular growth factor. Samples, standards and standard diluents (for the blank sample) were applied onto microwells and incubated for 1 hour at 37°C. Subsequently, avidin conjugated to horseradish peroxidase (HRP) was added to every microwell on all 3 microplates and incubated under the same conditions. Next, the TMB substrate was added and coloured products were formed in proportion to the amount of PDGF-AA, PDGF-BB and PDGF-AB. The reaction was stopped by adding sulphuric acid and, immediately afterwards, the absorbance of the yellow product was assessed using an Epoch Microplate Spectrophotometer (BioTek Instruments, Inc., Winooski, VT, USA). The sample concentrations of PDGF-AA, PDGF-BB and PDGF-AB were calculated using the appropriate standard curves.

Statistical analysis. All continuous variables were expressed as mean ± standard deviation (SD). Before performing the statistical analysis, the Shapiro-Wilk test was used to test the variables for normality. The Brown-Forsythe test was then used to test the equality of variances. Serum titer differences between the study and control groups were estimated with the single-factor ANOVA test. For assessment of the correlations between variables, Pearson correlation coefficients (r) were calculated. All qualitative variables were expressed as percentages, as indicators of structure. The Kruskal-Wallis χ² test was used for intergroup comparisons. For all tests, a p value <0.05 was assumed as statistically significant. All calculations were performed using SPSS Statistics software (IBM).
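The analysis pipeline described here can be sketched in a few lines of Python with scipy (shown below in place of the SPSS workflow actually used by the authors); the generated values are synthetic stand-ins, and scipy.stats.levene with center="median" is used as the Brown-Forsythe variant of the variance test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic PDGF-BB serum levels (ng/ml) for controls and one cirrhosis group.
controls = rng.normal(3.4, 0.7, size=16)
cirrhosis = rng.normal(4.5, 1.2, size=21)

# 1) Shapiro-Wilk normality test for each group.
for name, grp in [("controls", controls), ("cirrhosis", cirrhosis)]:
    w, p = stats.shapiro(grp)
    print(f"Shapiro-Wilk {name}: W={w:.3f}, p={p:.3f}")

# 2) Brown-Forsythe test of equal variances (Levene centered on the median).
stat, p = stats.levene(controls, cirrhosis, center="median")
print(f"Brown-Forsythe: stat={stat:.3f}, p={p:.3f}")

# 3) Single-factor ANOVA for group differences.
f, p = stats.f_oneway(controls, cirrhosis)
print(f"ANOVA: F={f:.3f}, p={p:.3f}")

# 4) Pearson correlation, e.g. PDGF level vs Child-Pugh points (synthetic).
child_pugh = rng.integers(5, 16, size=21)
r, p = stats.pearsonr(cirrhosis, child_pugh)
print(f"Pearson r={r:.2f}, p={p:.3f}")
```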
RESULTS
PDGF-AB concentrations. There was no statistically significant difference in the serum concentration of PDGF-AB between cirrhotic patients and controls. Its level amounted to 3.79±1.15 ng/ml in Child-Pugh class A, 3.77±1.63 ng/ml in group B, and 3.92±1.14 ng/ml in group C patients, compared to controls (3.36±0.72 ng/ml; p>0.05). No statistically significant differences in PDGF-AB serum concentrations were demonstrated between the particular liver cirrhosis groups. The serum levels of the different PDGF dimers are summed up in Table 3 (serum levels of the PDGF dimer subgroups in the studied groups) and Figures 1-3.

PDGF dimer clinical and laboratory correlations. Analysis between the subgroups of cirrhotic patients showed positive correlations between the serum levels of PDGF-AA and PDGF-BB and the stage of liver cirrhosis (Child-Pugh) (R=0.26; p=0.038 and R=0.28; p=0.027, respectively). Furthermore, PDGF-AA correlated negatively with platelet count, INR level and the diagnosis of encephalopathy in cirrhotic patients (p=0.001, p<0.05 and p=0.004, respectively). PDGF-BB correlated significantly with the serum levels of albumin (p<0.05), bilirubin (p=0.001), AST, urea and CRP (p<0.05), as well as with the presence of encephalopathy (p<0.05).

DISCUSSION
Non-invasive diagnosis of hepatic fibrosis is becoming very important and promising, especially where liver biopsy is of limited use. It plays an especially important role in the surveillance of treatment and in screening for hepatic fibrosis. Even though the pathogenesis of hepatic fibrosis is still unclear, early diagnosis and treatment of this pathology would help markedly diminish the mortality rate of patients. Platelet-derived growth factors (PDGFs) are cytokines that, together with tumour necrosis factor-α (TNF-α), transforming growth factor β1 (TGF-β1) and the ECM, play an important role in hepatic fibrogenesis [7][8][9][10]. Among these common cell regulators, PDGFs are the most powerful elements involved in stimulating HSC proliferation, differentiation and migration. They also stimulate collagen production and deposition, and transform HSCs into myofibroblasts [14]. Blocking PDGF signalling has been found to inhibit HSC proliferation and ameliorate liver fibrogenesis [16]. It has also been observed in clinical studies that stimulation of PDGFs and their downstream molecules seems to be associated with the activation of necroinflammation and fibrosis in patients with hepatic damage [13][14][15]. Therefore, the PDGF signalling pathway is certainly crucial in the advancement and further prognosis of hepatic fibrosis. The current study shows that the plasma levels of the PDGF-AA and PDGF-BB homodimers increased significantly in patients with alcoholic liver cirrhosis, whereas the plasma level of the PDGF-AB heterodimer did not show such a relationship. As the stage of the disease increased, so did the blood concentrations of PDGF-AA and PDGF-BB. Furthermore, it was observed that the serum levels of both PDGF-AA and PDGF-BB correlated significantly with the severity of alcoholic liver cirrhosis (measured by the Child-Pugh scale). It must be stressed that the correlation with the severity of alcoholic liver cirrhosis was stronger for the PDGF-BB dimer than for PDGF-AA (R=0.28; p=0.027 and R=0.26; p=0.038, respectively). The non-parametric Kruskal-Wallis test showed a high level of intergroup differences in PDGF-AA results in cirrhotic patients (χ²=5.82) compared to controls, but with a tendency towards, yet no statistically significant, difference between the subgroups of cirrhotic patients (marginal p=0.054).
To the best of the authors' knowledge, this is the first clinical study to concentrate on assessment of the plasma levels of PDGFs in patients with alcoholic liver cirrhosis. The obtained results are consistent with data for other causes of liver cirrhosis. Diang et al. analysed chronic hepatitis B (CHB) patients and found that the serum concentration of the PDGF-BB homodimer could reflect the extent of liver damage and liver fibrosis in those patients [11]. The authors found that liver function parameters and serum liver fibrosis markers were significantly correlated with serum PDGF-BB. Furthermore, liver fibrosis markers and serum concentrations of PDGF-BB in CHB were positively correlated with the extent of liver damage. Surprisingly, Diang et al. also stated that serum concentrations of PDGF-BB in HBeAg-negative CHB were even significantly higher than those in HBeAg-positive CHB. Tanikawa et al. evaluated the expression of PDGF-A, PDGF-B and TGF-β1 in hepatic tissue and platelets from HCV-infected patients with different degrees of hepatic fibrosis [12]. They found that the mRNA expression of PDGF-A in platelets differed depending on the degree of liver fibrosis: patients with advanced fibrosis had significantly higher PDGF-A mRNA expression than patients with early-stage fibrosis. Furthermore, TGF-β1 was more frequently expressed in platelets than in liver tissue, and vice versa for PDGF-B. Tanikawa et al. concluded that the stimulation of HSCs leading to fibrosis could be caused by PDGF-A mRNA expression from megakaryocytes, involving the TGF-β1 signalling pathway. In the current study of alcoholic liver cirrhosis, high serum concentrations of the PDGF-AA dimer were found, which differed less markedly from the controls. Analysis between the subgroups of cirrhotic patients, however, showed positive correlations between the serum level of PDGF-AA and the stage of liver cirrhosis (Child-Pugh), just as for the PDGF-BB levels in the alcoholic patients in the current study. The PDGF-B isoform is recognised as the most important mitogen for HSCs. This might also be explained by the fact that PDGF-BB binds all subunits of the PDGF receptors (PDGFR-αα, -αβ and -ββ), and therefore its biological effect is more pronounced [7][8][9][10][17]. This study of alcoholic liver cirrhosis confirms that both dimers, PDGF-AA and PDGF-BB, are important and that their serum levels increase compared to controls, whereas this is not true for the PDGF-AB dimer. Liver cirrhosis is a serious disease involving considerable mortality [18]. Multiple clinical studies of hepatic patients show that a reduction and even a reversal of fibrosis is achievable with effective treatment of the underlying disease [19]. Marcellin et al. have shown that treatment of viral hepatitis B leads to a reversal of established cirrhosis in most patients [20]. Poynard et al. have also obtained good results in the treatment of viral hepatitis C [21]. The treatment of many non-viral liver diseases, however, is more difficult, and in such cases an adequate anti-fibrotic drug could help a large number of patients [16,18]. Recent studies outline the importance of anti-fibrotic drugs acting via inhibition of PDGFs, especially isoforms B and A [22][23][24]. The current study proves the importance of the PDGF-BB and PDGF-AA homodimers in alcoholic liver cirrhosis, which could be considered as possible treatment targets of anti-fibrotic therapy.

Limitations of the study.
Firstly, this was a single-centre study, and the relatively small numbers of studied individuals and control subjects are insufficient to arrive at final conclusions; therefore, further prospective studies in a large population with liver cirrhosis are needed. The study involved only patients with alcoholic liver cirrhosis; thus, it cannot be determined whether the observed relationships also apply to other groups of patients with different causes of liver cirrhosis.

CONCLUSIONS
The serum levels of the PDGF-AA and PDGF-BB homodimers increase in patients with alcoholic liver cirrhosis, unlike the serum level of the PDGF-AB heterodimer. Furthermore, the serum levels of both PDGF-AA and PDGF-BB correlate significantly with the stage of alcoholic liver cirrhosis (Child-Pugh), the correlation being stronger for PDGF-BB than for PDGF-AA. The plasma levels of PDGF-AA and -BB seem to be good indicators of the alcohol-induced liver fibrosis process and might be considered as possible future treatment targets. This is especially true for serum PDGF-BB levels, which also show important intergroup differences in cirrhotic patients. Further prospective studies are needed to fully explain the role of these homodimers in different groups of patients with liver cirrhosis.
2020-02-06T09:03:54.653Z
2020-03-17T00:00:00.000
{ "year": 2020, "sha1": "35e4b7cda1fcea6966b8eb6b4c21de5173d63ccc", "oa_license": "CCBYNC", "oa_url": "http://www.aaem.pl/pdf-115997-47889?filename=PDGF_BB%20homodimer%20serum.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7885ac1df704925153cabeb2dccb711990040879", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
264129429
pes2o/s2orc
v3-fos-license
Multimodal neuromonitoring in traumatic brain injury patients: the search for the Holy Graal
Fabio Silvio Taccone 1*, Elda Diletta Sterchele 1 and Michael Piagnerelli 2

Dear Editor,
We recently perused the enlightening article by Svedung Wettervik et al. with great interest, a study that has provocatively questioned the established role of brain tissue oxygen pressure (PbtO2) in the intricate management of traumatic brain injury (TBI) patients [1]. The authors stated that PbtO2 might not serve as an optimal outcome measure to ascertain the adequacy of cerebral hemodynamics optimization. This assertion stems from several observations: (a) low PbtO2 values were observed in only 17% of the monitoring time; (b) low PbtO2 rarely coincided with high intracranial pressure (ICP), low cerebral perfusion pressure (CPP) or altered cerebral autoregulation indices (PRx); and (c) there was a lack of correlation between PbtO2 and ICP, CPP and PRx. While the article is undeniably of paramount significance and the authors have commendably delineated the study's limitations, we believe there are certain nuances and considerations that warrant further discussion for the readership, especially when broaching the topic of neuromonitoring in this clinical setting.
Firstly, this study provides empirical evidence reinforcing the notion that cerebral perfusion and cerebral oxygenation are not inextricably linked. It is imperative to understand that cerebral oxygenation is a multifaceted entity, contingent not solely on regional cerebral blood flow but also profoundly influenced by arterial oxygen content (thus encompassing both oxygen and hemoglobin levels), oxygen consumption rates, and the efficiency of oxygen diffusion, which is intrinsically tied to microvascular function, frequently altered after TBI [2]. Given this intricate interplay, it is not surprising that a significant correlation between PbtO2 and the aforementioned perfusion variables remains elusive. A dissociation between global perfusion and tissue oxygenation has already been reported in septic patients [3]; as such, PbtO2, ICP, CPP and autoregulation assessment proffer complementary insights rather than merely echoing redundant data, and should therefore be integrated into a multimodal approach to better understand the consequences of TBI on tissue perfusion and oxygenation. Consequently, clinical scenarios may arise where patients exhibit satisfactory oxygenation, as might be observed with low CPP and ICP values coupled with low oxygen consumption, or in cases marked by elevated ICP concurrent with cerebral hyperemia.

Secondly, the precise positioning of the PbtO2 probe, a detail of paramount importance, was not consistently delineated in the study. Prior investigations have shown that probes placed in non-injured cerebral territories might not yield clinically relevant data; specifically, the absence of a correlation between reduced PbtO2 values and adverse outcomes has been documented in this setting [4]. Thus, exceedingly elevated ICP and reduced CPP levels would be necessary to compromise oxygenation in regions of the brain that ostensibly appear unaffected after TBI [4]. This could elucidate the consistently within-norm PbtO2 measurements across wide ranges of ICP and CPP values reported in this study [1].

Thirdly, the study was conducted under the aegis of an institutional protocol tailored to optimize PbtO2, but not autoregulation, in clinical practice. This approach invariably reduced the incidence of PbtO2 values below thresholds considered dangerous; however, no specific interventions were delineated for instances of compromised autoregulation. Given that PbtO2 can register falsely reassuring readings in the presence of high PaO2 values [2], integrating the PbtO2/PaO2 ratio [5] into the analytical framework might have furnished invaluable insights into the possible association between tissue hypoxia and impaired cerebral autoregulation. If the persistent dissociation between tissue oxygenation and autoregulation indices were still established, the ultimate objective of enhancing hemodynamics in TBI patients would remain uncertain. This raises the question of whether achieving optimal hemodynamics entails optimizing autoregulation for its maximal efficacy (taking into account vascular responses to pressure or other stimuli) or improving tissue oxygen delivery, which is essential for cellular function.
Lastly, the study spans a notably protracted timeframe, from 2002 to 2022. It is pivotal to recognize that PbtO2-driven therapeutic strategies have been ushered into the clinical arena predominantly over the past decade. This evolution in clinical practice paradigms could potentially have swayed the observed associations between PbtO2, other pertinent variables, and overarching clinical management strategies. Incorporating time as an additional confounding factor in the subgroup analysis of the study could have helped in the interpretation of its findings.
2023-10-16T13:54:21.653Z
2023-10-16T00:00:00.000
{ "year": 2023, "sha1": "5811605abb515c9d1f6031d339e6a3bd047c17e6", "oa_license": "CCBY", "oa_url": "https://ccforum.biomedcentral.com/counter/pdf/10.1186/s13054-023-04679-0", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "60d52b42e26ae0bccadf9fc658169205c7bf769a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
7187994
pes2o/s2orc
v3-fos-license
Initial Responses of Articular Tissues in a Murine High-Fat Diet-Induced Osteoarthritis Model: Pivotal Role of the IPFP as a Cytokine Fountain Obesity and high body mass index are associated with a higher incidence of osteoarthritis (OA). The aim of this study is to investigate the involvement of the infrapatellar fat pad (IPFP) in the sub-acute effect of a high fat diet (HFD) on the development of knee-OA. C57BL/6J male mice were fed either a HFD or a normal diet beginning at seven weeks of age. Tissue sections were evaluated with immunohistological analysis. The IPFP was excised, and mRNA expression profiles were compared using real-time RT-PCR analysis. Osteoarthritic changes were initiated in the HFD group after eight weeks of the HFD. Increased synovial cell number and angiogenesis at the anterior edge of the tibial plateau were exhibited prior to osteophyte formation. Quantitative histological analysis indicated that osteophyte volume was significantly increased in the HFD group after eight weeks, along with an increase in the IPFP volume, the size of individual adipocytes and the number of vessels in the IPFP. Histomorphometrical analysis revealed that osteophyte area was significantly associated with IPFP area, individual adipocyte area and vascular area. Real-time RT-PCR analysis demonstrated elevated mRNA expression of inflammatory cytokines, growth factor, and adipokines in the IPFP after eight weeks of the HFD. These findings are in parallel with the increased expression of the CD68 macrophage marker after eight weeks of the HFD. Expression levels of the adipokines were significantly correlated with expression of TNF-α, VEGF and TGF-β. Immunohistological analysis revealed that the Nampt protein was highly expressed in the IPFP, especially around the site of osteophyte formation. Apoptosis and proliferation of chondrocytes were both enhanced at the site of osteophyte formation, indicating higher cell turnover in this region. These observations suggest that the IPFP plays a pivotal role in the formation of osteophytes and functions as a secretory organ in response to a HFD.

Introduction
Osteoarthritis (OA) is a chronic degenerative joint disorder characterized by articular cartilage destruction and osteophyte formation, and is prevalent in society as a major cause of disability. OA risk factors identified by previous epidemiologic studies are limited to age, trauma history, occupation, gender and obesity [1]. Obesity and high body mass index are associated with a higher incidence of OA [2][3][4][5]. The excess weight caused by obesity introduces increased weight bearing on the knee joints, implicating the influence of mechanical factors in the development of OA, especially in major joints of the lower extremity [6]. Trauma, joint instability, and developmental dysplasias are also recognized as predisposing factors in animal models of OA [1]. A number of cohort studies have demonstrated that obesity is an independent risk factor for hand OA [7,8]; however, mechanical stress cannot explain such a correlation. Therefore, it has been hypothesized that one or more systemic factors are responsible for the correlation between obesity and OA. Obesity is also associated with an increased amount of adipose tissue, which is recognized as having potent endocrine activity and can give rise to inflammation. Adipose tissue expresses and secretes a variety of bioactive peptides, known as adipokines, which act at both the local (autocrine/paracrine) and systemic (endocrine) level [9]. Activation of adipose tissue macrophages within fat depots is also accompanied by the development of an obesity-induced proinflammatory state [10,11]. Chronic inflammation triggered by obesity is associated with several diseases such as type 2 diabetes, defective immunity, hypertension, atherosclerosis and several cancers [12]. These studies suggest that inflammatory molecules secreted from adipose tissue may provide a critical connection between obesity and OA. The infrapatellar fat pad (IPFP) is located in the knee underneath the patella, between the patellar tendon, femoral condyle and tibial plateau, and is positioned closely to the synovial layers and cartilage surfaces of the knee joint. The IPFP contains adipocytes and has an increased number of immune cells, such as lymphocytes, monocytes and granulocytes, that have migrated from the blood circulation [13]. IPFPs from OA patients contain inflammatory cytokines, such as basic fibroblast growth factor (bFGF), vascular endothelial growth factor (VEGF), tumor necrosis factor (TNF) alpha, and interleukin (IL) 6 [14]. Thus, the IPFP could play an important role in the initiation and progression of knee-OA. However, the precise roles of the IPFP at the initiation of OA, for instance whether a HFD initiates the IPFP to produce more inflammatory mediators, have not been elucidated. Activation of the synovial layer (synovitis) is seen in many osteoarthritic joints, and formation of osteophytes at the junction of the periosteum and synovium is a common feature [2,3]. This process is initiated with the formation of chondrophytes, followed by chondrocyte hypertrophy. Finally, chondrophytes develop into osteophytes as a result of ossification. However, it is unknown what initiates these processes. Various laboratories have established in vivo OA models in order to study the mechanisms of OA development [1,15-21]. Providing a HFD has been shown to increase the incidence of OA in male mice of the C57BL/6 strain [19,20]. In order to investigate the mechanisms of OA initiation, the initial reaction of the knee joints in response to a HFD was evaluated. A detailed histological investigation was employed to permit rapid evaluation of the murine knee joints as a consequence of a HFD. Histological grading analyses for assessment of OA were utilized rather than quantitative analyses, as there is a lack of measurable markers for OA. Taken together, we hypothesized that inflammatory responses would occur in the IPFP in advance of the initiation of OA. To clarify the role of the IPFP, we induced OA with a HFD and investigated the initial responses of the knee articular cartilage and the IPFP by detailed histological analysis and real-time RT-PCR analysis.

Materials and Methods
Male C57BL/6J mice were purchased from Sankyo Labo (Tokyo, Japan). Ethical approval was obtained from the institutional review board of Tokyo Medical and Dental University. Mice were fed a diet containing 32% fat (HFD group) or 4.8% fat (control group) (HFD32 and CE-2; CLEA Japan, Inc., Tokyo, Japan) [22] from the age of seven weeks. All of the animals were allowed unrestricted activity and were provided food and water ad libitum. None of the mice died during the experimental period.

Assessment of OA Severity
Mice were sacrificed at four, eight, and twelve weeks after initiating the diet (n = 10 at each time point). Whole knee joints were removed by dissection, fixed in 4% paraformaldehyde, and decalcified in EDTA.
Activation of adipose tissue macrophages within fat depots is also accompanied with the development of an obesity-induced proinflammatory state [10,11]. Chronic inflammation triggered by obesity is associated with several diseases such as type 2 diabetes, defective immunity, hypertension, atherosclerosis and several cancers [12]. These studies suggest that inflammatory molecules secreted from adipose tissue may provide a critical connection between obesity and OA. The infrapatellar fat pad (IPFP) is located in the knee underneath the patella, between the patellar tendon, femoral condyle and tibial plateau, and is positioned closely to the synovial layers and cartilage surfaces of the knee joint. The IPFP contains adipocytes and has an increased number of immune cells such as lymphocytes, monocytes and granulocytes that have migrated from the blood circulatio [13]. IPFPs from OA patients contain inflammatory cytokines, such as basic fibroblast growth factor (bFGF), vascular endothelial growth factor (VEGF), tumor necrosis factor (TNF) alpha, and interleukin (IL) 6 [14]. Thus, the IPFP could play an important role in the initiation and progression of knee-OA. However, the precise roles of the IPFP at the initiation of OA, for instance, or whether a HFD initiates the IPFP to produce more inflammatory mediators, has not been elucidated. Activation of the synovial layer (synovitis) is seen in many osteoarthritic joints and formation of osteophytes at the junction of the periosteum and synovium is a common feature [2] [3]. This process is initiated with the formation of chondrophytes, followed by chondrocyte hypertrophy. Finally, chondrophytes develop into osteophytes as a result of ossification. However, it is unknown what initiates these processes. Various laboratories have established in vivo OA models in order to study the mechanisms of OA development. [1,[15][16][17][18] [19-21] Providing a HFD has been shown to increase the incidence of OA in male mice of C57Bl6 strain [19,20]. In order to investigate the mechanisms of OA initiation, the initial reaction of the knee joints in response to a HFD was evaluated. A detailed histological investigation has been employed to permit rapid evaluation of the murine knee joints as a consequence of a HFD. Histological grading analyses for assessment of OA were utilized rather than quantitative analyses, as there is a lack of measurable markers for OA. Taken together, we hypothesized inflammatory responses would occur in the IPFP in advance of the initiation of OA. To clarify the role of the IPFP, we induced OA with a HFD and investigated the initial responses of the knee articular cartilage and the IPFP by detailed histological analysis and real-time RT-PCR analysis. Materials and Methods Male C57BL/6J mice were purchased from Sankyo Labo (Tokyo, Japan). Ethical approval was obtained from the institutional review board of Tokyo Medical and Dental University. C57Bl6J mice were fed a diet containing 32% fat for the HFD group or 4.8% fat for the control group (HFD32 and CE-2; CLEA Japan, Inc. Tokyo, Japan) [22] from the age of seven weeks. All of the animals were allowed unrestricted activity and were provided food and water ad libitum. None of the mice died during the experimental period. Assessment of OA Severity Mice were sacrificed at four, eight, and twelve weeks after initiating the diet (n = 10 at each time point). Whole knee joints were removed by dissection, fixed in 4% paraformaldehyde, and decalcified in EDTA. 
After dehydration and paraffin embedding, serial 5-mm sagittal sections were made from the whole medial compartment of the joint. Three sections (from lanes 1-3, Figure 1a, b) were obtained at 100-mm intervals from the weight-bearing region of each knee joint. Lane1 was defined as the section in which the central region of the medial meniscus was continuous (Figure 1b, arrow). Lane2 and Lane3 were 100 mm and 200 mm lateral to Lane1 respectively. The sections were stained with Safranin O-fast green or HE. OA severity in the tibial plateau was evaluated according to a cartilage destruction score (1). Quantitative osteophyte determination was made in the sections from lane 1 (Figure 1a Histomorphometry Analysis Histomorphometric measurements were performed using image analysis software (Image Pro Plus 4.1, Media Cybernetics, Carlsbad, CA, USA). Based on immunostaining for anti-CD31 antibody, immunopositive cells were defined as endothelial cells and the vascular areas were quantified. For quantification of adipocytes, over 30 adipocytes in IPFP were selected and the area of each cell was quantified and averaged. RNA Extraction and Real-time RT-PCR The IPFP tissue was excised using a surgical microscope and microsurgical technique at the previously indicated periods. Total RNA was extracted from the IPFP using TRIzol according to the manufacturer's directions (Invitrogen). Realtime RT-PCR was performed using the SuperScript III Platinum Two-Step qRT-PCR kit with SYBR Green on the Mx3000PH QPCR System. Briefly, 0.5 mg total RNA was incubated with 10 ml 26 RT reaction mix and 2 ml RT, and then incubated for 50 min at 42uC. The reaction was terminated by incubating for 5 min at 85uC. The cDNA mixture was then incubated for 30 min at 37uC in the presence of RNase H. The PCR reaction was carried out using a mixture of Platinum SYBR Green qRT-PCR Super-Mix UDG, the template cDNA, 10 mM of the primer mix, and DNase-free H 2 O with a total volume of 20 ml per well. The cycling conditions were performed as indicated in the Invitrogen SuperScript TM III Platinum two-step qRT-PCR kit with SYBR Green. Gene expression was normalized to the endogenous control GAPDH, and fold changes in the genes of interest were determined using the comparative threshold cycle (Ct) method. Immunohistochemistry The protein expression of CD31, Nampt, PCNA or TGF-b1 was examined by immunohistochemistry with anti-mouse CD31 antibody, anti-mouse Nampt antibody, anti-mouse PCNA antibody (Abcam Biochemicals, Cambridge, UK), or anti-mouse TGF-b1 antibody (Santa Cruz Biotechnology, INC.) used according to the manufacturer's instructions. Briefly, tissue sections were incubated overnight at 4uC with primary antibodies, followed by a 30-min incubation at room temperature with appropriate secondary antibodies. Next, the signal was visualized using peroxidase-conjugated avidin and diaminobenzidine from a Vectastain kit, according to the manufacturer's instructions (Vector Laboratories, Burlingame, CA). TUNEL Assay The TUNEL assay was performed using a TUNEL detection kit according to the manufacturer's instructions (Takara Shuzo, Kyoto, Japan). A section was procured from lane 2 (Figure 1a, c) of each specimen and incubated with 15 mg/ml of proteinase K for 15 min at room temperature, then washed with Phosphate Buffered Saline (PBS). Endogenous peroxidase was inactivated with 3% H 2 O 2 for 5 min at room temperature. 
Statistical Analysis

Data are expressed as the mean ± 1 SD. Statistical analysis was performed with the Mann-Whitney U test. P values less than 0.05 were considered significant. Pearson linear regression was used to determine the degree of association between mRNA expression of adipokines or inflammatory cytokines and histological values. The linear regression coefficients R² were reported. Values of P < 0.05 were accepted as significant.

Results

Impact of HFD on IPFP Histology Over Time

Mice fed the HFD weighed 10% more than normal diet mice by four weeks, 20% more by eight weeks and 50% more by twelve weeks (Figure 2c, p < 0.05). Histological examinations of the slides from Lane 3 were made to estimate the effect of the HFD on IPFP histology (Figure 2a, b). The total volume of the IPFP was increased in the HFD group (Figure 2b, d) when compared to the normal diet group (Figure 2a). The average size of individual adipocytes in the IPFP was also significantly increased from week eight of the HFD when compared to the control group mice (Figure 2e). Concurrently, active angiogenesis was observed in the IPFP of the HFD group (Figure 2f).

Figure 3 depicts features of osteophyte formation at high magnification. The IPFP volume was slightly increased in the HFD group by week four (Figure 3d). Inflammatory features, including enhanced angiogenesis (arrows) and infiltration of macrophage-like round synovial cells (asterisk), were observed at the anterior edge of the tibial cartilage by week four in the HFD group (Figure 3d, e). Cartilaginous osteophytes gradually developed in the same region (Figure 3e, h, f, i, arrowhead), and the features of synovitis persisted through twelve weeks (Figure 3f). Ossification of cartilaginous osteophytes was apparent, with size and maturity increasing from eight weeks to twelve weeks of the HFD (Figure 3f, i, arrowhead, Figure 4a). Osteophytes formed predominantly at the antero-medial region of the tibial plateau, so that the size of the osteophyte was larger in Lane 1 compared to Lane 2 or 3 (data not shown). Histological evaluation indicated the cartilage destruction score of the HFD group was significantly increased after 8 weeks of the HFD (Figure 4b, P < 0.05).

To elucidate the implication of adipogenesis and angiogenesis in osteophyte formation, the correlations between histological values and osteophyte volume were evaluated. As shown in Figure 4c, osteophyte area was significantly associated with IPFP area, individual adipocyte area and vascular area. The correlations of these values (R² = 0.5923 for osteophyte area and IPFP area, R² = 0.6358 for osteophyte area and adipocyte area, R² = 0.4376 for osteophyte area and vascular area) were strong.
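The two statistical procedures used throughout this study, the Mann-Whitney U test for group comparisons and Pearson linear regression for the R² values quoted above, can be reproduced with standard tools; the sketch below uses scipy with simulated stand-in data (the real measurements are not reproduced here).

```python
import numpy as np
from scipy.stats import mannwhitneyu, linregress

rng = np.random.default_rng(0)

# Simulated stand-ins for two groups of n = 10 animals (e.g. a histological value).
hfd = rng.normal(30.0, 4.0, size=10)
control = rng.normal(24.0, 4.0, size=10)
u_stat, p_value = mannwhitneyu(hfd, control)
print(f"Mann-Whitney U = {u_stat:.1f}, P = {p_value:.4f}")   # significant if P < 0.05

# Simulated correlated measurements, mimicking e.g. osteophyte area vs IPFP area.
osteophyte_area = rng.normal(0.5, 0.1, size=10)
ipfp_area = 2.0 * osteophyte_area + rng.normal(0.0, 0.05, size=10)
fit = linregress(osteophyte_area, ipfp_area)
print(f"R^2 = {fit.rvalue**2:.4f}")                          # cf. the R^2 values above
```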
mRNA Profiles in the IPFP

We evaluated the expression levels of inflammatory cytokines in the IPFP for the HFD and control groups, since the IPFP has recently been implicated in the pathology of osteoarthritis [13,23]. The IPFP was excised using a surgical microscope and microsurgical technique at eight and twelve weeks after initiation of the diet. Real-time RT-PCR analysis revealed that the expression levels of adipokines (Leptin, Nampt, Chemerin and Lipocalin2), inflammatory cytokines (VEGF and TNF-α), and growth factor (TGF-β) were significantly elevated in the IPFP from the eighth week of the HFD. Simultaneously, the CD68 macrophage marker was increased in the IPFP from the eighth week of the HFD (Figure 4d). The expression of adiponectin, which is reported to play a protective role in OA [24], was not affected by the HFD in either the IPFP or serum (Figure 4d).

To elucidate the association between adipokines and inflammatory cytokines, the correlation coefficients among their expression levels were calculated. As shown in Figure 4a, adipokine expression and TNF-α, VEGF, and TGF-β were significantly correlated. In addition, CD68 expression was positively associated with Nampt expression, indicating the influence of macrophages on Nampt expression.

The expression of NAMPT is increased in the plasma and synovial fluid of patients with OA [25]. Immunohistological analysis for Nampt was performed to verify the spatial and temporal expression pattern of Nampt. Immunohistological examination revealed that Nampt protein was highly expressed in the IPFP at the twelfth week of the HFD (Figure 5a, b). Of note, Nampt expression was condensed in the vicinity of osteophyte formation (Figure 5a, b). Immunostaining for PCNA, a marker for proliferation, was conducted to estimate cell turnover activity at the site of osteophyte formation. PCNA-positive cells were abundantly observed at the site of osteophyte formation in the HFD group (Figure 5d). These observations indicated an enhanced state of cell turnover in this region.

Chondrocyte Apoptosis in HFD Models

Apoptotic cells exist abundantly at the chondro-osteophyte, which is observed at the peripheral area of osteoarthritic joints, even in the early stage of the disease [26]. Chondrocyte apoptosis is increased in OA cartilage and is anatomically linked to proteoglycan depletion [27]. These observations prompted us to investigate the effect of the HFD on chondrocyte apoptosis. TUNEL staining was performed for the lane 2 section in the HFD mice and controls at week eight of the diet. TUNEL-positive cells were abundantly observed in the superficial layer of the articular cartilage (Figure 5c) and at the site of osteophyte formation (Figure 5c, d, arrows) in the HFD mice. The number of TUNEL-positive cells in the articular cartilage was significantly increased in the HFD group (Figure 5e, f). Combined with the enhanced proliferation of chondrocytes in this region, the knee joint in the HFD group is in the course of osteoarthritic change, including osteophyte formation.

Discussion

Our findings revealed that a HFD induces hypertrophy of the IPFP in association with inflammation in less than three months' time. In order to objectively quantify the severity of OA development in the HFD model, we evaluated serial sections for histological findings. We quantified the severity of OA development using osteophyte volume and the number of TUNEL-positive cells rather than scoring of the visual findings. Several laboratories have succeeded in developing murine knee OA with more than a three-month exposure to a HFD. OA susceptibility for HFD mice depends on mouse strain and gender [28]. For example, mice of strain DBA respond less readily to the HFD than mice of strain C57 black [28]. Furthermore, female mice are less susceptible to the change in dietary regimen than males.
Male mice of strain C57 black fed a diet containing 29% fat from the age of six or twelve months to the end of their lives showed an accelerated onset and an increased incidence of OA as compared with control mice fed a stock diet containing 5% fat [19]. Our study reveals that the initiation of OA change occurs at an earlier period than previously reported. We demonstrated enhanced angiogenesis, infiltration of macrophage-like cells and increased synovium at the anterior edge of the tibial cartilage in advance of, and in combination with, osteophyte formation by week eight of the HFD. We also found enhanced adipokine secretion with IPFP hypertrophy, followed by aggregation of synovial cells.

The IPFP has recently been implicated as an additional joint tissue involved in the development and progression of knee OA [13,23]. The human osteoarthritic knee IPFP was found to contain significantly elevated protein levels of inflammatory cytokines and adipokines [14]. These inflammatory mediators have been found in synovial fluid and have been suggested to influence cartilage and synovial metabolism [29]. Our study demonstrated that the mRNA expression levels of inflammatory cytokines, such as VEGF and TNF-α, adipokines, such as Leptin and Nampt, and growth factors, such as TGF-β, were enhanced at week eight of the HFD, which is consistent with previously reported data [14,25,29,30]. We clearly showed that adipocyte hypertrophy and increased angiogenesis were strongly correlated with osteophyte volume (Figure 3c). Furthermore, the expression of adipokines (Nampt and Leptin) and adipocyte hypertrophy markers (Lipocalin2 and Chemerin) was correlated with expression of TGF-β and inflammatory cytokines in the IPFP (Figure 4a). These results indicate that adipocyte hypertrophy is closely linked to osteophyte formation through secretion of inflammatory cytokines.

Leptin has been detected in synovial fluid (SF) obtained from patients with OA [30]. Leptin expression is also enhanced in both osteophyte and cartilage tissue obtained from patients with OA. Leptin is reported to act as a pro-inflammatory adipokine with a catabolic role in cartilage metabolism via the upregulation of proteolytic enzymes [31]. However, Leptin, but not Adiponectin, promoted the expression of cartilage-specific markers through mitogen-activated protein kinase, Janus kinase and phosphatidylinositol-3 kinase signaling pathways [32].

The expression of NAMPT is increased in the plasma and synovial fluid of patients with OA [25]. Although NAMPT has been reported to be produced by chondrocytes from OA patients, our study has demonstrated the highly enhanced expression of Nampt in the IPFP in response to a HFD. Nampt production is increased by IL-1β in chondrocytes [33]. Moreover, Nampt induces PGE2 release in articular chondrocytes as a result of increased mPGES-1 and decreased 15-PGDH synthesis [33]. NAMPT also triggers the synthesis and release of MMP-3, MMP-13, ADAMTS-4, and ADAMTS-5 by chondrocytes [33]. Thus, Nampt may play a pivotal role in chondrocyte metabolism, including osteophyte formation. The mechanism for regulation of Nampt expression in the IPFP has yet to be discovered.

Adipose tissue macrophage infiltration occurs during obesity. We observed that the CD68 macrophage marker was increased in the IPFP at week eight of the HFD and that this occurred simultaneously with the enhancement of Nampt and TNF-α expression.
Nampt mRNA levels are strongly correlated with the CD68 macrophage-specific marker and TNF-α mRNA levels in adipose tissues [34]. In this study, we report a strong correlation between mRNA expression of Nampt and CD68 in the IPFP (Figure 4a). TNF-α is a pro-inflammatory cytokine produced mainly by macrophages and lymphocytes. It is also produced by adipose tissue, although the expression level is low in humans [35]. Large-scale studies of gene expression using microarray approaches have revealed that variations in gene expression in white adipose tissues (WAT) are essentially related to macrophage infiltration in the WAT of obese mice [10]. Thus, locally present CD68-positive macrophages may play important roles in the augmentation of these cytokines. Also, studies have shown an increased inflammatory response associated with the presence of hyperleptinemia without obesity [36,37], and that leptin is able to control TNF-α production and activation by macrophages [36]. We have shown TNF-α expression to be significantly associated with Nampt and Leptin expression (Figure 4a). These observations suggest Leptin may also regulate TNF-α expression in the IPFP.

The expression of Chemerin and Lipocalin2 was enhanced in the IPFP by the HFD (Figure 4c). Chemerin is predominantly expressed in adipocytes and promotes calcium mobilization and chemotaxis of immature dendritic cells and macrophages [38]. In 3T3-L1 adipocytes, Chemerin expression is enhanced during differentiation [39]. Human plasma levels of Chemerin were significantly associated with body mass index, circulating triglycerides, and blood pressure [39], indicating a potent function of Chemerin in immune and metabolic homeostasis. Lipocalin2 is a 25 kDa glycoprotein expressed in neutrophil granules, adipocytes and chondrocytes [40-42]. Lipocalin2 belongs to the large family of Lipocalins, which have high affinity for small hydrophobic ligands such as steroids and LPS [42]. Lipocalin2 expression was dramatically enhanced in adipocytes by IL-1β treatment [43]. In an analysis of cartilage degradation and protein release using proteomics, Lipocalin2 was identified as a biomarker of cartilage degradation [44]. Overexpression of Lipocalin2 in chondrocytes caused a reduction in proliferation and promotion of hypertrophy [41]. These observations suggest potent roles of Chemerin and Lipocalin2 in the inflammatory responses of the IPFP and the enhanced chondrocyte apoptosis in the HFD group.

On the other hand, Adiponectin levels in plasma and the IPFP were not affected by the HFD in our study. Adiponectin has been shown to be implicated in the pathogenesis of osteoarthritis [42]. Plasma Adiponectin levels were reported to be significantly higher in patients with OA than in healthy controls [45]. Conversely, Adiponectin concentrations in plasma and synovial fluid show significant inverse correlation with disease severity, suggesting a possible protective role of Adiponectin in OA [24]. Perhaps an extended time course of the HFD may alter Adiponectin levels in the IPFP.

We showed that TGF-β mRNA expression was gradually enhanced by a HFD. The ability of TGF-β to induce osteophyte formation was previously demonstrated [46,47]. There is significant overlap in the location of TGF-β-induced and experimental OA-induced osteophyte formation [46]. These observations confirm that TGF-β plays a role in osteophyte development during experimental OA [48].
Leptin, which was upregulated in parallel with TGF-β in this study, stimulates chondrocyte synthesis of TGF-β in animal experiments [30]. Thus, Leptin may act as a trigger to stimulate TGF-β expression from the IPFP. Consistently, we observed that TGF-β mRNA expression was positively correlated with Leptin expression in the IPFP (Figure 4a).

It is still unclear whether the events observed in the IPFP are directly induced by the HFD or are an indirect response to the destruction of articular cartilage. For instance, gain of weight in HFD mice may result in an increase in mechanical load and trigger the wear of cartilage, followed by local inflammation. To elucidate this problem, further experiments will be required.

In conclusion, we have shown with detailed evaluation methods that articular cartilage degradation and osteophyte formation can be triggered from as early as eight weeks of a HFD. Our observations suggest pivotal roles for the IPFP in the development of osteophyte formation and cartilage degradation. Furthermore, our methods made it possible to adjust and reproduce the episodes of applied mechanical loading and metabolic alteration. This provided an opportunity to investigate articular cartilage responses to metabolic stress and the mechanisms involved in the progression of OA.
Complexity Reduction of Multiscale UTLM Cell Clusters

Time-domain electromagnetic simulations employing unstructured tetrahedral meshes offer smooth boundary approximations and graded meshes for multiscale problems. However, multiscale effects may arise not only as a consequence of fine geometry but also from CAD and mesh generation artifacts, and it is critical that the simulation algorithms can be employed in their presence without unduly compromising their computational performance. The ability of the unstructured transmission line modeling (UTLM) algorithm to coalesce small computational cells into larger entities is a key enabler for the approach. This paper demonstrates the use of complexity reduction techniques to both notably reduce the preprocessing time required for this and, as a consequence, substantially extend its capability.

The authors are with the George Green Institute for Electromagnetics Research, The University of Nottingham, Nottingham, UK (phone: +44 1159515567; fax: +44 1159515616; e-mail: phillip.sewell@nottingham.ac.uk).

I. INTRODUCTION

Electromagnetic simulations using numerical techniques are an established tool for many technological disciplines and, notwithstanding the inexorable increase in computational power, the demand for assessment of ever larger and more complex problems motivates their continued development [1-4]. Multiscale problems are particularly challenging and the need to mesh down to the scale of the smallest feature can not only consume substantial memory but also, more critically for time-stepping algorithms, require the use of an impractically small value of time step in the simulations, often for reasons of algorithmic stability. In practice, multiscale geometries are present in many fields of study and here we just highlight Electromagnetic Compatibility (EMC) studies in the aerospace domain. Complex bundles of thin wires and thin panels of, for example, carbon fiber laminates are routinely found within whole aircraft or equipment bays which are significantly larger in scale [5-7].

However, there is a further source of multiscale effects which is often overlooked but which can be devastating for the ability to perform an electromagnetic simulation: CAD and meshing artifices. Large-scale simulations drawing their geometrical inputs from CAD data are routinely plagued by inconsistencies such as nanoscale gaps, misalignments and overlaps. Often, these issues are not problematic for the originator of the CAD data, for example, manufacturing engineers, but their repair is a substantial time consumer for the EMC modeler.
Furthermore, once the physically significant inconsistencies have been removed, subsequent mesh generation can further contribute to the problem. Slightly misaligned features can lead to localized refinement of the mesh well beyond that demanded by the need to represent the physical phenomena. Fig. 1 shows an example of this: the slight misalignment between the core of a feeding coaxial probe and the central conductor of a Vivaldi antenna [8,9] causes exactly this situation, leading to many tiny cells appearing. As CAD repair is a labor intensive, and thus expensive, activity, there is a strong motivation to equip electromagnetic simulation methods with a robust degree of immunity to these localized effects. The work presented in this paper considerably advances a technique which has been developed for just this purpose and is based upon the Unstructured Transmission Line Modeling (UTLM) method.

TLM methods have been used for a range of physical simulations for over 40 years [2,3,10-12] and their core distinguishing feature is the mapping of electromagnetic problems onto equivalent electrical network problems. Space precludes providing a complete description of this family of approaches here, but referring to Fig. 2 we provide a brief summary of the UTLM variant. The problem space is decomposed into a Delaunay mesh [13] of tetrahedral cells. Tetrahedral cells provide smooth boundary models and permit significant mesh grading [10,14-17]. On the four triangular faces of each cell a pair of field samples is established and, by considering local analytic solutions to Maxwell's equations within the cell, it is possible to derive an admittance operator that relates the sampled electric and magnetic field values. This admittance operator permits an implementation as an electrical network node employing commensurate lengths of transmission line of different characteristic impedances.

The value of time step used by the UTLM algorithm is, as with all similar algorithms, a fundamental parameter. It is often stated that TLM is unconditionally stable, but that the time step must be smaller than a particular value. This may appear semantically no different than declaring a stability criterion. However, the key difference is that as the time step is increased, there is a cell-by-cell remedial process [11] that guarantees stability, but accuracy, and eventually physical meaningfulness, is the compromised quantity. Typically, a straightforward implementation of UTLM as described above needs to use a time step comparable to the value prescribed by the Courant condition to maintain acceptable accuracy.

A breakthrough enabler for UTLM has been the use of implicit cell clusters [11]. Facilitated by the availability of an equivalent circuit representation, it has been possible to implicitly preprocess clusters of small tetrahedral cells and extract larger scattering entities whose algorithmic implementation can be expressed in a canonical form which is good for computational efficiency. The most significant aspect of such cell clustering is that, whilst initially it does not change the overall response of the cluster, rather only its implementational detail, it does partition the cluster behavior into two distinct parts: the physically meaningful and that due to the particulars of the mesh detail.
The latter behavior, which can be regarded as sampling noise, only contributes to the dispersion errors of the simulation and, although such behavior must be present to ensure causality, it is obviously not important to respect its detail. As the time step is raised it first affects these noise terms and, only after a few orders of magnitude increase, do the physically meaningful terms become affected. Typically, clusters can be easily identified by specifying a threshold length, and adjacent cells separated by less than this amount are coalesced into a cluster. To date, clusters of many tens of tetrahedra are routinely formed in our studies and the only limit is the time required to pre-process their response, which, as shall be discussed below, practically means less than 200 tetrahedra. The time step is then set by requiring that the behavior of the remaining isolated tetrahedra, which, by definition of the threshold value, are of physically significant size, is not affected. As stated above, this often means that a time step orders of magnitude larger than the Courant condition value can be used. Finally, it is noted that clustering also provides a solution to the sliver tetrahedra problem which affects methods such as finite elements. Sliver cells are simply joined with their well-shaped neighbors to form a cluster.

Clustering has been the critical enabler for dealing with multiscale effects, both geometrical and due to mesh and CAD artifices. Unfortunately, the limit on its usefulness is the preprocessing time. Furthermore, it shall be shown below that the time stepping implementation can also benefit from a valuable reduction in complexity. The objective of this paper is to extend the scale of clustering by significantly reducing its computational requirements.

II. THEORY

Fig. 3. The clustering concept illustrated by a pair of adjacent cells whose circumcenters are separated by a small distance Δ. (a) Without clustering, the time step is typically required to remain below a value of order 2Δ/c. (b) With clustering, this time step constraint is removed.

Fig. 3a shows the canonical equivalent network associated with an adjacent pair of unclustered tetrahedral cells. The electromagnetic response of each cell comprises a superposition of the behavior of low order electric and magnetic type dipoles and multipoles [10]. The link line characteristic impedances, specifically the line inductances, have been chosen such that the magnetic dipole behavior is correctly mimicked by the network response. The link line joining the two cells is present to provide an inductance proportional to the distance between the cell circumcenters. As these centers approach each other, the time step typically needs to remain below a value of order 2Δ/c in order that the parasitic capacitance associated with the line does not become dominant. Fig. 3b shows the essence of the clustering approach: replace the link line by an inductive stub and determine an overall scatter response for the pair. Note that the network of Fig. 3b permits a component of a signal to algorithmically transit between the cell centers instantaneously, which is consistent with the proposition of their close proximity, whereas that of Fig. 3a always incurs a minimum delay time proportional to Δ, thus demanding that a small value of time step be used. The network problem of Fig. 3b can be solved efficiently by first considering the compact representation of Fig. 2c, (1), [11]. In this work, matrices are denoted by bold type.
The diagonal matrices in (1) contain values for the transmission line characteristic impedances and the area of each associated cell face, which has a normal vector n. Each characteristic impedance is proportional to η, where η = √(μ/ε) is the characteristic impedance of the material, μ and ε being the permeability and permittivity. Δ is the distance between adjacent tetrahedral circumcenters. The terms u appearing in (1) are the face-sampled constant field vectors associated with the three electric dipoles modeling the quasi-electrostatic behavior of each tetrahedron, and their orientations are determined by a small eigenvalue equation [10]. The remaining vector in (1) contains the amplitudes of the electric dipoles.

The underpinning technique used to develop the UTLM algorithm is to seek eigen-responses of (1), which are defined as pairs of vectors V and I that are related by a simple scaling factor, i.e. V = λI. The physical significance of the eigenvalues is shown in Fig. 4 after introducing the notion of incident and reflected voltages on the transmission lines of Fig. 2b, i.e. V = Vⁱ + Vʳ and ZI = Vⁱ − Vʳ. Each eigensolution provides a set of amplitudes such that, if this set of values were incident from the transmission lines onto the cell, it would reflect without distortion in shape via a simple reflection coefficient, (2). Besides such solutions, (1) is rank deficient and possesses solutions with λ = 0, I ≠ 0 and with λ = ∞. These physically correspond to the quasi-magnetostatic fields and, in a similar manner to (2), give the corresponding limiting reflection coefficients. Combining these results yields the algorithm presented in Fig. 4, which can be expressed as Vʳ = Q ρ Qᵀ Vⁱ, (4), where ρ is the diagonal matrix of reflection coefficients and Q is a unitary matrix whose columns are the solutions of (1). The time delays are implemented by using the open circuited stubs shown in Fig. 2b [11].

Clearly the practical viability of (4) as an algorithm depends upon whether Q is frequency dependent. Consider the frequency independent eigenproblem (6). It is straightforward to show that the eigenvectors of (6) are also eigenvectors of (1) and that the eigenvalues of the two problems are, to second order accuracy, the same. To extend this technique to cell clusters is straightforward with the natural generalization of (1) and (6): multiple instances of (1) are linked by the short stubs shown in Fig. 3b and eigensolutions for the whole combination are sought in a similar manner to that just described. The critical point that permits the cluster eigenvectors to remain frequency independent is that enforcement of the short circuiting of the inductive stubs in Fig. 3b is deferred until time stepping. (Otherwise selected elements of V in (1) would be 0 and this prevents use of (6) except for the particular case that the corresponding elements of z are also zero.)

Fig. 4. The cluster scattering algorithm; stub short circuits (shown in Fig. 3b) are enforced as part of the connection.

Whilst the clustering approach is very successful, at this point the scaling of its solution with the number of cells in the cluster becomes problematic and this is the focus of the remainder of this paper. Consider a cell cluster of N cells. There are 3N elements in the vector of dipole amplitudes and, for large N, 2N elements in the vector I. Direct evaluation of eigenproblem (1), which is a generalized eigenvalue problem of the form Ax = λBx, may be achieved with the QZ algorithm [18]. Iterative approaches are not appropriate as we do require all, not just selected, solutions. The solution time for the problem is of the order N³, which is untenable for large N. Moreover, (1) is not a well behaved problem. There exist solutions with λ = ∞, λ = 0, and indeterminate λ = 0/0 cases where both numerator and denominator are zero. Numerically, this severely compromises our ability to extract the solutions that are actually needed.
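To make the ill-behaved eigenvalue structure just described tangible, the following Python sketch (illustrative random matrices, not the UTLM operators) solves a generalized eigenproblem Ax = λBx with a rank-deficient B via scipy, whose generalized solver is QZ-based; infinite eigenvalues show up as near-zero β in the homogeneous (α, β) representation.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
B[:, 0] = 0.0                                   # make B singular on purpose

# Homogeneous eigenvalues: each solution is a pair (alpha, beta) with
# lambda = alpha / beta; beta ~ 0 signals lambda = infinity, and
# alpha ~ beta ~ 0 would signal the indeterminate 0/0 case.
w, _ = eig(A, B, homogeneous_eigvals=True)
alpha, beta = w

finite = np.abs(beta) > 1e-12 * np.abs(w).max()
print("finite eigenvalues:", alpha[finite] / beta[finite])
print("non-finite solutions:", np.count_nonzero(~finite))
```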
Finally, as suggested in Fig. 4, the resulting time stepping algorithm actually explicitly implements the short circuiting of all inductive stubs, each of which physically corresponds to a geometrical close proximity, and it ought to be expected that there is scope to sensibly reduce the number of degrees of freedom by seeking a reduced number of equivalent stubs that achieve the same net effect. The remainder of this section will address the following three issues: (1) robust solution of (6) for moderate sized N; (2) a complexity reduction process that reduces both the number of solution columns in (5) and also the number of inductive stubs required to be present when time stepping; (3) a hierarchical combination of the robust eigensolution and complexity reduction phases for processing very large clusters.

a. Robust Cluster Eigensolving

In (6), let the number of elements in the vector I and in the dipole amplitude vector be NI and NX respectively. The overall matrix is of size NI + NX, but no more than NI solutions with finite λ are expected due to the form of the left hand side. This can be exploited by scaling through to give the form (7), whose solutions the singular value decomposition (SVD) [18] reveals directly, (8)-(10). Unfortunately, there are physically meaningful cases where elements of both z and C are zero, which causes problems. It is commonplace in real meshes for the circumcenters of groups of tetrahedra to identically coincide (for example, the 5 or 6 constituent tetrahedra of a cuboid) and these yield z = 0 terms. Values of C = 0 can arise for extreme sliver cells and also when a cell has a coincident circumcenter with all four of its neighbors. Examination of the terms of (1) permits a tableau picture of the situation, (11), to be presented, in which the symbol * denotes unlabeled non-zero blocks and the elements 0z and 0C denote the zero values of z and C. Useful observations are that the number of nonzero elements of z is always greater than the number of nonzero elements of C, due to the origin of the terms in (1), and that the block labeled c is always 0 for similar reasons. (11) can be explicitly deflated to remove the awkward 0z and 0C terms. This can be achieved by a sequence of Householder reflections or, more robustly but less efficiently, using SVD in a similar manner [18]. For example, using Householder reflections, denoted by matrices H with corresponding upper triangular matrices denoted by R, and noting that the deflated blocks vanish, reveals a more compact problem amenable to solution via SVD as per (8) to (10), from which all other components of I and the dipole amplitudes follow. The procedure just described has been deployed and has proved remarkably robust. However, at its core are a number of SVD steps which scale cubically with size. For clusters up to 200 cells the run time is tractable; however, beyond that it becomes unacceptable.
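The deflation idea of this subsection can be sketched in a few lines of Python: given a symmetric pencil with a rank-deficient right-hand-side matrix (a toy stand-in for the zero z and C entries), an SVD separates the range and null space, and a Schur-complement step eliminates the null-space components, leaving a smaller, well-behaved problem containing only the finite eigenvalues. The matrices here are random placeholders, and symmetry plus an invertible null-space block are assumed.

```python
import numpy as np
from scipy.linalg import svd, eig, solve

rng = np.random.default_rng(0)
n, r = 8, 5
M = rng.standard_normal((n, r))
B = M @ M.T                              # symmetric, rank r: n - r infinite eigenvalues
A = rng.standard_normal((n, n))
A = A + A.T                              # symmetric left-hand side

U, s, _ = svd(B)
mask = s > 1e-12 * s.max()               # numerical rank decision
P, Z = U[:, mask], U[:, ~mask]           # orthonormal bases of range(B) and null(B)

# The null-space rows give Z.T A P y + Z.T A Z w = 0 (since B Z = 0), so the
# w components are eliminated by a Schur complement, leaving an r x r problem.
Azz = Z.T @ A @ Z
A_red = P.T @ A @ P - P.T @ A @ Z @ solve(Azz, Z.T @ A @ P)
B_red = P.T @ B @ P

lam, _ = eig(A_red, B_red)               # the finite eigenvalues of the pencil (A, B)
print(np.sort_complex(lam))
```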
b. Complexity Reduction

Referring to Fig. 4, it is clear that whilst the clustering achieves its objective of permitting larger time steps, it does not reduce the computational complexity associated with a large cluster. It has been argued that many very large clusters follow the pattern of Fig. 1: tiny cells embedded within cells of increasing size, with the largest cells often forming the outside surface of the cluster. For example, one can conceive of a single large tetrahedron which is of suitable size for a particular physical scenario, with just 8 field samples present on its outer surface, that has been subdivided internally to contain a very large number of smaller cells. The clustering approach still requires a computational overhead at run time consistent with the number of all the smaller cells, even though the outer cell only presents 8 degrees of freedom. All the many degrees of freedom associated with the tiny cells are essentially filtered down to 8 quantities and only serve to contribute subtle variations in the time response of the cell as seen by the rest of the problem. The purpose of this section is to minimize the number of degrees of freedom a cell cluster must explicitly contain whilst maintaining physical integrity.

To this end, it is helpful to distinguish the voltages and currents present in the evaluation of a cluster response (e.g. in (1) and (6)) which are on the outer surface of the cluster, and hence in contact with the external world, Vex and Iex, from those which are internal, Vin and Iin. From (1) and (4)-(6), it is possible to write the impedance relation (17). Equation (17) embodies an impedance matrix of order Nex + Nin, where Nex and Nin are the number of terms in Vex and Vin respectively. The desire is to collapse this to one of order Nex without compromising the physical significance. It is to be noted that the form and size of (17) are due to the fact that the short circuiting of the inductive stubs shown in Fig. 4 is implemented during time stepping. It is recalled that if this were not the case, the response times and, more problematically, the matrices Q used in (4) would be frequency dependent. Thus the demand for frequency independent Q is being bought at some significant cost in terms of problem size. However, in a similar vein to the introductory discussion on the clustering approach, it is pragmatic to suggest distinguishing between strong frequency dependencies attributable to physical phenomena and weak frequency dependencies that only affect the sampling noise and precise dispersion characteristics. Moreover, in the context of cell clustering for multiscale purposes, even though a cluster may contain many thousands of cells, its volume may still be relatively small and therefore this should permit exclusion of certain frequency dependencies by asymptotic arguments.

As with the clustering approach so far, the key is to manipulate the algorithm into a form which facilitates this operation. Referring to Fig. 4, it is known that the short-circuiting of the stubs during time stepping results in a behavior equivalent to imposing the inductive stub relation on the internal quantities, (18) (currents are positively oriented into the cluster). Similarly accounting for the inductance on the external link lines, combining (18) with (17) yields (20). As discussed above, (20) has a reduced dimensionality, but frequency dependent, representation. A further manipulation presents the impedance as a generalized Foster form [19], as can be clarified by a further SVD step, and corresponds to the circuit shown in Fig. 5. It is noted that the central matrix of this form is that which would arise if all the stub impedances in, for example, (1) were zero, which gives a basis for asserting that it is positive semi-definite. At this point, the separation of the strong and weak frequency dependencies can be made on physical grounds. Each of the parallel LC circuits exhibits a resonance which precludes further simplification.
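The inductance-only approximation invoked next is elementary circuit theory rather than anything specific to UTLM; written out, the impedance of each parallel LC branch is

$$Z_{LC}(\omega) = \frac{j\omega L}{1-\omega^{2}LC} = j\omega L\left(1+\omega^{2}LC+\cdots\right) \approx j\omega L \qquad \text{for } \omega \ll \frac{1}{\sqrt{LC}},$$

so that below the branch resonance the capacitance contributes only a second order correction, consistent with the frequency range argument made below.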
However, in most cases the whole cluster does not encompass sufficient geometry for any of these resonances to be physically relevant and indeed the LC circuits are well approximated by just a simple inductance over the full useful frequency range of the method. Hence each parallel LC branch may be replaced by its inductance alone. It is commented that, in the event that a number of the resonances do have a physical significance, there is no reason why selected LC circuits cannot be implemented in their complete form. This model represents the physical expectation that the quasi-electrostatic behavior of the cluster maps to a capacitive network with an inductive correction which increases as the volume of space the cluster encompasses increases.

(24) can now be compared to (1) by expressing it in the form (25), with a view to processing it via its eigensolutions to obtain a canonical implementation of the form shown in Fig. 4. However, this cannot be done immediately due to the fact that the inductance term in (25), which physically corresponds to the link line impedances, is not a diagonal matrix. One might consider seeking solutions of the generalized eigenproblem defined by this matrix pair; however, it is straightforward to show that this yields solutions such that the matrix Q in (5) is no longer unitary and hence the algorithm of (4) would be unstable. To proceed requires making a second order accurate approximation. Recall that there are two types of solutions to (25): those which lie entirely within the null space of p, corresponding to the quasi-magnetostatic case, and those which overlap the range of p, corresponding to the quasi-electrostatic case. The latter are sought using a diagonal approximation to the inductance term, i.e. from (26). Note that the values contained in this diagonal matrix now become the link line impedances of the transmission lines connecting the cluster with the rest of the problem, i.e., these link lines contribute a part of the net inductance seen looking into the whole cell cluster but are no longer associated with particular geometrical distances. Denoting the solutions to (26) accordingly, the quasi-magnetostatic solutions are defined to be orthogonal to this set and satisfy (24), leading to (27). The eigenvalue of (27) physically corresponds to the total value of inductance that each particular quasi-magnetostatic solution sees when incident on the cluster. This is partly provided by the link line impedances and the remainder is now provided by a short circuit stub. Referring to Fig. 4, this leads to an algorithm of the same form as originally developed for the cluster, except that each inductive stub no longer represents a particular geometrical feature and they are fewer in number. As before, the enforcement of the short circuiting of the inductive stubs will be deferred to time stepping, and Fig. 6 summarizes the practical algorithm.

The diagonal approximation may be selected in a number of ways; two natural choices are compared in the results below. However, the particular choice will only affect the precise nature of the second order dispersion error and there are a number of consequences to consider. For the first choice, all of the eigenvalues of (27) are greater than 1, but for other choices some drop below 1, which leads to instability. However, the smaller eigenvalues correspond to high order spatial fields and, in the context of cell clustering, may again be regarded as modeling mesh noise and thus may be legitimately approximated by setting them to 1, which also has the added attraction of removing a number of stubs from the algorithm.
Indeed, as explained in the introduction, the clustering approach already adopts the same attitude towards the high order spatial quasi-electrostatic solutions corresponding to small eigenvalues, approximating their responses in the same manner. In summary, the manipulation of this section, coupled with the physically grounded approximation of resonance phenomena that are only relevant at frequencies where dispersion noise dominates, has reduced the number of degrees of freedom from Nex + Nin to the order of Nex. In fact, this gain proves even more useful in the context of hierarchical clustering discussed in the next section.

c. Hierarchical Clustering

The foregoing analysis and the hitherto deployed scheme described in section (a) make significant use of SVD, which scales cubically with matrix size. The purpose of this section is to propose the construction of the responses of large cell clusters using a subdivision approach. It is straightforward to modify (1) or (25) to permit this: in place of the dipole related terms in (1) will appear the quantities Q and the associated eigenvalues, in the manner of (5) and (6), of the smaller constituent clusters that have already been processed. The previous section has reduced the degrees of freedom of the constituent sub-clusters to the number of exterior sample points, removing all their internal detail, which is a substantial gain.

III. RESULTS

Fig. 7. The meshed Vivaldi antenna, the full details of which are given in [8,9].

The first example presented derives from the geometry shown in Fig. 1, a compact Vivaldi antenna [8,9]. The geometry has been meshed relatively crudely as shown in Fig. 7. The antenna is of length 55 mm and so the threshold Dth controlling cluster formation is set at 0.1 µm. Table 1 shows that many small clusters are formed, which is typical. However, there are two large clusters approaching 700 cells. These can be located where the misalignment of Fig. 1 causes very fine localized meshing. The cluster with 695 cells is shown in Fig. 7c and it is seen that the outer surface of the cluster in contact with the rest of the mesh is much cruder than the highly resolved interior detail. The overall cluster fits within a cube of side 2 µm and is barely geometrically significant in the overall problem context. The cluster problem solved by (1) has 695 cells, with Nex = 112 and Nin = 2892. Note that many of the numerous faces visible in Fig. 7c are adjacent to the perfect conductor of the antenna and so define short circuit boundary conditions rather than belonging to the set of external faces.

Fig. 8a shows the eigenvalues recovered from (1). There are 2006 solutions and there is little to distinguish which are the most significant contributors to the physically meaningful behavior of the cluster. After applying the complexity reduction process using the two alternative diagonal approximations, the total number of solutions to (26) is 24 in both cases. Fig. 8b shows the associated inductive stub values recovered from (27), numbering 79 and 89 respectively. For the first choice, these remain above, albeit many approach, 1. For the second, half of the values drop below 1. If the latter set were to be used, these would be given a value of 1, which removes the need for a stub. The net value of the complexity reduction on this example is that at time stepping the size of the matrix Q in (5) is reduced from 112 × 2892 = 323904 elements to (24 + 89) × (112 + 89) = 22713, which is a factor of 14 saving in both memory and calculation time.
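A quick sanity check of the arithmetic quoted above (the figures are taken directly from the text):

```python
full = 112 * 2892                      # elements of Q before complexity reduction
reduced = (24 + 89) * (112 + 89)       # elements after complexity reduction
print(full, reduced, full / reduced)   # 323904, 22713, ~14.3
```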
The benefit of the previous example has been a significant gain at run time, albeit with increased, rather than reduced, preprocessing time. Fig. 9 shows the substantial improvement if a recursive cluster splitting policy is adopted. A threshold number of cells, NT, is defined and clusters with more than this number are split into two, each part of which is solved independently using (1), (26) and (27). The solutions from each part are then combined by a further application of (1), (26) and (27). Naturally, this split-solve-reduce-combine-reduce strategy is applied recursively.

Fig. 9a shows the preprocessing time required as the threshold level NT is varied for cell clusters of different sizes. (These are obtained by changing the clustering parameter that gave Fig. 7c and so are related. All timings in this paper were obtained using a serial code running on a Sandy Bridge CPU.) The first diagonal approximation is considered and all stub solutions to (26) and (27) are kept. A very large improvement in preprocessing time is observed, which reaches its optimum when NT ≈ 100, as shown in Fig. 9a. Below this value the ratio of Nex to Nin, which is analogous to a surface area to volume ratio, is no longer small enough to provide an advantage and the bookkeeping overhead of the recursion then causes the time to increase again. Fig. 9b plots the preprocessing time versus overall cluster size and it is clear that the approximately cubic relationship of the scheme without using recursion or complexity reduction becomes more logarithmic in nature when both are employed, and that recursive splitting without complexity reduction is of little value. Space precludes extensive explicit demonstration, but it is stated that when using complexity reduction, the recursive approach yields the same net solutions as the non-recursive case, consistent with the second order accuracy of the overall method. An overall simulation example is given below to support this contention. In this work, the splitting of clusters was performed using [20,21], seeking equal volume partitioning. There is clearly scope to further investigate both this aspect and the choice of diagonal approximation, or other possibilities. Space precludes further comment here, but it is clear from Fig. 9b that the ability to viably move to using clusters of thousands, rather than hundreds, of cells in simulations involving multiscale effects strongly motivates continued study.
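The split-solve-reduce-combine-reduce recursion can be summarized structurally in Python. Everything numerical here is a stand-in: solve_direct plays the role of the O(n³) eigensolution of (1), reduce_dofs mimics the complexity reduction of section II.b by projecting onto a quarter of the modes (standing in for the exterior sample points), and the equal-size split replaces the equal-volume mesh partitioning of [20,21].

```python
import numpy as np
from numpy.linalg import svd
from scipy.linalg import block_diag

rng = np.random.default_rng(1)
N_T = 100                                   # direct-solve threshold (cf. N_T ~ 100)

def solve_direct(n_cells):
    """Stand-in for the dense O(n^3) cluster eigensolution of (1)."""
    m = rng.standard_normal((n_cells, n_cells))
    return m + m.T

def reduce_dofs(response):
    """Stand-in for complexity reduction: project onto the dominant modes."""
    keep = max(1, response.shape[0] // 4)   # proxy for the exterior DOF count
    u, _, _ = svd(response)
    p = u[:, :keep]
    return p.T @ response @ p

def process(n_cells):
    if n_cells <= N_T:
        return reduce_dofs(solve_direct(n_cells))
    half = n_cells // 2                     # equal-size bipartition stand-in
    left, right = process(half), process(n_cells - half)
    return reduce_dofs(block_diag(left, right))   # combine halves, reduce again

print(process(800).shape)                   # far smaller than 800 x 800
```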
Fig. 10 shows a second test case arising from a study of complex wiring configurations [22]. The scale of the wiring detail is significantly smaller than that of its enclosure (not shown) and Fig. 11 shows a section through the tetrahedral mesh, which is extremely multiscale in nature and not unrealistic for aerospace scenarios. This example differs in nature to the previous one as now the multiscaling is physically intended and not due to CAD and meshing artifacts. Fig. 12 shows the numbers and sizes of the clusters formed, expressed as the number of coalesced tetrahedra, for different values of the clustering threshold, and Table 2 presents the total preprocessing time of the clusters with and without use of the hierarchical complexity reduction approach.

Table 2 shows that for all threshold values, the time required to perform the complexity reduction approximately doubles the preprocessing time. However, use of the hierarchical scheme compensates for this and, for Dth = 0.4 and 0.5 mm, provides a net improvement. Whilst these results may not appear as impressive as those for single large clusters arising from the Vivaldi antenna problem, i.e. Fig. 9, it must be recognized that the preprocessing task parallelizes perfectly on a cluster by cluster basis, so that the true impact of a few large and very slow to process clusters is that they undermine the ability to load balance the parallelization and in fact dominate the total time almost as if serial computation were used. This argument is confirmed by the case of Dth = 0.5 mm, which is significantly improved by the hierarchical scheme. In this case, there are a total of 165799 clusters but only 31 have more than 200 cells, and it is these that dominate the calculation time.

Table 2 also shows a measure of the impact of the complexity reduction on the run time performance. The accumulated size of all the matrices Q, (5), quantifies the amount of parameter (not variable) data that must be retrieved through cache at each time step. Moreover, the time to evaluate the scattering using (4) linearly depends upon the number of elements of the Q matrices. Therefore, this is a useful measure of the impact of the complexity reduction scheme on the computational efficiency of the time stepping algorithm. It is clear that the complexity reduction scheme provides a consistent and useful gain on this measure. Fig. 13 shows more detail of how the different cluster sizes contribute to both figures of merit in Table 2 and confirms that the most significant impact is for the particularly large cluster sizes. Finally, it is noted that load balancing for parallelization of the time stepping algorithm suffers from similar problems to that of the preprocessing in the presence of large clusters, so reducing the size of their Q matrices by an order of magnitude is extremely advantageous.

As this work concerns the computational efficiency of preprocessing and time stepping rather than the results of the simulations themselves, we do not show comprehensive simulation outputs due to limited space. However, it is stated that the foregoing development fully respects the second order accuracy of the UTLM technique and that the overall simulation results agree within this framework. To illustrate this point, Fig. 14b shows the return loss of a Vivaldi antenna when mounted upon a perfectly conducting plate; a configuration, shown in Fig. 14a, that has been studied more fully as part of an investigation into the coupling between airborne antennas and radomes [9], wherein the full details of the geometry and other simulation parameters are presented. With the same mesh used to generate Fig. 2 of [9], a relatively large value of Dth = 5 μm has been chosen, which causes three large clusters of 220, 453 and 528 cells to be formed, all other clusters comprising fewer than 30 cells. The two curves presented in Fig. 14b for the return loss, obtained with and without the complexity reduction approach for preprocessing the simulation, are indistinguishable. This confirms that the complexity reduction does not compromise the practical accuracy of the method. However, in this example the pre-processing time is reduced by 35% and the subsequent simulation by 8% when using the complexity reduction techniques presented in this paper. This clearly shows the dominant impact that just 3 large clusters can have and why the adoption of complexity reduction schemes is important.
IV. CONCLUSION

In this work a significant expansion of the value of cell clustering for the UTLM time domain electromagnetic solver has been demonstrated. First, a robust and efficient means of solving the clustering equations has been presented. Second, a complexity reduction technique has been used to substantially reduce the number of run time parameters involved in a simulation. This reduction has been achieved by a combination of SVD techniques and physically grounded approximations. Finally, a hierarchical method of evaluating large clusters has proved to be highly effective, extending the size of clusters that can be practically employed by an order of magnitude.
BRCA1-like signature in triple negative breast cancer: Molecular and clinical characterization reveals subgroups with therapeutic potential

Triple negative (TN) breast cancers make up some 15% of all breast cancers. Approximately 10-15% are mutant for the tumor suppressor, BRCA1. BRCA1 is required for homologous recombination-mediated DNA repair and deficiency results in genomic instability. BRCA1-mutated tumors have a specific pattern of genomic copy number aberrations that can be used to classify tumors as BRCA1-like or non-BRCA1-like. BRCA1 mutation, promoter methylation, BRCA1-like status and genome-wide expression data were determined for 112 TN breast cancer samples with long-term follow-up. Mutation status for 21 known DNA repair genes and PIK3CA was assessed. Gene expression and mutation frequency in BRCA1-like and non-BRCA1-like tumors were compared. Multivariate survival analysis was performed using the Cox proportional hazards model. BRCA1 germline mutation was identified in 10% of patients and 15% of tumors were BRCA1 promoter methylated. Fifty-five percent of tumors were classified as BRCA1-like. The functions of genes significantly up-regulated in BRCA1-like tumors included cell cycle and DNA recombination and repair. TP53 was found to be frequently mutated in BRCA1-like tumors (P < 0.05), while PIK3CA was frequently mutated in non-BRCA1-like tumors (P < 0.05). A significant association with worse prognosis was evident for patients with BRCA1-like tumors (adjusted HR = 3.32, 95% CI = 1.30-8.48, P = 0.01). TN tumors can thus be further divided into two major subgroups, BRCA1-like and non-BRCA1-like, with different mutation and expression patterns and prognoses. Based on these molecular patterns, subgroups may be more sensitive to specific targeted agents such as PI3K or PARP inhibitors.

1. Introduction

The heterogeneous nature of breast cancer, both at the histological and molecular levels, has been well documented and this information is routinely used to guide treatment decisions (Curtis et al., 2012; Dvinge et al., 2013; Perou et al., 2000; Sørlie et al., 2001; van de Vijver et al., 2002). Despite the reduced incidence of death from breast cancer overall in the last two decades in the industrialized world, certain subtypes remain difficult to treat due to limited treatment options (Hudis and Gianni, 2011). One such subtype, which makes up around 12-17% of all breast cancers, triple negative (TN) breast cancer, is characterized by low or lack of expression of estrogen (ER) and progesterone (PR) receptors and lack of human epidermal growth factor receptor 2 (HER2) overexpression (Criscitiello et al., 2012; Foulkes et al., 2010). Other than conventional chemotherapy, few treatment options are currently available for these patients (Linn and Van 't Veer, 2009). Multiple studies have shown poor recurrence-free and overall survival for patients with TN breast cancer, which tends to be aggressive and metastasize early, independent of other known breast cancer prognostic factors such as tumor size, grade and number of positive lymph nodes (Hudis and Gianni, 2011).

Depending on the ethnic background and age of the investigated cohort, around 10-15% of TN breast cancers are mutant for the tumor suppressor, BRCA1 (Foulkes et al., 2003). BRCA1-associated breast cancer displays a high frequency of TP53 mutations (Manié et al., 2009).
Furthermore, BRCA1-mutant breast cancers are commonly high-grade and most frequently classified as basal-like breast cancers, i.e. those that display basal cellular markers such as cytokeratin 5/6 (Foulkes et al., 2003). The categories of basal-like and TN do not completely overlap, although it has been previously reported that a substantial proportion of BRCA1-mutant tumors are TN, basal-like or both (Linn and Van 't Veer, 2009).

DNA double-strand breaks (DSBs), most frequently caused by UV light and metabolic processes, are repaired by several mechanisms. Homologous recombination (HR) repair is the cell's most error-free mechanism (Bouwman and Jonkers, 2012; Moynahan et al., 1999). Cells without functional BRCA1, most often through mutation and loss-of-heterozygosity or promoter methylation, are deficient in HR. These cells utilize an alternate mechanism to repair DSBs known to be highly error-prone, called non-homologous end-joining (NHEJ), which results in genomic instability (Turner et al., 2004; Wang et al., 2001). Thus, BRCA1-mutant tumors have numerous copy number aberrations (CNAs). Importantly, these tumors display a very characteristic pattern of gains and losses of genomic DNA. This specific pattern of CNAs was used to develop classifiers to identify tumors with the same pattern and to identify some sporadic, non-BRCA1-mutant tumors that have the same pattern of CNAs as BRCA1-mutated tumors. This group with the characteristic CNA pattern, larger than the BRCA1-mutant tumors alone, is referred to as BRCA1-like (Lips et al., 2011; Schouten et al., 2013; Vollebergh et al., 2011; Wessels et al., 2002). These tests assign copy number profiles to BRCA1-like or non-BRCA1-like status based on the copy number pattern alone and can be implemented on all types of copy number data, such as array data, next generation sequencing data and MLPA (multiplex ligation-dependent probe amplification) data. The BRCA1-like category, which is based only on copy number pattern, frequently includes tumors with a BRCA1 mutation (as the classifiers were trained on these samples) and tumors with BRCA1 promoter methylation. However, a large number of tumors identified by these classifiers lack an apparent defect in BRCA1 itself. A number of these classifiers are capable of predicting benefit from specific therapies regardless of mutation status, particularly those utilizing DSB-inducing agents, such as bifunctional alkylators and intensified platinum-based chemotherapy (Lips et al., 2011; Schouten et al., 2015; Vollebergh et al., 2011).

The mechanisms underlying genomic instability in TN breast cancer are complex and, although these tumors are frequently BRCA1-associated, it remains unclear what role BRCA1 deficiency may play in the process (Bouwman and Jonkers, 2012; Turner et al., 2004). In this study, we first aimed to characterize our TN cohort with respect to BRCA1 mutation, promoter methylation and BRCA1-like status. To gain insight into the mechanism that results in genomic instability in TN BRCA1-like tumors, we sought to identify genes and their functions that are differentially expressed between BRCA1-like and non-BRCA1-like tumors. In addition, we compared the mutation frequency of non-BRCA1-like and BRCA1-like tumors in a subset of DNA repair genes and PIK3CA, the second most frequently mutated gene in breast cancer besides TP53 (Cancer Genome Atlas Network, 2012). Finally, we retrospectively assessed the outcome of BRCA1-like patients in comparison to non-BRCA1-like patients.
Patient selection and characteristics
TN primary tumors were selected from tumor registration and existing study databases at both sites (biobank). The primary inclusion criterion for this RATHER TN cohort (RATHER is an EU FP7 project) was availability of sufficient frozen tissue for DNA, RNA and protein isolation for all RATHER project assays in the tissue bank. As a result the cohort may be skewed towards larger tumors. Neoadjuvant treatment was an exclusion criterion when establishing the cohort. In addition, we enriched for patients diagnosed before 1999, since in that era only node-positive patients received adjuvant systemic therapy. Premenopausal node-positive patients would receive adjuvant chemotherapy, while postmenopausal node-positive patients would receive adjuvant endocrine therapy. Only from 2000 onwards was the estrogen-receptor status taken into account when prescribing adjuvant endocrine therapy. These historical treatments mean 23/112 patients were treated with endocrine therapy despite having TN disease. The criteria allowed us to study a cohort of mainly adjuvant systemic therapy-naïve patients with long-term follow-up. We collected database information on ER, PR and HER2 status, tumor size and grade, number of tumor-positive lymph nodes, surgery and treatment information, age at diagnosis, diagnosis date as well as follow-up data. Frozen tumors with 30% or greater tumor content (based on the average score before and after sectioning, 2 × 8 µm serial sections, hematoxylin and eosin-stained (JJFM)) were used for further analyses. Formalin-fixed paraffin-embedded (FFPE) material was used to construct tissue microarrays (TMAs) for expert pathological review (KJ), which included ER, PR and HER2 status, grade and tumor percentage determination. Samples were defined as ER- or PR-positive when 10% or more of tumor cells stained positive with immunohistochemical staining. HER2 samples with intensity ≥2 were considered positive and confirmed when possible using TargetPrint (Agendia BV, Amsterdam, Netherlands) (Roepman et al., 2009). Samples with missing ER, PR or HER2 status upon review were included, as diagnostic information was originally indicative of TN status. The local medical ethical authorities of both centers approved the collection protocols.

2.2. DNA/RNA isolation
All samples were processed following one standard operating protocol to isolate high quality nucleic acids. Each frozen tumor was serially sectioned for DNA and RNA isolation (30 × 30 µm serial sections for both). DNA was isolated using the DNeasy kit for purification of total DNA from animal tissues using two spin columns (Qiagen). On each column samples were eluted twice with 100 µl volumes of buffer AE for a final volume of 400 µl. For RNA extraction, depending on the size of the tumor sample, 20-30 sections of 30 µm were used. Sections were homogenized in Qiazol (Qiagen) using a tissue lyser (Qiagen) and total RNA was isolated with Qiazol according to the manufacturer's instructions. The RNA was further purified using the RNeasy Mini Kit (Qiagen).

Microarray hybridization and analysis
The RNA quality was assessed on a 2100 Bioanalyzer (Agilent Technologies) and samples with RIN above 5 were selected for further analysis. RNA was amplified, labeled and hybridized to the Agendia custom-designed whole genome microarrays (Agilent Technologies), and raw fluorescence intensities were quantified using Feature Extraction software (Agilent Technologies) according to the manufacturer's protocols.
The microarray expression dataset was imported into R/Bioconductor software (R version 3.0.2, www.bioconductor.org) for pre-processing. Feature signal intensities were processed and extracted with the 'limma' Bioconductor R package (Bolstad et al., 2003), with background subtraction using an offset of 10. All probe intensities <1 were set as missing values. The log2-transformed probe intensities were quantile normalized using 'limma'. A principal component analysis showed a batch effect for biobank, which was adjusted for using ComBat (Johnson et al., 2007). Missing values were imputed by 10-nearest-neighbor imputation. Genes with multiple probes were summarized by the first principal component of a correlating subset. Differential analysis, clustering and visualization of the data were performed with R (version 3.1.2) using the 'heatmap.3' package with standard settings. Differential expression between classes was assessed using ANOVA in R, with significant genes selected univariately at FDR <0.001 and a fold change >1.

2.4. Capture library and next-generation sequencing and analysis
For each sample, Illumina TruSeq indexed libraries were constructed according to the manufacturer's instructions (Illumina) before enrichment by capture with a biotinylated RNA probe set targeting the human kinome and a range of cancer-related genes (Agilent Technologies). We sequenced 10-12 samples on a single Illumina HiSeq lane to generate 50, 51 or 60 bp paired-end reads. Raw sequence data were aligned to the human genome (Ensembl 37) using the Burrows-Wheeler Aligner (BWA). Single nucleotide variants and indels were called using SAMtools on uniquely aligned paired data. Matched normal germline DNA was unavailable for most samples, so we used dbSNP and variant data from the Exome Variant Server to remove potential germline variants. We further focused on variants predicted to alter protein coding sequence or splicing of genes according to the Ensembl VariantEffectPredictor and not identified in a pool of 80 normal DNAs taken from various tissues. All variants found in the COSMIC database were retained. In addition, we retained any BRCA1 or BRCA2 variants that were clinically relevant according to the Breast Cancer Information Core database (BIC, http://research.nhgri.nih.gov/bic/). Following our filtering steps, samples with variants were termed BRCA1/2-mutant. For all other genes, the same criteria were applied except the BIC database step, and the remaining variants were termed mutant. BRCA1 mutations were validated when possible by germline sequencing using the Nextera Custom Enrichment kit (Illumina) on matched normal DNA according to the manufacturer's instructions, traditional capillary sequencing, or small PCR amplicon pooling targeting the variant using Illumina TruSeq indexing.

BRCA1-like classification
We used the MLPA (multiplex ligation-dependent probe amplification) method to determine the BRCA1-like status of the tumor DNAs. The assay was performed, fragments analyzed and data normalized according to the manufacturer's protocol using the SALSA MLPA P376 BRCA1ness probemix (MRC-Holland). Class prediction (BRCA1-like/non-BRCA1-like) was carried out on the normalized data according to published instructions (Lips et al., 2011).
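The expression pre-processing described earlier in this section maps onto standard Bioconductor calls; the following is a minimal R sketch under the stated settings. The input object raw, the biobank batch vector and the brca1_like factor are hypothetical placeholders, imputation is done before batch adjustment because ComBat requires complete data, and limma's moderated test stands in for the plain ANOVA used in the paper.

    library(limma)   # background correction, quantile normalization
    library(impute)  # k-nearest-neighbor imputation
    library(sva)     # ComBat batch adjustment

    bg <- backgroundCorrect(raw, method = "normexp", offset = 10)
    bg$E[bg$E < 1] <- NA                            # probe intensities <1 set as missing
    expr <- normalizeBetweenArrays(log2(bg$E), method = "quantile")
    expr <- impute.knn(expr, k = 10)$data           # 10-nearest-neighbor imputation
    expr <- ComBat(dat = expr, batch = biobank)     # remove the biobank batch effect

    # univariate differential expression between BRCA1-like classes, FDR-controlled
    fit <- eBayes(lmFit(expr, model.matrix(~ brca1_like)))
    tab <- topTable(fit, coef = 2, number = Inf, adjust.method = "fdr")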
BRCA1 promoter methylation
Semi-quantitative BRCA1 promoter methylation was determined using the MS-MLPA (methylation-specific MLPA) method. This assay combines copy number detection with methylation-specific enzymatic restriction. The assay was performed, fragments analyzed and data normalized according to the manufacturer's protocols using the SALSA MLPA ME001 Tumour suppressor probemix 1 (MRC-Holland), and a cutoff of 20% was used to call a sample methylated.

Statistical analysis
Patient characteristics were compared between the BRCA1-like and non-BRCA1-like classes and statistical significance was examined by a Wilcoxon rank-sum test or Pearson's chi-squared test using R version 3.1.2. Survival analysis was conducted in R using the 'survival' package to employ the Cox proportional hazards model. We observed patients from the date of diagnosis until 2012 for distant recurrence-free survival and censored data in accordance with the STEEP (Standard Efficacy Endpoint) system (Hudis et al., 2007). We used only follow-up data up to 10 years. An event includes distant recurrence, death from breast cancer, death from a non-breast cancer cause and death from an unknown cause. Covariates in the Cox proportional hazards model included BRCA1-like status, patient age at diagnosis, treatment (radiotherapy/hormonal/chemotherapy), tumor size, grade and number of tumor-positive lymph nodes. We stratified all Cox models for biobank (NKI/Addenbrooke's Hospital). To assess the accuracy of the model we included a test of the proportional hazards (PH) assumption using cox.zph in the 'survival' package. The 'Mutascape' R package (manuscript in preparation) was employed to test differences in gene mutation frequency in two analyses: 1) within classes and 2) between classes (BRCA1-like and non-BRCA1-like). Within classes, a binomial test was employed to determine whether the number of mutations in a gene was greater than expected by chance. Given the total number of mutations in the dataset and the size of the gene, probabilities of occurrence were computed, which can be interpreted as the probability of a gene's mutation frequency being random (modeled by the null binomial distribution) or not. A multiple testing correction was applied to the p-values with the Benjamini-Hochberg method. The distribution of mutation locations was also determined and visualized in a bubble plot to examine mutation recurrence as well as frequency within genes. For genes identified as significantly mutated we used Fisher's exact test to compare the distribution between the BRCA1-like and non-BRCA1-like groups. The p-values were adjusted as above.
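As a rough sketch of the survival model just described, using the same 'survival' package (all data set and variable names are hypothetical placeholders for the cohort data):

    library(survival)

    # Multivariate Cox model for 10-year distant recurrence-free survival,
    # stratified by biobank as in the text
    fit <- coxph(Surv(time_10y, event) ~ brca1_like + age + size + grade +
                   pos_nodes + radiotherapy + hormonal + chemo + strata(biobank),
                 data = cohort)
    summary(fit)   # adjusted hazard ratios with 95% confidence intervals
    cox.zph(fit)   # per-covariate and global tests of the PH assumption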
Results
112 TN samples were available for further analysis based on the inclusion criteria, central pathological review of TN immunohistochemical status and availability of BRCA1-like status data. To characterize the TN cohort with respect to BRCA1 deficiency and characteristic genomic instability, we assessed the samples for BRCA1 mutation, promoter methylation and BRCA1-like status. We identified 62 of 112 as BRCA1-like tumors, 10 of 104 as BRCA1-mutated tumors (8 with missing data) and 14 of 94 (18 with missing data) as BRCA1 promoter methylated tumors (Figure 1). 'Missing data' indicates a failed experiment. We found that BRCA1 germline mutation and BRCA1 promoter methylation overlap with BRCA1-like status in 70% (7/10) and 79% (11/14) of the samples, respectively. Patient characteristics and their association with BRCA1-like status are found in Table 1. To better understand the mechanisms that can result in genomic instability, we analyzed gene expression data of the TN samples in combination with BRCA1 mutation/promoter methylation and BRCA1-like status. We aimed to explore the gene expression data for association with BRCA1-like status (top variable genes with a fold change >1, N = 3569). Figure 2A shows the unsupervised clustering of the 279 most significantly differentially expressed genes between BRCA1-like and non-BRCA1-like samples (ANOVA, FDR < 0.001, fold-change >1). We also classified our samples according to the known gene expression TN subgroups from the Lehmann group with the TNBCType tool (http://cbc.mc.vanderbilt.edu/tnbc/) using the top variable genes with a fold change >1 (N = 3569). The significantly different genes showed no specific association with the TNBCType classifications (Chen et al., 2012) (Supplementary Figure 1).

[Figure 1 legend: Each box represents one sample, with color indicating the type of data for that sample in each row: positive (black), negative (gray) and no data due to a failed experiment (white). BRCA1 mutation, promoter methylation and BRCA1-like status data were obtained for 104, 98 and 112 samples, respectively.]

Ingenuity Pathway Analysis (IPA, Ingenuity) was used to identify key biological processes regulated by the differentially expressed genes. The most down-regulated genes in BRCA1-like tumors are related to cellular maintenance and proliferation and the development of lymphocytes (Supplementary File 1). The most up-regulated genes were enriched for cell cycle and DNA replication, associated with recombination and repair, within a network centered on FOXM1 (Figure 2B). FOXM1 gene expression was significantly up-regulated in BRCA1-like samples (P < 0.001) along with CDK4 and CDK6 (P < 0.001 and P = 0.03, respectively). Although MYC has been found to be amplified in BRCA1 germline mutated tumors (Adem et al., 2004; Grushko et al., 2004) with associated over-expression (Blancato et al., 2004), we did not observe significantly increased MYC gene expression in BRCA1-like versus non-BRCA1-like tumors (P = 0.78). BRCA1-mutant versus wild type (BRCA1-like removed) showed no significant difference in MYC gene expression (P = 0.41). Using next-generation sequencing, we analyzed the exons of 21 genes known to be involved in DNA repair, as well as PIK3CA, for mutations (Supplementary File 1). Significantly mutated genes were identified taking their genomic size into account (see Methods). In both classes, TP53 was significantly more frequently mutated than expected by chance, with non-BRCA1-like tumors mutated at 50% and BRCA1-like at 84% (adjusted P < 0.001 for both) (Figure 3A and B). Only TP53 was significantly differentially mutated between classes, with BRCA1-like tumors more frequently mutated than non-BRCA1-like tumors (adjusted P = 0.002). Additionally, we investigated the type of TP53 mutations (non-truncating or truncating) identified in both classes and found a trend indicating more truncating mutations in BRCA1-like tumors than in non-BRCA1-like tumors (Figure 3B, Supplementary File 1). Interestingly, we only observed a high frequency of PIK3CA mutations in the non-BRCA1-like tumors (21%; adjusted P < 0.001). There was no evidence of significant associations of PIK3CA hotspot mutations with either of the two classes, although the numbers were very small (Supplementary File 1).
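The within-class enrichment test behind these mutation results can be sketched in base R. The 'Mutascape' package is not public, so this simply reimplements the null-binomial idea from the Methods; gene_counts and gene_sizes are hypothetical inputs, the expected probability for a gene is assumed to be its share of the sequenced footprint, and the TP53 counts are illustrative, back-derived from the reported percentages (62 BRCA1-like tumors at 84% mutant, 50 non-BRCA1-like at 50%).

    # Within-class test: is a gene mutated more often than chance, given the
    # class's total mutation count and the gene's size?
    p_gene <- function(k_mut, n_total, p0)
      binom.test(k_mut, n_total, p = p0, alternative = "greater")$p.value

    p_raw <- mapply(p_gene, k_mut = gene_counts, n_total = sum(gene_counts),
                    p0 = gene_sizes / sum(gene_sizes))
    p_adj <- p.adjust(p_raw, method = "BH")   # Benjamini-Hochberg correction

    # Between-class comparison for TP53 (mutant/wild-type by class)
    fisher.test(matrix(c(52, 10, 25, 25), nrow = 2,
                       dimnames = list(c("mut", "wt"), c("BRCA1-like", "non"))))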
We observed 33 events in 112 patients. Distant recurrence-free survival of the cohort was visualized with respect to BRCA1-like status (univariate analysis stratified for biobank) using the Kaplan-Meier method, which indicated a trend toward an association with worse outcome for BRCA1-like patients (adjusted log-rank, P = 0.08) (Figure 4). We calculated the adjusted hazard ratios in a multivariate model and found that patients with a BRCA1-like tumor had a significantly worse prognosis than patients with a non-BRCA1-like tumor (HR = 3.32, 95% CI = 1.30-8.48, P = 0.01) (Table 2). The proportional hazards assumption test was not significant (P = 0.13) in the global model, indicating the accuracy of the constructed Cox model for the dataset. The significant prognostic findings also held for breast cancer-specific survival and recurrence-free survival. The test for interaction between chemotherapy and BRCA1-like status was not significant (P = 0.75, Supplementary Table 1). Treatment data were available for all patients. Because of the significant difference in TP53 mutation frequency between classes, we also calculated adjusted hazard ratios in a multivariate model for TP53 mutation status. We observed no significant association with prognosis for TP53 mutation status (HR = 1.39, 95% CI = 0.55-3.53, P = 0.49).

Discussion
The underlying mechanisms responsible for genomic instability are complex and, although BRCA1 deficiency and genomic instability are frequently associated, the exact role of BRCA1 in the process remains elusive. To further explore the role of BRCA1 and characteristic genomic instability in TN tumors, we examined differences in mutation and gene expression patterns between BRCA1-like and non-BRCA1-like tumors, combined with BRCA1 mutation and promoter methylation data. The percentages of BRCA1 mutation, promoter methylation and BRCA1-like status indicate the group is representative of the larger TN population. In addition, although our selection criteria may have skewed the cohort toward larger tumors, we did not observe evidence for this when comparing tumor size classes with a similar previously published cohort (n = 180, Fisher's exact test, P = 0.69) (Dent et al., 2007) (data not shown). We did find that grade III tumors were more frequent in the RATHER cohort than in the Dent cohort (Fisher's exact test, P < 0.001) (data not shown). As has been reported previously, we found overlap of BRCA1-like status with most BRCA1 germline mutation and promoter methylation cases. Consistent with other reports, not all BRCA1 mutations are co-detected with the BRCA1-like assay. The relative ability to detect true signal between the mutation detection methods and the BRCA1-like status assay may be a contributing factor in these apparently discrepant samples. It is also possible that a BRCA1 germline mutation carrier may develop a non-BRCA1-like (sporadic) tumor. Our definition of mutation status is, however, imperfect, as we have no supporting functional data to determine the impact of a putative mutation on protein function. Both classes, BRCA1-like and non-BRCA1-like, each had a significantly higher frequency of TP53 mutation than expected by chance. While TN tumors are known to be enriched for TP53 mutations, which are frequently associated with BRCA1-associated breast cancer (Manié et al., 2009), we observed that TP53 was more frequently mutated in the BRCA1-like than in the non-BRCA1-like class. Although TP53 has been reported to be mutated in around 80-90% of basal-like breast cancers (Cancer Genome Atlas Network, 2012; Manié et al., 2009), this study and another using similar sequencing technology have found a slightly lower frequency in TN breast cancer (54% and 68%, respectively).
This difference may reflect the difference between basal-like and TN disease, or the fact that many of the reports of TP53 mutation in basal-like breast cancer have been carried out using capillary sequencing technology. This technology is likely to have more precision in mutation detection in low complexity regions and for insertions/deletions, but is economically unfeasible for large sequencing projects. The prognostic value of TP53 mutations in breast cancer has been found to be specific to ER-positive disease (Silwal-Pandit et al., 2014). However, the predictive capacity of TP53 mutations in breast cancer for various therapies has not been thoroughly examined. In recent findings, a combination of Chk1 inhibition with irinotecan, a DNA-damage-inducing agent, has shown promise in TN xenograft experiments in mice (Ma et al., 2012). BRCA1-like tumors may be more susceptible to such treatments due to their high frequency of TP53 mutation. It has been shown that TN breast cancer can be further subdivided into different molecular subtypes based on mRNA expression (Chen et al., 2012; Teschendorff et al., 2007; Waddell et al., 2010) and that these subtypes differ in their response to therapy (Lehmann et al., 2011; Masuda et al., 2013).

[Figure 2. A, Unsupervised clustering of the 279 top variable genes most differentially expressed between BRCA1-like and non-BRCA1-like status (ANOVA, FDR <0.001, fold-change >1) in 112 TN breast tumors. Scaled expression value is denoted as the column Z-score and plotted in a red-blue color scale; red indicates high expression and blue low expression. Information columns 1, 2 and 3 depict BRCA1-like status, BRCA1 promoter methylation status and BRCA1 mutation status, respectively. For all sample columns, assay positive is indicated by blue, negative by gray and no data due to a failed experiment by white. B, Network analysis of up-regulated differentially expressed genes, indicating level of up-regulation in BRCA1-like compared with non-BRCA1-like (red shading), direct relationships (solid lines), and indirect relationships (interrupted lines).]

We added the Lehmann TNBCType classifications to the heatmap depicting the most differentially expressed genes based on BRCA1-like status and found that the 'basal-like 1' and 'mesenchymal' categories are most associated with BRCA1-like status (Pearson's chi-squared test, P < 0.001). This indicates that BRCA1-like based gene expression patterns, which identify a different group of patients, are novel in TN breast cancer (Supplementary Figure 1). Through differential gene expression analysis, we have identified that BRCA1-like TNs may be susceptible to therapies targeting DNA repair and cell cycle pathways. The up-regulated genes in BRCA1-like tumors formed a network centered on the FOXM1 gene, which is a key regulator of cell cycle progression and DNA damage repair and has been found to be overexpressed in most human cancers (Alvarez-Fernández and Medema, 2013). This may explain the BRCA1-like status of these samples regardless of their BRCA1 mutation/promoter methylation status, as aberrant FOXM1 can lead to re-entry into the cell cycle after DNA damage-induced arrest rather than apoptosis (Alvarez-Fernández et al., 2010). Furthermore, breast cancer cell lines with FOXM1 over-expression are linked to acquired resistance to specific chemotherapeutics; these cell lines can be re-sensitized to these treatments when FOXM1 is depleted, potentially explaining the poor prognosis of BRCA1-like patients in our study (Kwok et al., 2010).
Targeting FOXM1 through CDK4/6 inhibitors has shown promising results in melanoma cell lines (Anders et al., 2011), and FOXM1 suppression increases sensitivity to certain DNA damaging agents in tumor cell lines (Kwok et al., 2010; Zhang et al., 2012). Our finding that FOXM1, CDK4 and CDK6 are highly expressed in BRCA1-like tumors is of particular interest in light of recent reports that CDK4/6 inhibitors are effective in various breast tumors (Dean et al., 2012; Finn et al., 2009). BRCA1-mutated breast tumors, which are deficient in HR-mediated DNA double-strand-break repair, are known to respond to PARP inhibitors, such as olaparib (Gelmon et al., 2011; Kaufman et al., 2014; Tutt et al., 2010). Inhibition of FOXM1 may sensitize cells to PARP inhibition, allowing for effective combination treatments. In addition, previous studies have shown that BRCA1-like breast cancer patients benefit substantially from intensified alkylating chemotherapy in comparison to those treated with conventional chemotherapy (Lips et al., 2011; Schouten et al., 2015; Vollebergh et al., 2011). Multivariate survival analysis is routinely employed in retrospective studies to determine the independent prognostic factors that reduce survival time. The Cox proportional hazards model was used to estimate the hazard for each covariable, including all potentially confounding covariables in the model. Using this analysis we identified BRCA1-like status as an independent prognostic factor, with BRCA1-like patients associated with a worse prognosis. In the multivariate Cox model, although it is not significant, there is an indication that chemotherapy treatment may be an influencing factor on prognosis when stratifying patients for BRCA1-like status (Table 2). To rule out that the prognostic association is influenced by a differential treatment effect, we employed a test for interaction between chemotherapy and BRCA1-like status. This test was not significant (P = 0.75), indicating that the differential prognosis is not influenced by chemotherapy treatment (Supplementary Table 1). In addition we tested the association of biobank with chemotherapy and found no significant association (P = 0.52). These findings suggest BRCA1-like status is a prognostic marker in TN breast cancer and is not influenced by a treatment effect in this cohort. It is important to note that we have only examined the prognostic capacity of the biomarker, BRCA1-like status, in our series, and are unable to determine its power to predict derived benefit from specific treatments such as platinum salts, as we do not have access to those data and the setting is not amenable to those analyses. It remains possible that specific treatments present in our series mask the BRCA1-like prognostic effect. To examine this, we employed multivariate survival analysis on only the patients in the series who received no adjuvant systemic therapy (no chemo-/hormonal therapy). We observe in this subgroup the same trend of BRCA1-like status being associated with a worse prognosis (HR = 6.75, 95% CI = 0.85-53.72, P = 0.07), lending further support to BRCA1-like status being an independent prognostic factor in this series (Supplementary Figure 2 and Supplementary Table 2). Both these and previous findings identified that a portion of BRCA1-like tumors are not BRCA1 mutated/promoter methylated (Vollebergh et al., 2011), indicating that alterations in another gene or genes involved in DNA repair besides BRCA1 may be associated with the characteristic genomic instability of BRCA1-like tumors.
Based on previous reports, the BRCA1-like group is likely to be susceptible to specific treatments aimed at DNA damage and cell cycle pathways, such as PARP, Chk1 or CDK4/6-FOXM1 inhibitors and/or intensified alkylating agents or platinum compounds, regardless of BRCA1 mutation or promoter methylation status. Prospective clinical trials should deliver final proof of these assumptions. Recently we found evidence that BRCA1-like breast cancer patients derived more benefit from neoadjuvant carboplatin/veliparib added to a standard regimen of doxorubicin-cyclophosphamide, followed by paclitaxel, than non-BRCA1-like patients in the I-SPY 2 trial (Glas et al., 2014). The biomarker × treatment interaction odds ratio of achieving a pathological complete remission was 9.3 (P = 0.02) with carboplatin/veliparib added to standard chemotherapy, when compared to standard chemotherapy alone, according to BRCA1-like status (Glas et al., 2014). These interesting data need confirmation in a second (neo)adjuvant trial where the addition of DNA damaging agents to standard chemotherapy has been studied. Recently, Tutt et al. presented very interesting response and progression-free survival data regarding first-line treatment of M1 TNBC patients randomized between docetaxel and carboplatin. Only BRCA1 mutation status interacted significantly with treatment, while another homologous recombination deficiency test (Myriad HRD Assay, Myriad Genetics) did not (Tutt et al., 2014), indicating that some BRCA-associated tests are not capable of predicting response in the metastatic setting. We have recently initiated a prospective randomized-controlled trial with a 2 × 2 factorial design of paclitaxel ± bevacizumab versus carboplatin-cyclophosphamide ± bevacizumab in first-line metastatic TNBC patients (NCT01898117). One of the primary endpoints of this trial is to validate BRCA1-like status as a biomarker for alkylating chemotherapy benefit. PIK3CA is a frequently mutated gene in breast cancer, although it is most often associated with luminal-type tumors rather than basal-like or TN tumors (Cancer Genome Atlas Network, 2012). Patients with tumors mutated in PIK3CA may benefit from inhibitors of the PI3K/AKT/mTOR pathway. We found the PIK3CA mutation frequency to be significantly higher than expected within the non-BRCA1-like class alone, with a frequency of 21%. This is roughly 2 times higher than previous reports in basal-like tumors (Cancer Genome Atlas Network, 2012), suggesting that non-BRCA1-like classification enriches for patients who may benefit from PI3K inhibitors. To substantiate this, a small trial with a Bayesian design would be helpful. It has been reported that not all hotspot mutations confer pathway activation as measured by downstream-activated proteins (Beelen et al., 2014). For this reason, downstream-activated proteins should be included in any trials assessing the inhibition of the PI3K pathway in PIK3CA-mutated patients. Patients with TN tumors are currently unlikely to be tested for PIK3CA mutations or pathway activation; however, our findings indicate a portion of these patients may benefit from PI3K/AKT/mTOR pathway inhibition. Importantly, the non-BRCA1-like subgroup makes up ±45% of all TNs. In summary, our data indicate one sizeable subgroup (around 9.5% of all TNs) may be more susceptible to PI3K/AKT/mTOR inhibitors, which are currently available in clinical practice but are not routinely administered to TN patients.
In addition, we have indicated a second subgroup which is more likely to respond to DNA damaging agents and putatively also to CDK4/6 inhibitors, based on the gene expression pattern and high frequency of TP53 mutations. In conclusion, while TN breast cancer currently has few treatment options, by using mutation and gene expression analysis to molecularly characterize these tumors we have identified relatively large subgroups within the subtype that may benefit from specific tailored treatments, potentially impacting a substantial portion of TN breast cancers in total.

Disclosure of potential conflicts of interest
SCL is an advisory board member for Cergentis, Novartis, Roche, and Sanofi. SCL received research support funding from Amgen, AstraZeneca, Roche, and Sanofi. SCL is named inventor on a patent application for the BRCA1-like classifier. SCL, TMS, IMS and JP are all named co-inventors on a patent application for a BRCAness gene expression classifier. All other authors have no competing interests.

Authors' contributions
TMS, JP and SCL led the analysis. TMS wrote the manuscript with contributions from all authors. IM, MM, AB, PCS, S-FC, BP, MAG, TB, RJCK, JJFM, KJ, RK, LW, RB, IMS, CC and SCL contributed to acquisition of data, data collection and analysis. JJFM and KJ provided pathological expertise. TMS, JP and SCL conceived of and designed the study. SCL and IMS provided project supervision.
Effective Lagrangian for Heavy and Light Mesons: Semileptonic Decays

We introduce an effective lagrangian including negative and positive parity heavy mesons containing a heavy quark, light pseudoscalars, and light vector resonances, with their allowed interactions, using heavy quark spin-flavour symmetry, chiral symmetry, and the hidden symmetry approach for light vector resonances. On the basis of such a lagrangian, by considering the allowed weak currents and by including the contributions from the nearest unitarity poles, we calculate the form factors for semileptonic decays of $B$ and $D$ mesons into light pseudoscalars and light vector resonances. The available data, together with some additional assumptions, allow for a set of predictions in the different semileptonic channels, which can be compared with those following from different approaches. A discussion of non-dominant terms in our approach, which attempts at including a rather complete dynamics, will however have to wait till more abundant data become available.

Introduction
In this letter we shall present an analysis of semileptonic heavy meson decays into light hadrons

    $P \to \Pi \ell \nu_\ell$   (1.1)
    $P \to \Pi^* \ell \nu_\ell$   (1.2)

($P$ = heavy pseudoscalar meson, $\Pi$ and $\Pi^*$ = pseudoscalar and vector light mesons), based on the use of heavy quark spin-flavour symmetry [1], chiral symmetry, and the hidden symmetry approach for light vector resonances. Specifically, our framework will make use of: (i) the heavy-light chiral lagrangian proposed in refs. [2] [3] [4] [5], which describes the interaction of the pseudoscalar mesons belonging to the low-lying SU(3) octet and the negative parity $J^P = 0^-, 1^-$ heavy $Q\bar q$ mesons; (ii) the introduction, through the hidden gauge symmetry approach, of the vector meson resonances belonging to the low-lying SU(3) octet within the heavy-light chiral lagrangian [6]; (iii) the inclusion of the low-lying positive parity $Q\bar q$ heavy meson states within the formalism. We shall first summarize the well known description of the interactions of heavy mesons and light pseudoscalars in terms of an effective chiral lagrangian. To such a lagrangian we shall add a term describing the octet vector meson resonances and their interactions with the heavy mesons and the light pseudoscalars. We shall then introduce the effective lagrangian containing the low-lying positive parity heavy meson states and their interactions with the light pseudoscalars and with the negative parity heavy meson states, as well as the couplings of the light vector resonances of the octet to both positive and negative parity heavy meson states. Unavoidably, such an effective description will require the introduction of a set of coupling constants. The study of the semileptonic decays (1.1) and (1.2) will be shown to yield some information on such constants. To this end one has to use all the symmetry constraints to characterize the form of the effective weak interaction of a heavy negative or positive parity meson with the light pseudoscalars and of a negative parity heavy meson with the light vector resonances. For a first numerical analysis of the semileptonic decays we shall be forced to neglect higher derivative terms, which is justified only in limited portions of phase space. After having introduced such a formal setting we shall analyze the semileptonic decays (1.1) and (1.2). Their form factors will be calculated at maximum momentum transfer and at leading order in the inverse of the heavy quark mass by including the contributions of the nearest low-lying poles.
The heavy-light chiral lagrangian
To be self-contained and to establish the notations we shall start by reviewing the description of heavy mesons and light pseudoscalars by effective field operators and of their effective chiral lagrangian. Negative parity heavy $Q\bar q_a$ mesons are represented by fields described by a 4 × 4 Dirac matrix

    $H_a = \frac{1+\slashed{v}}{2}\,\left[ P^{*\mu}_a \gamma_\mu - P_a \gamma_5 \right]$

Here $v$ is the heavy meson velocity, $a = 1, 2, 3$ (for $u$, $d$ and $s$ respectively), and $P^{*\mu}_a$ and $P_a$ are annihilation operators normalized as follows:

    $\langle 0 | P_a | Q\bar q_a (0^-) \rangle = \sqrt{M_H}, \qquad \langle 0 | P^{*\mu}_a | Q\bar q_a (1^-) \rangle = \epsilon^\mu \sqrt{M_H}$

with $v_\mu P^{*\mu}_a = 0$ and $M_H = M_P = M_{P^*}$, the supposedly degenerate meson masses. Also $\slashed{v} H = -H \slashed{v} = H$ and $\bar H \slashed{v} = -\slashed{v} \bar H = \bar H$, where $\bar H = \gamma^0 H^\dagger \gamma^0$. The pseudoscalar light mesons are described by $\xi = \exp(i\mathcal{M}/f_\pi)$, where $\mathcal M$ is the hermitian 3 × 3 octet matrix

    $\mathcal M = \begin{pmatrix} \sqrt{\tfrac12}\,\pi^0 + \sqrt{\tfrac16}\,\eta_8 & \pi^+ & K^+ \\ \pi^- & -\sqrt{\tfrac12}\,\pi^0 + \sqrt{\tfrac16}\,\eta_8 & K^0 \\ K^- & \bar K^0 & -\sqrt{\tfrac23}\,\eta_8 \end{pmatrix}$   (2.6)

and $f_\pi = 132\,$MeV. Under the chiral symmetry the fields transform as follows:

    $\Sigma \to g_L\, \Sigma\, g_R^\dagger, \qquad \xi \to g_L\, \xi\, U^\dagger = U\, \xi\, g_R^\dagger, \qquad H_a \to H_b\, U^\dagger_{ba}$

where $\Sigma = \xi^2$, $g_L, g_R$ are global SU(3) transformations and $U$ is a function of $x$, of the fields and of $g_L$, $g_R$. The lagrangian describing the fields $H$ and $\xi$ and their interactions, under the hypothesis of chiral and spin-flavour symmetry and at the lowest order in light meson derivatives, is

    $\mathcal L_0 = \frac{f_\pi^2}{8}\,\langle \partial^\mu \Sigma\, \partial_\mu \Sigma^\dagger \rangle + i\,\langle H_b\, v^\mu \left( \partial_\mu \delta_{ba} + \mathcal V_{\mu\,ba} \right) \bar H_a \rangle + i g\,\langle H_b\, \gamma_\mu \gamma_5\, \mathcal A^\mu_{ba}\, \bar H_a \rangle$   (2.11)

where $\langle \ldots \rangle$ means the trace, and

    $\mathcal V^\mu = \frac12 \left( \xi^\dagger \partial^\mu \xi + \xi\, \partial^\mu \xi^\dagger \right), \qquad \mathcal A^\mu = \frac12 \left( \xi^\dagger \partial^\mu \xi - \xi\, \partial^\mu \xi^\dagger \right)$

Besides the chiral symmetry, which is manifest from the transformation properties above, the lagrangian (2.11) possesses the heavy quark spin symmetry $SU(2)_v$, which acts as $H_a \to S H_a$, with $S \in SU(2)_v$ and $[S, \slashed v] = 0$. Explicit symmetry breaking terms can also be introduced, by adding to $\mathcal L_0$ the extra piece (at the lowest order in $m_q$ and $1/m_Q$):

    $\mathcal L_1 = \lambda_0\, \langle m_q \Sigma + m_q \Sigma^\dagger \rangle + \lambda_1\, \langle H_b \left( \xi m_q \xi + \xi^\dagger m_q \xi^\dagger \right)_{ba} \bar H_a \rangle + \frac{\lambda_2}{m_Q}\, \langle H_a\, \sigma^{\mu\nu}\, \bar H_a\, \sigma_{\mu\nu} \rangle$   (2.17)

The last term in the previous equation induces a mass difference between the states $P$ and $P^*$ contained in the field $H$, such that

    $\Delta M = M_{P^*} - M_P = -\frac{8 \lambda_2}{m_Q}$   (2.18)

The preceding construction can be found for instance in the paper by Wise [2], and we have used the same notations.

Introduction of light vector resonances
The vector meson resonances belonging to the low-lying SU(3) octet can be introduced by using the hidden gauge symmetry approach [6] (for a different approach see [7]). The new lagrangian containing these particles, to be added to $\mathcal L_0 + \mathcal L_1$, is the hidden gauge symmetry lagrangian of ref. [6], eq. (3.1), in which $F_{\mu\nu}(\rho) = \partial_\mu \rho_\nu - \partial_\nu \rho_\mu + [\rho_\mu, \rho_\nu]$ and $\rho_\mu$ is defined as $\rho_\mu = i\,\frac{g_V}{\sqrt 2}\,\hat\rho_\mu$, with $\hat\rho$ a hermitian 3 × 3 matrix, analogous to (2.6), containing the light vector mesons $\rho^{0,\pm}$, $K^*$, $\omega_8$. $g_V$, $\beta$ and $a$ are coupling constants; by imposing the two KSRF relations [6] one obtains $a = 2$. We note that the quartic term in the heavy fields $H$ in (3.1) is added in order to recover the simple lagrangian $\mathcal L_0$ in the formal limit $m_\rho \to \infty$, when the $\rho$ field decouples.

Inclusion of positive parity heavy mesons
For our subsequent analysis of the heavy meson semileptonic decays we shall have to introduce the low-lying positive parity $Q\bar q_a$ heavy meson states. For p waves ($l = 1$), the heavy quark effective theory predicts two distinct multiplets, one containing degenerate $0^+$ and $1^+$ states, the other one comprising a $1^+$ and a $2^+$ state [8], [9]. In matrix notation they are described respectively by [10]

    $S_a = \frac{1+\slashed v}{2}\left[ P^{\prime\,\mu}_{1a}\,\gamma_\mu\gamma_5 - P_{0a} \right]$

and

    $T^\mu_a = \frac{1+\slashed v}{2}\left\{ P^{*\mu\nu}_{2a}\,\gamma_\nu - \sqrt{\tfrac32}\; P_{1a\,\nu}\,\gamma_5\left[ g^{\mu\nu} - \frac{\gamma^\nu (\gamma^\mu - v^\mu)}{3} \right] \right\}$

The two multiplets have $s_l = 1/2$ and $s_l = 3/2$ respectively, where $s_l$ is the angular momentum of the light degrees of freedom, which is conserved together with the heavy quark spin $s_Q$ in the infinite quark mass limit because $J = s_l + s_Q$. The lagrangian containing the fields $S_a$ and $T^\mu_a$, as well as their interactions with the Goldstone bosons and the fields $H_a$, has been derived in ref. [10]. Notice that a mixing term between the $S$ and $T^\mu$ fields is absent at the leading order. Indeed, saturating the $\mu$ index of $T^\mu$ with $v_\mu$ or $\gamma_\mu$ gives a vanishing result, and derivative terms are forbidden by the reparametrization invariance [10].
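As a quick check of the conventions reconstructed above: with $v^2 = 1$, $v\cdot P^* = 0$ (so that $\slashed P^{*}\slashed v = -\slashed v\,\slashed P^{*}$, writing $\slashed P^* \equiv P^{*\mu}\gamma_\mu$) and $\gamma_5\slashed v = -\slashed v\gamma_5$, the constraints quoted after the definition of $H_a$ follow in one line:

    $\slashed v\, H = \slashed v\,\frac{1+\slashed v}{2}\,\big(\slashed P^* - P\gamma_5\big) = \frac{\slashed v + 1}{2}\,\big(\slashed P^* - P\gamma_5\big) = H$

    $H\, \slashed v = \frac{1+\slashed v}{2}\,\big(\slashed P^* - P\gamma_5\big)\,\slashed v = -\,\frac{1+\slashed v}{2}\,\slashed v\,\big(\slashed P^* - P\gamma_5\big) = -H$

and similarly for $\bar H$.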
We add here the couplings of the light vector meson resonances to the positive and negative parity states (eq. (4.11)). We shall see in the following that some information on the coupling constants $g$, $\mu$, $\lambda$ and $\zeta$ can be obtained from the analysis of the semileptonic decays (1.1) and (1.2).

Weak currents
At the lowest order in derivatives of the pseudoscalar couplings and in the symmetry limit, weak interactions between light pseudoscalars and a heavy meson are described by the weak current [2]:

    $L^\mu_a = \frac{i\alpha}{2}\,\langle \gamma^\mu (1 - \gamma_5)\, H_b\, \xi^\dagger_{ba} \rangle$   (5.1)

where $\alpha$ is related to the pseudoscalar heavy meson decay constant $f_H$, defined by

    $\langle 0 | \bar q_a \gamma^\mu \gamma_5 Q | P_a(v) \rangle = i\, f_H\, M_H\, v^\mu$   (5.2)

as follows:

    $\alpha = f_H \sqrt{M_H}$   (5.3)

We can in a similar way introduce the current describing the weak interactions between the pseudoscalar Goldstone bosons and the positive parity $S$ fields:

    $\hat L^\mu_a = \frac{i\hat\alpha}{2}\,\langle \gamma^\mu (1 - \gamma_5)\, S_b\, \xi^\dagger_{ba} \rangle$   (5.4)

and the current by which the $H$ fields interact with the light vector mesons (eq. (5.5)). All these currents transform under the chiral group similarly to the quark current $\bar q \gamma^\mu (1 - \gamma_5) Q$, i.e. as $(3_L, 1_R)$. We also observe that there is no similar coupling between the fields $T^\mu$ and $\xi$. Indeed, (5.1) and (5.4) also describe the matrix element between the meson and the vacuum, and this coupling vanishes for the $1^+$ and $2^+$ states having $s_l = 3/2$. This can be proved explicitly by considering the current matrix element (5.6) of $A^\mu = \bar q_a \gamma^\mu \gamma_5 Q$: using the heavy quark spin symmetry and the methods of the first two papers in ref. [1], (5.6) turns out to be proportional to the matrix element of the vector current between the vacuum and the $2^+$ state, which vanishes.

Semileptonic decays
Let us first consider the decay (1.1). The hadronic matrix element can be written in terms of the form factors $F_0$, $F_1$ as follows ($q = p - p'$):

    $\langle \Pi(p') | V^\mu | P(p) \rangle = \left[ (p + p')^\mu - \frac{M_P^2 - m_\Pi^2}{q^2}\, q^\mu \right] F_1(q^2) + \frac{M_P^2 - m_\Pi^2}{q^2}\, q^\mu\, F_0(q^2)$   (6.1)

The form factors $F_0$ and $F_1$ take contributions, in a dispersion relation, from the $0^+$ and $1^-$ meson states respectively. We notice here that, working at the leading order in $1/m_Q$, the possible parametrizations of the weak current matrix element are not all equivalent. Computed in the heavy meson effective theory, the matrix element of eq. (6.1) reads:

    $\langle \Pi(p') | V^\mu | P(p) \rangle = \sqrt{M_H}\, \left[ A\, v^\mu + B\, p'^\mu \right]$   (6.2)

with $A$ and $B$ both scaling as constants in the large $M_H$ limit at fixed $v\cdot p'$ (i.e. near $q^2 = q^2_{max}$, where the theory should provide for a better approximation). The factor $\sqrt{M_H}$, which gives rise to this scaling behaviour, comes just from the wave function normalization of the $P$ operator, and no other explicit factor $M_H$ appears in the heavy meson effective field theory. If one introduces the usual form factors $f_+$ and $f_-$ through the following decomposition:

    $\langle \Pi(p') | V^\mu | P(p) \rangle = f_+(q^2)\,(p + p')^\mu + f_-(q^2)\, q^\mu$   (6.3)

one has the relations:

    $f_\pm = \frac12 \left( \frac{A}{\sqrt{M_H}} \pm \sqrt{M_H}\, B \right)$

It would seem consistent at this point to throw away the terms proportional to $A$, obtaining

    $f_\pm = \pm \frac{\sqrt{M_H}}{2}\, B$   (6.6)

which however does not reproduce the original expression of the matrix element. This is a clear contradiction, since the two terms on the right hand side of eq. (6.2) scale in the same fashion. On the other hand, by making use of the decomposition of eq. (6.1) and working at the leading order we find:

    $F_0 = \frac{A}{\sqrt{M_H}}, \qquad F_1 = \frac{\sqrt{M_H}}{2}\, B$

which, inserted back in eq. (6.1), fully reproduces the matrix element given in eq. (6.2). The previous example shows that one must be very careful in the definition of the form factors when working at the leading order in $1/m_Q$ in the heavy meson effective field theory. Using the previous lagrangians (2.11), (4.6) and the currents (5.1), (5.4) we obtain, at the leading order in $1/m_Q$ and at $q^2 = q^2_{max}$, the results (6.7) and (6.8). The r.h.s. of (6.7) and the first term in (6.8) arise from polar diagrams.
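A short check of the matching just described, under the conventions reconstructed above (with $p = M_H v$, and using $(M_P^2 - m_\Pi^2)/q^2 \to 1$ in the heavy mass limit at fixed $v\cdot p'$): substituting the leading-order $F_0$, $F_1$ into (6.1) gives

    $\left[(p+p')^\mu - q^\mu\right] F_1 + q^\mu F_0 = 2 F_1\, p'^\mu + F_0 \left( M_H v^\mu - p'^\mu \right) = \sqrt{M_H}\left[ A\, v^\mu + B\, p'^\mu \right] + O\!\left( \frac{1}{\sqrt{M_H}} \right)$

which is eq. (6.2) up to subleading terms, whereas the truncated $f_\pm$ of (6.6) would drop the $A\,v^\mu$ piece entirely.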
Finally, $k^\mu$ is the residual momentum, related to the physical momenta by $k^\mu = q^\mu - M_H v^\mu$ (so that $v\cdot k = v\cdot q - M_H$). A similar analysis can be performed for the semileptonic decay process (1.2) of a heavy pseudoscalar meson $P$ with a light vector $\Pi^*$ particle in the final state. The current matrix element is expressed in terms of the form factors $V$, $A_0$, $A_1$ and $A_2$ in eq. (6.9). Notice that the tensor structures given in square brackets in eq. (6.9) have vanishing divergence and are constant in the limit of infinite $M_H$. Such a decomposition satisfies the same properties discussed above for the form factors $F_0$ and $F_1$. In a dispersion relation the form factor $V(q^2)$ takes contributions from $1^-$ particles, $A_0(q^2)$ from $0^-$ particles and $A_j(q^2)$ ($j = 1, 2$) from $1^+$ states. Using the lagrangians (3.1) and (4.11) and the currents (5.1), (5.4) and (5.5) we get, at $q^2 = q^2_{max}$ and at leading order in $1/m_Q$, the results (6.11)-(6.14), where $\delta m'$ arises from the chiral breaking terms of eq. (2.17). The first term in (6.12) and the last one in (6.14) arise from the direct coupling between the heavy meson $H$ and the $1^-$ light resonances of eq. (5.5); the other terms come from polar diagrams.

Numerical analysis
The results (6.7), (6.8) and (6.11)-(6.14) are obtained in the chiral limit and for $m_Q \to \infty$; therefore they should apply (with non-leading corrections) to the decays $B \to \pi \ell \nu_\ell$ or $B \to \rho \ell \nu_\ell$. Unfortunately, for those decays there are not sufficient experimental results that could be used to determine the various coupling constants appearing in the final formulae. On the other hand, for $D$ decays the experimental information is much more detailed and we could tentatively try to use it to fix the constants as well as to make predictions for the other decays which have not been measured yet. In order to make contact with the experimental data, we have to know the behaviour of the form factors with $q^2$. Except for the direct terms in (6.8), (6.12) and (6.14), all the contributions we have collected arise from polar diagrams, which suggests a simple pole behaviour. This is also the assumption usually made in the phenomenological analyses of $D$ semileptonic decays. Therefore we have assumed for the form factors $F_1(q^2)$, $V(q^2)$, $A_1(q^2)$ and $A_2(q^2)$ (the form factors $F_0(q^2)$ and $A_0(q^2)$ are not easily accessible to measurement, since they appear in the width multiplied by the lepton mass) the generic formula

    $F(q^2) = \frac{F(0)}{1 - q^2 / M_{Pole}^2}$   (7.1)

For the pole masses we use the inputs in Table I [11], which also agree with the masses fitted by the experimental analyses of $D$ decays [12]. For the $D \to \pi$ semileptonic decay one thus gets, from (6.7) and (7.1), the result (7.2). For $f_D$ we use the value suggested by lattice QCD [13] and by QCD sum rules analyses [14], $f_D = 200\,$MeV. Let us now turn to semileptonic decays into vector mesons. The experimental inputs we can use are from $D \to K^* \ell \nu_\ell$ and are collected in eq. (7.5); they are averages between the data from the E653 [17] and E691 [18] experiments. The calculated weak couplings at $q^2 = 0$ are given in eqs. (7.6)-(7.8). Taking $f_D = 200\,$MeV, from eqs. (7.5), (7.6) and (7.8) we obtain:

    $|\lambda| = 0.60 \pm 0.11\ \mathrm{GeV}^{-1}$   (7.9)
    $\hat\alpha \mu = -0.06 \pm 0.02\ \mathrm{GeV}^{3/2}$   (7.10)

By using the result $\hat\alpha = 0.46 \pm 0.06\ \mathrm{GeV}^{3/2}$ from QCD sum rules [19], one obtains the value of $\mu$, eq. (7.11). For the $A_1$ coupling the experimental data do not allow a separate determination of $\alpha_1$ and $\zeta$. However, we notice that the combination of the two entering $A_1$, eq. (7.12), is almost flavour independent and, at leading order in the $1/M_Q$ expansion, is scaling invariant.
From the $D \to K^*$ data given in eq. (7.5) we determine this combination. We can now give predictions for the processes which the heavy quark and chiral symmetries relate to $D \to \pi$ and $D \to K^*$. Concerning $D \to \pi$, we can use eq. (7.3) together with the value of the constant $g$ given in eq. (7.4) to derive $F_1(0)$ for the various decays. Taking $f_B = f_D = 200\,$MeV we obtain the results given in Table II. Notice that, by using eq. (7.3), we are implicitly accounting for the large corrections to the leading-order scaling relation $f_B = f_D \sqrt{M_D / M_B}$ which are implied by lattice QCD and QCD sum rules results [19]. Had we insisted on using the leading order expression of our computation, eq. (7.2), we would have obtained the results shown in parentheses in Table II, by fixing from the $D \to \pi$ data the product $g\alpha$. These results agree with the previous ones in the $D$ sector, but they obviously disagree for the $B$, predicting partial widths which are smaller by almost a factor of 3. For the decays which are related to $D \to K^*$ the situation is more complex. We have not determined all the relevant couplings of the effective lagrangian from the $D \to K^*$ data. In particular, we have determined only a combination of $\alpha_1$ and $\zeta$, called $\alpha_{eff}$ and given in eq. (7.12). In the expression of $V(0)$ we shall still choose $f_D = f_B = 200\,$MeV, in agreement with the lattice and sum rules calculations. This approach leads to the results given in Table III, expressed as predictions for the transverse, longitudinal and total widths $\Gamma_T$, $\Gamma_L$ and $\Gamma$. For comparison we have also displayed in parentheses the results obtained by working strictly at the leading order in $1/m_Q$, avoiding the identification $\alpha = f_D \sqrt{M_D}$ and fitting from the $D \to K^*$ data the combinations $\lambda\alpha$, $\alpha_{eff}$ and $\hat\alpha\mu$. The predictions for the form factors $A_1(0)$ and $A_2(0)$ are in this case the same and, as a consequence, the predicted values of $\Gamma_L$ coincide for all the considered decays. On the other hand, $V(0)$ for the $B$ decays is smaller if computed at the leading order. This implies a transverse width $\Gamma_T$ smaller by a factor two and a total width $\Gamma$ smaller by about a factor 1.6. It is curious to observe that the leading order results could have been obtained in a model independent way by assigning, in the parametrization of the matrix element, the scaling behaviour of the various form factors. For instance, for the $D \to K^*$ process we can write the form factors at $q^2 = q^2_{max}$ in terms of quantities $v$, $a_1$ and $a_2$ that stay constant as $M_D$ grows. This behaviour simply follows from the definitions of $V$, $A_1$ and $A_2$, and from the fact that the matrix element $\langle K^* | J^\mu | D \rangle$ scales as $\sqrt{M_D}$. The above relations are valid at $q^2 = q^2_{max} = (M_D - M_{K^*})^2$ and they should be appropriately modified at $q^2 = 0$. To do so we assume a simple polar behaviour for the form factors. Notice that the quantities $v$, $a_1$ and $a_2$ will in general depend on $M_D$, $M_{K^*}$ and the relevant pole mass $M_{Pole}$, with the restriction that they should be constant in the large $M_D$ limit. At $q^2_{max}$ the polar behaviour provides a factor

    $\left( 1 - \frac{q^2_{max}}{M_{Pole}^2} \right)^{-1}$

This factor exhibits a certain flavour dependence, which we may account for by incorporating it in $v$, $a_1$ and $a_2$:

    $v = \hat v \left( 1 - \frac{q^2_{max}}{M_{Pole}^2} \right)^{-1}$

and similarly for $a_1$, $a_2$. We can assume that $\hat v$, $\hat a_1$ and $\hat a_2$ are approximately flavour independent. In this way we obtain the corresponding expressions for the form factors at $q^2 = 0$. The constants $\hat v$, $\hat a_1$ and $\hat a_2$ are determined by the data for $D \to K^*$ given in eq. (7.5). A comparison with our model shows that the two procedures match; therefore the predictions obtained from this scaling argument coincide with those obtained at leading order from the effective lagrangian.
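To make the pole extrapolation explicit (a worked illustration using world-average meson masses rather than the figures of Table I): inverting (7.1) at $q^2 = q^2_{max}$ gives the value at zero momentum transfer,

    $F(0) = F(q^2_{max})\left( 1 - \frac{q^2_{max}}{M_{Pole}^2} \right)$

For $D \to \pi$, with the $1^-$ pole at the $D^*$ ($M_{Pole} \simeq 2.01$ GeV) and $q^2_{max} = (M_D - m_\pi)^2 \simeq (1.87 - 0.14)^2 \simeq 3.0\ \mathrm{GeV}^2$, the suppression factor is $1 - 3.0/4.0 \simeq 0.26$, so the form factor at $q^2 = 0$ is roughly a quarter of its value at zero recoil.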
In Table IV we compare our results, for $f_D = f_B = 200\,$MeV, with other existing calculations. The comparison is made for the ratios of the form factors at $q^2 = 0$ to the corresponding form factors for the $D$ meson, from which we have fixed our parameters.

Conclusions
The semileptonic decays of a heavy pseudoscalar meson into a light pseudoscalar or into an octet vector resonance have been studied with our effective lagrangian, by including the allowed direct couplings and the lowest contributing poles. The formalism can be reliable only at $q^2_{max}$ and at leading order in $1/m_Q$. Most of the experimental information is available only for $D$ decays. To extract information at other momentum transfers one has to assume generic pole extrapolations; in this we follow the experimental phenomenological analyses. From the present data on the semileptonic decay of $D$ into a pion, and by using $f_D = 200$ MeV, we can extract for the coupling constant $g$, appearing in the effective chiral coupling of pseudoscalars with heavy mesons, a value $|g| = 0.61 \pm 0.22$, in agreement with those obtained from radiative $D^*$ decays (and also from the decay into $K$). We can then try to predict the branching ratios for the related decays $D \to K$, $D \to \eta$, $D_s \to \eta$, $D_s \to K$, $B \to \pi$, $B_s \to K$, as shown in Table II. A similar analysis for the decays related to $D \to K^*$ through heavy quark and chiral symmetries requires additional assumptions to arrive at the predictions shown in Table III for the transverse, longitudinal and total widths. For the $D$ decays in that table one can develop a scaling argument leading essentially to the same predictions. On the other hand, the numerical estimates for $B \to$ vector resonance obtained with the dynamical model based on the effective lagrangian differ considerably from those of such a scaling argument, as also shown in Table III. Our predictions can be compared with those of other calculations in the literature. The comparison can be made in terms of the ratios of the form factors at vanishing momentum transfer to the corresponding $D$ meson form factors used to fix the parameters. Significant differences are noticed among different models, which shows the still uncertain status of the theory. The theoretical analysis we have presented here is based on a dynamically structured approach, using an effective lagrangian including the presumably relevant degrees of freedom. The existing data would not leave much space for a more accurate treatment including non-leading contributions. Under such a limitation the model allows for predictions for the $D \to$ pseudoscalar semileptonic decays, for $D \to$ light vector resonance and, probably with some more uncertainty, for $B \to$ light vector resonance semileptonic decays. The present status of the subject, in particular the still insufficient experimental data, does not yet allow for a more complete theoretical approach of a precision comparable to that of low energy applications of chiral lagrangians [26]. In this sense the calculations presented here are to be considered as still exploratory. Additional experimental data would greatly help in a better determination of the parameters of the effective lagrangian proposed here and in testing the possible necessity of non-leading corrections.

Table I. Pole masses for the different states. Units are GeV.

Table II. Predictions for semileptonic $D$ and $B$ decays into a pseudoscalar meson. We have neglected $\eta-\eta'$ mixing. The branching ratios and the widths for $B$ must be multiplied by $|V_{ub}/0.0045|^2$.
In the first column $f_D = f_B = 200$ MeV is assumed. In parentheses the leading order result is assumed, i.e. $f_B / f_D = \sqrt{M_D / M_B}$. We also assume $\tau_{B_s} = \tau_{B^0} = \tau_{B^+} = 1.29$ ps.
Dietary Patterns and Long-Term Outcomes in Patients with NAFLD: A Prospective Analysis of 128,695 UK Biobank Participants

Large longitudinal studies exploring the role of dietary patterns in the assessment of long-term outcomes of NAFLD are still lacking. We conducted a prospective analysis of 128,695 UK Biobank participants. Cox proportional hazards models were used to estimate the risk associated with two types of dietary patterns for long-term outcomes of NAFLD. During a median follow-up of 12.5 years, 1925 cases of end-stage liver disease (ESLD) and 12,466 deaths occurred in patients with NAFLD. Compared with patients in the lowest quintile, those in the highest quintile of the diet quality score had lower risks of ESLD and all-cause mortality (HRQ5vsQ1: 0.76, 95% CI: 0.66-0.87, p < 0.001; HRQ5vsQ1: 0.84, 95% CI: 0.79-0.88, p < 0.001, respectively). NAFLD patients with a high-quality carbohydrate pattern carried a 0.74-fold risk of ESLD and a 0.86-fold risk of all-cause mortality (HRQ5vsQ1: 0.74, 95% CI: 0.65-0.86, p < 0.001; HRQ5vsQ1: 0.86, 95% CI: 0.82-0.91, p < 0.001, respectively). For prudent dietary patterns rich in vegetables, fruits and fish, the adjusted HRQ5vsQ1 (95% CI) was 0.87 (0.76-0.99) for ESLD and 0.94 (0.89-0.99) for all-cause mortality of NAFLD patients. There was a U-shaped association between the meat-rich dietary pattern and all-cause mortality in patients with NAFLD. These findings suggest that a diet characterized by high quality and high intakes of vegetables, fruits, fish and whole grains, as well as an appropriate intake of meat, was associated with a lower risk of adverse outcomes of NAFLD.

Introduction
Nonalcoholic fatty liver disease (NAFLD) is a very prevalent but widely underappreciated liver disease that is closely related to other metabolic disorders and has therefore been called metabolic dysfunction-associated fatty liver disease (MAFLD) [1]. The global prevalence of NAFLD is estimated to be 32.4% [2]. The dramatically increasing disease burden of NAFLD has been propelled by the progressively severe health effects of obesity and type 2 diabetes [3]. NAFLD covers a range of liver conditions, from simple steatosis to steatohepatitis and fibrosis, the latter of which carries a higher risk of developing end-stage liver disease [4]. The incidence rate of hepatocellular carcinoma parallels the severity of NAFLD, rising from 0.15 to 14.46 and 19.13 per 1000 person-years for steatosis, fibrosis and cirrhosis, respectively [5]. Moreover, NAFLD patients have a significantly increased risk of overall mortality, and this risk changes in tandem with the histological stage of NAFLD [6]. For the majority of patients, NAFLD is a benign condition [7]. However, advanced liver disease is usually diagnosed late, and interventions at this stage are less effective than earlier treatments [8]. A critical challenge is to identify NAFLD patients at higher risk of progressive liver disease so that early interventions can be targeted to those most in need [9]. A lifestyle modification focused on diet and exercise is the first-line treatment for NAFLD [10]. Dietary intervention has raised great interest around the world. Previous studies have reported significant relationships between individual foods and NAFLD: for example, red meat consumption [11] is positively associated with the risk of NAFLD, while yogurt [12] and soy milk [13] are inversely associated with it.
Nevertheless, one does not eat a single nutrient but a complex mixture of foods which interact with each other [14]. Exploring the separate effects of isolated nutrient components does not represent well the dietary habits of the real world. Dietary patterns, by contrast, take the contributions of various aspects of foods into account and therefore more closely reflect the habitual diet in real-life settings [15]. Dietary patterns are usually derived by two methods: the a priori and the a posteriori method [16]. The a priori approach is based on hypotheses, derived from dietary guidelines, about whether foods are favorable or unfavorable. The a posteriori approach is an exploratory analysis that accounts for variation in the habitual intake of a specific population. To date, only a few small cross-sectional studies have investigated the relationship of dietary patterns with NAFLD risk [17]. Previous studies showed that a high diet quality, assessed by the alternate healthy eating index (AHEI), was inversely associated with hepatic steatosis [18]. In addition, a prudent dietary pattern was associated with an odds ratio of 0.78 for NAFLD, whereas the Western dietary pattern was associated with a 1.56-fold increased risk of NAFLD [17]. Large longitudinal studies exploring the role of dietary patterns in the assessment of the long-term outcomes of NAFLD, for instance cirrhosis, liver cancer, end-stage liver disease and mortality, are still lacking. Such an investigation may provide more evidence for risk stratification of NAFLD patients and for the early identification of, and intervention in, NAFLD patients with a poor prognosis. In this study, we examined the association between dietary patterns and ESLD and mortality in NAFLD patients, considering (i) an a priori dietary pattern based on recent dietary priorities for cardiometabolic health [19] and (ii) a posteriori dietary patterns created by principal component analysis. The combination of a priori and a posteriori patterns may provide a more complete picture of the relation of diet with long-term outcomes of NAFLD.

Study Population
The UK Biobank recruited 502,386 participants aged 37 to 73 years from 22 assessment centers throughout the UK between 2007 and 2010. At baseline, the participants were required to complete a touchscreen questionnaire and a verbal interview, undergo physical measurements and provide biological samples. The UK Biobank received ethics approval from the North West Multicenter Research Ethics Committee (reference no. 16/NW/0274). All participants provided written informed consent at recruitment. This research was conducted using the UK Biobank resource under application number 79302. Participants with NAFLD at baseline were identified by the fatty liver index (FLI), which has an accuracy of 0.84 in detecting fatty liver; an FLI > 60 indicates the presence of fatty liver. We then excluded patients with excessive alcohol drinking (alcohol consumption ≥ 30 g/day for men and ≥ 20 g/day for women) and subjects with other liver diseases (viral hepatitis, Wilson's disease, hemochromatosis and autoimmune hepatitis). After further exclusion of those with ESLD and missing values of covariates at baseline, 128,695 participants with NAFLD were included in the final analysis (Supplementary Figure S1).
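The FLI screen above can be computed directly from the published formula (Bedogni et al., 2006), with triglycerides in mg/dL, GGT in U/L and waist circumference in cm; a minimal R sketch, with hypothetical UK Biobank column names:

    # Fatty liver index: FLI = 100 * e^z / (1 + e^z), with
    # z = 0.953*ln(TG) + 0.139*BMI + 0.718*ln(GGT) + 0.053*waist - 15.745
    fli <- function(tg, bmi, ggt, waist) {
      z <- 0.953 * log(tg) + 0.139 * bmi + 0.718 * log(ggt) + 0.053 * waist - 15.745
      100 * exp(z) / (1 + exp(z))
    }

    ukb$FLI   <- with(ukb, fli(tg_mgdl, bmi, ggt_ul, waist_cm))
    ukb$NAFLD <- ukb$FLI > 60   # FLI > 60 taken to indicate fatty liver, as above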
Assessment of Dietary Quality At the recruitment assessment-center visit, each participant was asked to complete a brief touchscreen food frequency questionnaire (FFQ) with 47 dietary items covering the types and the frequency of consumption of food groups and drinks over the past year. We then created a diet quality score based on 10 foods [19]: vegetables, fruits, fish, dairy, whole grains, vegetable oils, refined grains, processed meats, unprocessed red meats and sugar-sweetened beverages; this score has been used to assess adherence to ideal dietary patterns in patients with cardiometabolic disease (Supplementary Table S1). Each dietary component was scored from 0 (unhealthiest) to 10 (healthiest) points, and the total diet quality score was the sum of all the component scores; it ranged from 0 to 100, with a higher score representing a higher overall diet quality.
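The exact component cut-offs live in Supplementary Table S1 and are not restated in the paper; the sketch below therefore assumes a simple linear scaling and invented serving targets, purely to illustrate how ten 0-10 component scores combine into a 0-100 total.

```python
def component_score(intake: float, target: float, beneficial: bool = True) -> float:
    """Score one dietary component on a 0-10 scale.

    A linear scaling toward an illustrative target is assumed here;
    the actual cut-offs are defined in Supplementary Table S1.
    """
    ratio = min(intake / target, 1.0)
    return 10 * ratio if beneficial else 10 * (1 - ratio)

def diet_quality_score(intakes: dict) -> float:
    """Sum of 10 component scores -> total score in [0, 100]."""
    # (component, hypothetical target in servings/day, beneficial?)
    rules = [
        ("vegetables", 3.0, True), ("fruits", 2.0, True), ("fish", 0.3, True),
        ("dairy", 2.0, True), ("whole_grains", 3.0, True), ("vegetable_oils", 1.0, True),
        ("refined_grains", 3.0, False), ("processed_meats", 0.15, False),
        ("unprocessed_red_meats", 0.5, False), ("sugar_sweetened_beverages", 1.0, False),
    ]
    return sum(component_score(intakes.get(name, 0.0), target, good)
               for name, target, good in rules)
```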
Assessment of Dietary Patterns To derive dietary patterns, the SAS "proc factor" command was used for principal component analysis (PCA) with varimax rotation. When determining the number of principal components to retain, three selection criteria were used: (i) an eigenvalue greater than 1, (ii) the scree plot (Supplementary Figure S2) and (iii) the interpretable percentage of variance (Supplementary Table S2). The principal components were then named based on the food groups that had rotated factor loadings with an absolute value ≥ 0.3. Finally, three dietary patterns were identified for analysis: (i) a meat pattern (abundant in red meat and poultry), (ii) a prudent pattern (abundant in fruits, vegetables and fish) and (iii) a high-quality carbohydrate pattern (high in whole grains but low in refined grains). At the time of the analysis, the updating dates of the linkages to hospital inpatient admissions and death registries were 30 September 2021 and 31 October 2021, respectively. The follow-up time in person-years was calculated from the date of attendance until the date of ESLD diagnosis, loss to follow-up or death, whichever occurred earliest. Covariates Information on demographic and lifestyle factors was collected using a touchscreen, self-completed questionnaire at the baseline assessment visit of the UK Biobank. The Townsend deprivation index was used as a measure of socioeconomic status and to categorize the sample population into quintiles from the least deprived (quintile 1) to the most deprived (quintile 5). To measure the total sedentary time, the sum of self-reported hours spent watching television and using the computer on a typical day was derived. Sedentary behavior was defined as a sedentary time > 4 h. The body mass index (BMI) value was obtained as the weight divided by the square of the height in meters. Hypertension was defined as systolic pressure ≥ 140 mmHg, diastolic pressure ≥ 90 mmHg, use of medications for blood pressure, or self-report or diagnosis by a doctor. Diabetes was defined as blood glucose ≥ 11.1 mmol/L, glycated hemoglobin (HbA1c) ≥ 48 mmol/mol, use of insulin, or self-report or diagnosis by a doctor. Alanine aminotransferase, triglycerides and cholesterol were measured in blood samples collected at recruitment on a Beckman Coulter AU5800. The UK Biobank performed detailed quality control and correction for technical outliers. Statistical Analysis Baseline sociodemographic, lifestyle and other characteristics were summarized across diet quality score quintiles. The categorical variables are displayed as percentages and were compared by chi-squared tests. Continuous variables are displayed as means with standard deviations (SDs) and were compared by one-way ANOVA. The associations of diet quality and the derived patterns with incident ESLD, all-cause mortality and cause-specific mortality were investigated using Cox proportional hazards models. Hazard ratios (HRs) and 95% confidence intervals (CIs) for each quintile of exposure were calculated. Model 1 was adjusted for age, sex, ethnicity, Townsend deprivation index (quintiles), education level (university/college degree or others) and household income (less than £18,000, £18,000 to £30,999, £31,000 to £51,999, £52,000 to £100,000, greater than £100,000 or do not know/prefer not to answer), and model 2 was adjusted for model 1 plus self-reported smoking status (never, former or current smoker), sedentary behavior, body mass index, baseline diabetes, baseline hypertension, serum alanine aminotransferase, triglycerides and cholesterol. Then, we used multivariate cubic regression splines with 3 knots (10th, 50th and 90th percentiles) to visualize the potential nonlinear associations of dietary patterns with incident ESLD and all-cause mortality discovered in the Cox models above, using the SAS macro %RCS_Reg. To examine the overall statistical significance as well as the non-linearity of the exposures, we used likelihood ratio tests. We then investigated whether these associations differed by age, sex and other factors by performing a subgroup analysis and fitting an interaction term in the model. The hazard ratio of the product term was the measure of the interaction on the multiplicative scale. Sensitivity analyses were performed by excluding individuals with incident ESLD or who died within 2 years, those who had extreme BMIs (BMI < 15 or > 40 kg/m2), those who had made any major changes to their diet in the last 5 years and those whose diet varied much from week to week. SAS 9.4 was used for all analyses. All statistical tests were 2-sided, and p < 0.05 was defined as statistically significant. Baseline Characteristics The baseline characteristics of participants by diet quality score quintiles are shown in Table 1. At baseline, the participants with a higher diet quality tended to be female, older, of White ethnicity, less socially deprived and more educated. In addition, they were less often current smokers and spent less time sitting still. They also had lower levels of alanine aminotransferase, gamma-glutamyl transferase, triglycerides and total cholesterol. Interestingly, the NAFLD patients with a higher diet quality were more likely to suffer from comorbid hypertension and type 2 diabetes. Association of Diet Quality with Incident ESLD and Mortality During a median follow-up of 12.5 years (1,569,342 person-years), 1925 ESLD events and 12,466 deaths occurred. As shown in Table 2, compared with the patients in the lowest quintile, those in the highest quintile of diet quality had a 16% lower risk of ESLD and an 18% lower risk of all-cause mortality after adjustment for the covariates in model 1. After further adjusting for the lifestyle and biochemistry factors in model 2, the inverse association remained significant. Association of Dietary Patterns with Incident ESLD and Mortality The associations between dietary patterns and ESLD risk are shown in Figure 1. We observed that the high-quality carbohydrate dietary pattern was negatively associated with the risk of ESLD [HRQ5vsQ1: 0.74 (0.65–0.86)].
In addition, the prudent dietary pattern showed a non-linear negative association with the risk of ESLD; the HRs (95% CIs) in quintiles 2–5 were 0.94 (0.82–1.08), 0.85 (0.74–0.98), 0.84 (0.73–0.97) and 0.87 (0.76–0.99). However, we did not find a significant association for the meat-rich dietary pattern. Figure 1. Association of dietary patterns with incident ESLD. For the meat diet pattern, the quintile with the lowest hazard ratio (Q3) was set as the reference. The model was adjusted for age, sex, ethnicity, Townsend deprivation index (quintiles), education level (university/college degree or others), household income (less than £18,000, £18,000 to £30,999, £31,000 to £51,999, £52,000 to £100,000, greater than £100,000 or do not know/prefer not to answer), self-reported smoking status (never, former or current smoker), sedentary behavior, body mass index, baseline diabetes, baseline hypertension, serum alanine aminotransferase, triglycerides and cholesterol. The results of the relation between the dietary patterns and mortality are shown in Figure 2. Here, we revealed a U-shaped association of the meat-rich dietary pattern with all-cause mortality. Compared with quintile 3, the HRs (95% CIs) in quintiles 1–2 were 1.08 (1.02–1.15) and 1.06 (1.00–1.13), and the HRs (95% CIs) in quintiles 4–5 were 1.09 (1.03–1.16) and 1.12 (1.05–1.18). Similar to what was observed for ESLD, the prudent dietary pattern and the high-quality carbohydrate dietary pattern also demonstrated negative associations with all-cause mortality. The associations of these dietary patterns with cause-specific mortality are reported in Supplementary Table S5.
Figure 2. Association of dietary patterns with all-cause mortality. For the meat diet pattern, the quintile with the lowest hazard ratio (Q3) was set as the reference. The model was adjusted for age, sex, ethnicity, Townsend deprivation index (quintiles), education level (university/college degree or others), household income (less than £18,000, £18,000 to £30,999, £31,000 to £51,999, £52,000 to £100,000, greater than £100,000 or do not know/prefer not to answer), self-reported smoking status (never, former or current smoker), sedentary behavior, body mass index, baseline diabetes, baseline hypertension, serum alanine aminotransferase, triglycerides and cholesterol. The analysis of cubic splines (Figure 3) also showed the U-shaped association of the meat-rich dietary pattern with all-cause mortality (p non-linearity < 0.001) and an L-shaped association of the prudent dietary pattern with both ESLD and all-cause mortality (all p non-linearity ≤ 0.001). For the high-quality carbohydrate dietary pattern, the associations with ESLD (p non-linearity = 0.675) and all-cause mortality (p non-linearity = 0.155) were linear. Subgroup Analyses and Sensitivity Analyses The subgroup analyses of diet quality according to different risk factors are shown in Supplementary Tables S6 and S7. There were no significant differences across the investigated subgroups for ESLD. For all-cause mortality, we found that a higher diet quality was associated with a decreased risk among current/previous smokers (p interaction < 0.002). We performed a number of sensitivity analyses to examine the robustness of the findings. When we excluded the first 2 years of follow-up, the patients with extreme BMIs, those who had made any major changes to their diet in the last 5 years and those whose diet varied much from week to week, the observed associations of diet quality with ESLD and all-cause mortality remained unchanged (Supplementary Table S8).
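The quintile-based Cox models described in the Methods can be illustrated with a short sketch using the Python lifelines package; the paper's analyses were run in SAS 9.4, and the column names below are hypothetical. Categorical covariates such as ethnicity and income are omitted for brevity.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis frame: one row per NAFLD patient; these are
# illustrative column names, not actual UK Biobank field names.
df = pd.read_csv("nafld_cohort.csv")

# Quintiles of the diet quality score, with Q1 kept as the reference category.
df["diet_q"] = pd.qcut(df["diet_quality_score"], 5, labels=False) + 1
design = pd.get_dummies(df["diet_q"], prefix="dietQ", drop_first=True).astype(float)

covariates = ["age", "sex", "bmi", "smoker", "sedentary",
              "diabetes", "hypertension", "alt", "triglycerides", "cholesterol"]
model_df = pd.concat([design, df[covariates + ["follow_up_years", "esld_event"]]], axis=1)

cph = CoxPHFitter()
cph.fit(model_df, duration_col="follow_up_years", event_col="esld_event")
cph.print_summary()  # exp(coef) of dietQ_5 corresponds to the HR for Q5 vs Q1
```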
Discussion In this large longitudinal study, we observed that dietary patterns were significantly associated with the long-term outcomes of NAFLD. First, a higher a priori-derived diet quality score was inversely related to the risks of ESLD as well as to all-cause mortality and liver-, CVD- and cancer-related mortality. Second, a greater adherence to the a posteriori-derived dietary patterns (the prudent and high-quality carbohydrate patterns) was associated with a lower risk of ESLD in NAFLD patients, whereas this association was nonsignificant for the meat-rich pattern. Third, there was a U-shaped association between the meat-rich pattern and all-cause mortality for NAFLD patients, while this association was negative for the prudent and high-quality carbohydrate patterns. A large body of cross-sectional studies has reported an inverse relationship of the diet quality score, as assessed by the AHEI [20] and the MDS [21], with prevalent NAFLD. In addition, whether different dietary patterns are related to NAFLD risk has also been investigated, but contradictory results were reached [17]. A cross-sectional study conducted in 229 Brazilian adults demonstrated that a prudent pattern was negatively associated with NAFLD diagnosed by ultrasonography [22]. In contrast, another cross-sectional study covering 999 Chinese patients found this association to be nonsignificant [23]. An Australian prospective cohort study including 995 adolescents observed that a Western dietary pattern high in red and processed meat, soft drinks, refined grains and sauces at age 14 was associated with a 1.59-fold risk of NAFLD three years later [24]. However, until now, these studies have mainly been small cross-sectional studies centered on the prevalence of NAFLD. The evidence for a relationship between dietary patterns and the long-term outcomes of NAFLD remains sparse. In this study, we used a diet quality score which was adopted in previous epidemiological studies of cardiovascular disease [25] and type 2 diabetes [26], created on the basis of recent dietary priorities for cardiometabolic health [19]. In addition, in recent years, the role of dietary patterns generated by principal component analysis has been extensively investigated in observational studies of cardiometabolic disease [27]. Such patterns are more representative of the dietary habits of a given population [28]. Therefore, we combined these two methods in this study to provide a more complete picture of the relation of diet with the long-term outcomes of NAFLD. Our analysis showed that NAFLD patients with a higher diet quality carried lower risks of ESLD and of all-cause and cause-specific mortality. In addition, a prudent dietary pattern high in vegetables, fruits and fish was negatively associated with a poor prognosis of NAFLD. This association was also observed for the high-quality carbohydrate dietary pattern, which was high in whole grains and low in refined grains. As for the clinical implications of this study, it provides a more comprehensive understanding of the effects of dietary patterns on the development of severe outcomes of NAFLD.
Advanced liver disease in the late course of NAFLD is associated with a severely impaired quality of life and a poor prognosis [29]. Given the sheer number of NAFLD patients and the fact that advanced liver disease is usually diagnosed late, a better risk stratification of NAFLD is urgently needed [30]. The early recognition of NAFLD patients with adverse outcomes would allow policy makers and clinicians to plan and implement more effective secondary prevention [9]. This study showed that NAFLD patients may benefit from a high diet quality and from the prudent and high-quality carbohydrate dietary patterns. Conversely, NAFLD patients with other diet patterns may be more likely to suffer from adverse health outcomes and warrant closer attention during regular follow-up. There are several possible mechanisms linking the dietary patterns with the long-term outcomes of NAFLD. The prudent dietary pattern has been shown to have beneficial effects on NAFLD due to its anti-inflammatory, anti-fibrosis and antioxidant capacity [31]. Carotenoids and polyphenols are two major antioxidants that are abundant in vegetables and fruits. In experimental studies of NAFLD models, they improved insulin sensitivity, accelerated β-oxidation and repressed de novo lipogenesis [32]. Furthermore, they inhibited the activation of hepatic stellate cells and thereby ameliorated carcinogenesis [32]. Omega-3 poly-unsaturated fatty acids contained in fish oil can alleviate insulin resistance, reduce hepatic lipid accumulation and improve steatohepatitis [33,34]. The mechanism through which whole grains exert favorable impacts on NAFLD is multifaceted. First, wheat bran, a compound more abundant in whole grains than in refined grains, reduced the liver triglyceride content in an in vivo model of metabolic syndrome [35]. Second, several phytochemicals that are significantly reduced after grain refining can promote the synthesis of VLDL and thus export lipids out of the liver [36]. Third, whole grains may display beneficial effects on the composition of the gut microbiota [37], which may influence the progression of NAFLD through the gut-liver axis [38]. This study has several limitations that warrant discussion. First, we selected several important diet constituents to create a dietary quality score based on recent guidelines for cardiometabolic health. However, other components may also play a key role in the progression of NAFLD. Second, we analyzed the association between dietary patterns at baseline and the risk of adverse outcomes of NAFLD. The dietary patterns were not assessed during the follow-up, so we were unable to assess longitudinal dynamic changes in dietary patterns, which may more closely reflect habitual eating in real life. Third, as with all observational studies, we were unable to draw causal conclusions about the relationship between dietary patterns and the long-term outcomes of NAFLD. The only way to clearly establish this relationship is through experimental designs. Conclusions In conclusion, a higher diet quality and a greater adherence to a prudent dietary pattern rich in vegetables, fruits and fish were associated with a lower likelihood of ESLD and mortality in NAFLD patients. The high-quality carbohydrate dietary pattern showed the same association. NAFLD patients with inappropriate meat dietary patterns had a higher risk of adverse outcomes.
These findings need to be confirmed by further interventional studies to assess whether the improvement of dietary patterns is effective in the primary and secondary prevention of NAFLD. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nu15020271/s1, Figure S1. Summary of study design and analytical strategy; Figure S2. Scree plot of the factor analysis; Table S1. Components and scaling methods of the diet quality score used in the UK Biobank study; Table S2. PCA-derived dietary patterns and their factor loadings; Table S3. Criteria for ESLD, CVD and cancer; Table S4. HRs of cause-specific mortality for quintiles of the diet quality score; Table S5. HRs of cause-specific mortality for quintiles of dietary patterns; Table S6. Subgroup analyses of diet quality and ESLD; Table S7. Subgroup analyses of diet quality and all-cause mortality; Table S8. Sensitivity analyses of the HRs for the associations of diet quality with ESLD and all-cause mortality. Data Availability Statement: Data described in the manuscript, code book and analytic code will be made available upon request pending application. This research was conducted using the UK Biobank resource under application number 79302.
2023-01-12T17:18:53.977Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "776599a068170fad30fbdaf3048216927799c818", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6643/15/2/271/pdf?version=1672911485", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b9c8cc3249b80ef5d5c05b337ceab923ec2787c5", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
14190042
pes2o/s2orc
v3-fos-license
A Bayesian Approach to Biomedical Text Summarization Many biomedical researchers and clinicians are faced with the information overload problem. Attaining the desired information from the ever-increasing body of knowledge is a difficult task without automatic text summarization tools that help them to acquire the intended information in shorter time and with less effort. Although many text summarization methods have been proposed, developing domain-specific methods for biomedical texts is a challenging task. In this paper, we propose a biomedical text summarization method based on a concept extraction technique and a novel sentence classification approach. We incorporate domain knowledge by utilizing the UMLS knowledge source and the naïve Bayes classifier to build our text summarizer. Unlike many existing methods, the system learns to classify the sentences without the need for training data, and selects them for the summary according to the distribution of essential concepts within the original text. We show that the use of critical concepts to represent the sentences as vectors of features, and classifying the sentences based on the distribution of those concepts, improves the performance of automatic summarization. An extensive evaluation is performed on a collection of scientific articles in the biomedical domain. The results show that our proposed method outperforms several well-known research-based, commercial and baseline summarizers according to the most commonly used ROUGE evaluation metrics. Introduction Biomedical information available to researchers and clinicians is accessible from a variety of sources such as scientific literature databases, Electronic Health Record (EHR) systems, web documents, e-mailed reports and multimedia documents [1,2]. The scientific literature provides a valuable source of information to researchers. It is widely used as a rich source for assessing newcomers to a particular field, gathering information for constructing research hypotheses and collecting information for the interpretation of experimental results [3]. It is interesting to note that the US National Library of Medicine has indexed over 24 million citations from more than 5,500 biomedical journals in its MEDLINE bibliographic database [4]. However, such large quantities of data do not by themselves yield the desired information in limited time. Required information must be accessed easily, at the right time and in the most appropriate form [2]. For clinicians and researchers, efficiently seeking useful information from the ever-increasing body of knowledge and other resources is excessively time-consuming. Managing this information overload has been shown to be a difficult task without the help of automatic tools. Automatic text summarization is a promising approach to overcoming information overload by reducing the amount of text that must be read [5]. It can be used to obtain the gist of a topic of interest efficiently [1]. It helps clinicians and researchers save the time and effort required to seek information. A good summary of a text must have two main properties: it needs to be short, and it should preserve the valuable information of the source text [6]. The majority of summarization systems deal with domain-specific text (e.g., biomedical text) in the same way as other, domain-independent texts; in other words, they are designed as general-purpose tools [7].
From the earliest basic methods, based on position and word frequency, to more recent methods that leverage artificial intelligence, machine learning and graph-based algorithms, general-purpose summarizers are not adequate for use in the biomedical domain. The characteristics of the biomedical domain raise the need to analyze the source text at a conceptual level and to employ domain knowledge in the summarization process [7]. Concept-level analysis of text, rather than term-level analysis, has become a preliminary step in the biomedical summarization process. It is required in order to extract a rich representation of the source text [7][8][9][10]. The conceptual analysis of text is performed by focusing on concepts, rather than terms, as the building blocks of the text. It can be facilitated by using biomedical knowledge sources and ontologies such as the Unified Medical Language System (UMLS). A summarization system must decide which sentences are the best for the summary and which sentences can be ignored, based on the model it constructs from the source text. From this point of view, text summarization can be modeled as a classification problem. However, there are some important questions. In biomedical summarization that utilizes domain knowledge rather than traditional measures, what features determine which sentences are summary sentences and which are non-summary? Furthermore, in biomedical summarization that uses domain knowledge, a given document has its own distinct concepts, and these concepts have a particular distribution in the given text. Thus, what data should be used as training data? Are there any training data available for this purpose? Can we model biomedical summarization as a classification problem? In this paper, we provide answers to such questions and try to address the related issues. We employ a well-known classification method, namely the naïve Bayes classifier, in combination with biomedical concept extraction, to construct a classification model for biomedical text summarization. Some summarization systems have been proposed based on classification methods [11][12][13][14]. These systems require training data to learn which parts of the text should be selected for the summary and which parts should be discarded. However, when a document is analyzed at the conceptual level, as in our method, the source text is represented by the concepts it contains. Every document may have its own set of concepts, and it is impractical to generalize a learned model to summarize new material. We propose a summarization method which uses the distribution of important concepts within the source document to classify the sentences and to construct the final summary. In our proposed method, biomedical concepts are extracted from the input text by utilizing the UMLS [15], an important and well-known knowledge source in the biomedical sciences, maintained by the US National Library of Medicine. Each sentence of the input document is represented as a vector of boolean features. The features are the important concepts of the input text, and a feature is true if its corresponding concept appears in the sentence; otherwise, it is false. We use the naïve Bayes classifier [16] to label the sentences as summary or non-summary. The distribution of important concepts within the text is known, as is the number of summary-labeled sentences: although we do not initially know which sentences will be labeled as summary, we do know how many sentences must be selected (the compression rate specifies it).
This information is enough to estimate the prior and likelihood probabilities for Bayesian inference. A primary assumption of our method is that the distribution of important concepts within the final summary must be similar to their distribution within the source text. Thus, we can estimate the posterior probabilities given the prior and likelihood probabilities calculated before. Consequently, the posterior odds ratio is calculated for each sentence. Eventually, the sentences with a higher posterior odds ratio are selected to form the final summary. To evaluate the performance of the proposed method, we conducted a set of experiments on a collection of articles from the biomedical domain and compared the results with those of other summarizers. The results demonstrate that our method performs better than similar research-oriented and commercial competitors and baseline methods in terms of the most commonly used ROUGE evaluation metrics [17]. The remainder of the paper is organized as follows. Section 2 gives an overview of text summarization and concept extraction from biomedical text, as well as a review of related work in biomedical summarization. In Section 3, we introduce our biomedical summarization method based on the naïve Bayes classification method. Then, we describe the evaluation methodology. Section 4 presents the results of the assessment of the system configuration and the experiments that compare our system with other summarizers. Finally, Section 5 draws the conclusions and describes future lines of work. Background and related work Early work on automatic text summarization dates back to the 1950s and 1960s with the pioneering work of Luhn [18] and Edmundson [19]. However, most of the progress in this field has happened during the last two decades. There are some well-known summarization methods, such as MEAD [20], MMR [21], LexRank [22], PageRank [23], TextRank [24] and HITS [25], widely referenced by the research community in the last two decades. In recent years, much work has been done in text summarization using Natural Language Processing (NLP), clustering, machine learning, statistical and graph-based methods. However, biomedical text summarization is a relatively young research area, with a history of almost two decades. In this section, we first present a commonly used categorization of text summarization methods. Then we focus on concept extraction in the biomedical domain. Finally, we review previous work on biomedical text summarization. Types of summarization Text summarization methods can be divided into abstractive and extractive approaches [1,26]. An abstractive summarizer uses NLP methods to process and analyze the input text; then it infers and produces a new version. On the other hand, an extractive summarizer selects the most representative units (paragraphs, sentences, phrases) from the original wording and puts them together into a shorter form. Another classification of text summarization differentiates single-document and multi-document inputs [1,2]. A single-document summarizer produces a summary that is the result of condensing only one document. In contrast, a multi-document summarizer takes a cluster of documents and provides a single summary as the result of extracting the most representative contents from the input documents. Another classification of summarization methods is based on the requirements of the user: generic vs. user-oriented (also known as query-focused) summarizers [1,2,27].
A generic summary presents the overall gist of the input document(s) without any specified preference in terms of content, while a user-oriented summary is biased towards a given query or some keywords, to address a user's specific information requirement. Our proposed biomedical summarization method is extractive, single-document and generic. Concept extraction from biomedical text In the biomedical domain, there are several knowledge sources, such as MeSH, SNOMED, GO, OMIM, UWDA and the NCBI Taxonomy, which can be used in knowledge-intensive data and information processing tasks, as well as in text processing tasks related to the biomedical domain. These knowledge sources, along with over 100 controlled vocabularies, classification systems and additional information sources, have been unified into the Unified Medical Language System (UMLS) [15]. To map free text to UMLS Metathesaurus concepts, the MetaMap program [32] has been developed by the National Library of Medicine. MetaMap uses a knowledge-intensive approach based on NLP, computational-linguistic and symbolic techniques to identify noun phrases in the text. First, lexical variations are generated, and phrases and concepts are partially matched, in order to compute the matches between each noun phrase and one or more Metathesaurus concepts. Then, based on the closeness of the matches between each noun phrase and the concepts, the candidate concepts are assigned scores. Eventually, the highest-scoring concept and its semantic type are returned. A noun phrase may map to more than one concept, and MetaMap will return multiple concepts in this case. Summarization in the biomedical domain In the biomedical field, various summarization methods have been proposed. These methods are reviewed in a survey of early work [2] and in a systematic review of recently published research [1]. Reeve et al. [9] applied the method of lexical chaining [33] to biomedical text, but they used concepts rather than terms. In their proposed method, named BioChain, automatically identified UMLS concepts in the original text are chained together based on their UMLS semantic types. Then, the strongest chains are identified through scoring. Strong concepts within each strong chain are determined, and sentences are scored based on the number of such concepts they contain. High-scoring sentences are selected to form the summary. In BioChain, less frequent concepts that belong to strong chains participate in sentence scoring, while more frequent concepts that do not belong to any strong chain are discarded for sentence scoring. As a result, important concepts that reflect the main topics but do not belong to any strong chain will not participate in sentence scoring, and the accuracy of the summarizer may be affected negatively. FreqDist [10] is a context-sensitive approach, proposed to score the sentences according to a frequency distribution model, along with the ability to remove information redundancy. In the FreqDist method, the unit items (concepts and terms) within the original text are counted, and a frequency distribution model is formed. A summary frequency distribution model is also created, based on the unit items found in the original text. Then, in an iterative manner, sentences are selected for addition to the summary: the selection of a sentence must keep the frequency distribution of the summary closely aligned with the frequency distribution of the original text. Reeve et al. [5] combine BioChain and FreqDist and propose a hybrid method.
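The frequency-matching idea behind FreqDist, as described above, can be sketched with a short greedy loop; this is an illustrative reading of the description, not the authors' implementation, and the function names are hypothetical.

```python
from collections import Counter

def freqdist_summary(sentences, concepts_per_sentence, k):
    """Greedy sketch of FreqDist-style selection: repeatedly add the sentence
    that keeps the summary's concept distribution closest to the document's.

    `concepts_per_sentence[i]` lists the concepts found in sentence i.
    """
    doc_counts = Counter(c for cs in concepts_per_sentence for c in cs)
    total = sum(doc_counts.values())
    target = {c: n / total for c, n in doc_counts.items()}

    chosen, summary_counts = [], Counter()

    def distance_if_added(cs):
        # L1 distance between the target distribution and the summary's
        # distribution after hypothetically adding this sentence.
        trial = summary_counts + Counter(cs)
        trial_total = sum(trial.values()) or 1
        return sum(abs(target[c] - trial[c] / trial_total) for c in target)

    while len(chosen) < k:
        best = min((i for i in range(len(sentences)) if i not in chosen),
                   key=lambda i: distance_if_added(concepts_per_sentence[i]))
        chosen.append(best)
        summary_counts += Counter(concepts_per_sentence[best])

    return [sentences[i] for i in sorted(chosen)]
```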
In a feature-based method [34], in addition to commonly used traditional features, a vocabulary of cue terms and phrases unique to the medical domain is identified and is used as domain knowledge. The classic features used in summarization are word frequency, sentence position, the similarity with the title of the article, and sentence length. The presence of cue medical terms and phrases, as well as the presence of new terms, are two additional features. The sentences are scored based on these features, and the summary is generated by putting the high-scoring sentences together. A graph-based approach to biomedical summarization is proposed by Plaza et al. [7]. They use UMLS to identify concepts and semantic relations between them, and a semantic graph is constructed to represent the document. Different topics within the text are determined by applying a degree-based clustering algorithm on the semantic graph. Three different heuristics are intended for sentence selection according to identified topics. Moen et al. [35] present several text summarization methods for summarizing clinical notes. Most of their proposed methods are based on the word space models, resulted from distributional semantic modeling. They perform a meta-evaluation on the ROUGE metrics by developing a manual evaluation scheme, in order to assess the similarity between the automatic assessment and the opinions of health care professionals. An investigation on the impact of the knowledge source used in a semantic graph-based summarization approach is performed by Plaza et al. [36], in terms of the quality of the automatically generated summaries. Different combinations of vocabularies and ontologies within the UMLS are used to retrieve domain concepts. Moreover, various types of relationships are considered to link the concepts in the semantic graph. They also show that the use of appropriate knowledge source to model the original text significantly improves the quality of the generated summaries. Besides extractive summarization methods, various abstractive methods have been proposed in the biomedical domain. Fiszman et al. [37] present a multi-document semantic abstraction summarization system for MEDLINE citations. Their system relies on the semantic predications extracted by SemRep [38], a parser based on linguistic analysis and domain knowledge contained in the UMLS. The system generates abstracts using four transformation principles: novelty, relevance, saliency, and connectivity. The output of the system is a graphical summary. Fiszman et al. [8] extend the semantic abstraction summarization system [37] for evidence-based medical treatment. Their focus is on the topic-based evaluation of summarization of drug interventions. Two other abstractive summarizers based on semantic abstraction summarization system [37] are proposed by Workman et al. [39] and Zhang et al. [40]. An abstractive graph-based clustering method [41] is presented for automatic identification of themes in multi-document summarization. The output of the system is a graph composed of semantic predications. The aim of their method is to summarize a large set of MEDLINE citations. Unlike domain-independent summarization methods such as SUMMA [42] and SweSum [43], our proposed method utilizes domain knowledge and analyzes the source text at a conceptual level. Existing approaches that rely on classification methods will require training data to learn the classifier. 
Moreover, the majority of summarization methods use a number of general-purpose features, such as sentence position, sentence length, keywords and the presence of cue-words, to represent the sentences as vectors of features. However, in text summarization methods that utilize domain knowledge and concept-level analysis of text, every document has its own particular set of concepts, which can lead to a potentially higher accuracy; a set of general features is not enough to summarize all new material. In our method, the naïve Bayes classification method helps to classify the sentences based on the distribution of concepts within the source text, without any requirement for training data. In our proposed method, the important concepts are identified according to a threshold value, in order to capture the main topics of the document and to represent the sentences as vectors of features. Compared to the BioChain method, which ignores important concepts that do not belong to any strong chain, our method is expected to perform more accurately. Moreover, our method assumes that the distribution of important concepts within the final summary is the same as within the source text. This assumption can improve the informativeness of the final summary. In order to classify the sentences of a document as summary and non-summary based on this assumption, the naïve Bayes classifier is a reliable method, as it can discriminate the sentences based on the prior distribution of important concepts within the source document. The proposed method Our proposed summarization scheme consists of a preprocessing phase and a classification phase. In the preprocessing phase, the input document is mapped to UMLS concepts and prepared for the next stage. In the classification phase, the sentences, represented as vectors of features, are classified into summary and non-summary classes using the naïve Bayes classification method. One of the main components of our summarization process is the naïve Bayes classifier. We begin this section with a brief review of the naïve Bayes classification method. Then, we explain our proposed biomedical summarization process in detail. The naïve Bayes classifier The naïve Bayes classifier [16] is an easy-to-build and robust classifier. It is known as a proven data mining algorithm [44]. With this method, both the training phase and the actual classification can be performed efficiently, and there is no need for complicated iterative parameter estimation schemes. In general, a Bayesian classifier is based on Bayes' theorem, defined by Eq. 1 below:

P(C|X) = P(X|C) P(C) / P(X)     (1)

where C and X are random variables. In classification tasks, they refer to observing class C and instance X, respectively. X is a vector containing the values of the features. P(C|X) is the posterior probability of observing class C given instance X. In classification, it can be interpreted as the probability of instance X being in class C, and it is what the classifier tries to determine. P(X|C) is the likelihood, which is the probability of observing instance X given class C. It is computed from the training data. P(C) and P(X) are the prior probabilities of observing class C and instance X, respectively. They measure how frequent class C and instance X are within the training data. Using Eq. 1, the classifier can compute the probability of each class of the target variable C given instance X, and the most probable class, the class that maximizes P(C|X), is selected as the result of classification. This decision rule is known as Maximum A Posteriori, or MAP.
It is represented as follows:

c_MAP = argmax_{c_j} P(c_j | X) = argmax_{c_j} P(X | c_j) P(c_j)     (2)

where c_j is the jth class (or value) of the target variable C. In Eq. 2, the denominator P(X) is removed because it is constant and does not depend on c_j. We represent the instance X as X = <x1, x2, …, xn>, where xi is the ith feature of X. Assume each instance X has a vector of values for 20 boolean features, and that the target variable C is also boolean. When modeling P(X | C) directly, we need to estimate approximately 2 × 2^20 = 2^21 = 2,097,152 parameters, which heavily increases the complexity of the classifier. Using the naïve Bayes classifier dramatically reduces the number of parameters to be estimated, to 2 × 20 = 40. The naïve Bayes classifier achieves this reduction in the number of parameters by making a conditional independence assumption. It means that the probability of each value of feature xi is independent of the value of any other feature, given the class variable cj. In fact, it assumes that the effect of the value of predictor xi on a given class cj is independent of the values of the other predictors. Therefore, the naïve Bayes classifier finds the most probable class for the target variable by simplifying the joint probability calculation as follows:

c_NB = argmax_{c_j} P(c_j) ∏_i P(x_i | c_j)     (3)

The conditional independence assumption plays a crucial role here, because it simplifies the representation of P(X | C) and the problem of estimating this value from training data. A well-known measure to assess the confidence of a classification in the naïve Bayes classifier is the posterior odds ratio. The posterior odds ratio is a measure of the strength of evidence in favor of a particular classification compared to another [45]. It is calculated as follows:

OR = P(C = c1 | X) / P(C = c2 | X)     (4)

where OR is the posterior odds ratio that measures the strength of evidence in favor of classifying the instance as class C = c1 against classifying it as class C = c2. A value of exactly 1.0 can be interpreted as the evidence from the posterior distribution supporting the classes c1 and c2 equally. A value greater than 1.0 demonstrates that the posterior distribution favors the C = c1 classification, while a value less than 1.0 demonstrates that the posterior distribution favors the C = c2 classification. Summarization Method In this subsection, we present our naïve Bayes summarization method. The process of document summarization is accomplished through six steps: (1) document preprocessing, (2) mapping text to biomedical concepts, (3) feature identification, (4) preparing sentences for classification, (5) sentence classification using naïve Bayes, and (6) creating the summary. Fig. 1 illustrates the architecture of our summarization method. A detailed description of each step is given in the following subsections. Document preprocessing Before applying the summarization process, a preliminary step is needed to prepare the input document for the subsequent steps. In the preprocessing step, the following actions are performed: • The portions of the text that are unnecessary for inclusion in the summary are removed. These include the title, the author information, the abstract, keywords, section headings, competing interests, acknowledgments and references. Although this preprocessing step is applied here to biomedical articles, it can be customized for any textual document based on the logical structure of the text. We have customized the preprocessing step for biomedical articles because: (1) a vast amount of material is commonly used in this domain, (2) one of the main reasons for proposing summarization methods in the biomedical field is to overcome the information overload in the biomedical literature, and (3) we evaluate our method on biomedical articles.
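Before the remaining pipeline steps are described, the classifier machinery of Eqs. 1-4 can be made concrete with a minimal sketch; the tiny training set below is invented purely for illustration, and the function names are hypothetical.

```python
def train_naive_bayes(instances, labels, smoothing=1.0):
    """Estimate P(c) and P(x_i = True | c) for boolean features, with Laplace smoothing."""
    classes = set(labels)
    prior = {c: labels.count(c) / len(labels) for c in classes}
    likelihood = {c: [] for c in classes}
    n_features = len(instances[0])
    for c in classes:
        rows = [x for x, y in zip(instances, labels) if y == c]
        for i in range(n_features):
            true_count = sum(1 for x in rows if x[i])
            likelihood[c].append((true_count + smoothing) / (len(rows) + 2 * smoothing))
    return prior, likelihood

def posterior_odds(x, prior, likelihood, c1, c2):
    """Posterior odds ratio of Eq. 4, using the naive factorization of Eq. 3.

    P(X) cancels between numerator and denominator, so unnormalized
    scores P(c) * prod_i P(x_i | c) suffice.
    """
    def score(c):
        p = prior[c]
        for i, xi in enumerate(x):
            p *= likelihood[c][i] if xi else (1 - likelihood[c][i])
        return p
    return score(c1) / score(c2)

# Toy data: 3 boolean features per "sentence"; class True = summary sentence.
X = [(True, True, False), (True, False, False), (False, False, True), (False, True, True)]
y = [True, True, False, False]
prior, like = train_naive_bayes(X, y)
print(posterior_odds((True, True, False), prior, like, True, False))  # > 1 favors "summary"
```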
Mapping text to biomedical concepts In this step, the document resulting from the preprocessing step is mapped to concepts of the UMLS Metathesaurus. Each concept is extracted together with a semantic type that determines the semantic category of the concept. The semantic types are included in the UMLS Semantic Network. In this paper, we use the 2014 version of the MetaMap program for the mapping step and the 2014AB UMLS release as the knowledge base. When MetaMap is faced with lexical ambiguity, it often fails to specify a unique mapping for a given phrase [46]. For example, for the text fragment "The significance of the identification of APOE", MetaMap returns two candidate concepts with equal scores for APOE, i.e., Apolipoprotein E with semantic type aapp (Amino Acid, Peptide, or Protein), and APOE gene with semantic type gngm (Gene or Genome). This behavior occurs because some words may have multiple meanings, and each meaning depends on the context in which it appears [47]. MetaMap returns all mappings in such cases, when it cannot distinguish the context in which the phrase appears. If MetaMap is invoked with the word sense disambiguation option, i.e., the -y flag, it uses the Journal Descriptor Indexing (JDI) algorithm [48] to resolve Metathesaurus ambiguity. We use the -y flag to force MetaMap to select a single mapping in cases where the number of candidate concepts for a given phrase is more than one. There are still cases in which the JDI may fail to return a single mapping, and in such situations our method selects all mappings returned by MetaMap. It has been shown in [47] that All Mappings is a relatively appropriate Word Sense Disambiguation (WSD) strategy for concept identification. In Fig. 2, a sample sentence and its identified concepts are illustrated. After mapping the document text to concepts, the concepts belonging to semantic types that are very generic must be discarded, because they are excessively broad and appear in almost every document. These semantic types have been identified empirically by [7] and include functional concept, qualitative concept, quantitative concept, temporal concept, spatial concept, mental process, language, idea or concept, and intellectual product. Therefore, in Fig. 2, the following concepts are discarded: Widening, analysis aspect, Further, Relationships and Etiology aspects. Feature identification After concept extraction and the removal of generic concepts, the important concepts are identified and selected as the classification features. First, all remaining concepts are added to a list named All_concept_list. Second, the frequency of each concept in the All_concept_list is calculated by counting the number of sentences in which the concept appears. Third, the important concepts are specified based on this rule: a given concept is important if its frequency is equal to or greater than the value of a threshold t, defined as:

t = avg_freq + 2 × std_dev_freq     (5)

where avg_freq is the average of all concept frequencies in the All_concept_list, and std_dev_freq is the standard deviation of all concept frequencies in the All_concept_list.
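A short sketch of this feature identification step follows; it assumes the sentences have already been mapped to concept sets with generic semantic types filtered out, and it also covers the sentence-vector representation used in the next step.

```python
from statistics import mean, stdev

def important_concepts(concepts_per_sentence):
    """Feature identification (Eq. 5): keep concepts whose sentence frequency
    is at least avg_freq + 2 * std_dev_freq.

    `concepts_per_sentence[i]` is the set of UMLS concepts in sentence i;
    at least two distinct concepts are assumed so the deviation is defined.
    """
    all_concepts = set().union(*concepts_per_sentence)
    freq = {c: sum(c in s for s in concepts_per_sentence) for c in all_concepts}
    threshold = mean(freq.values()) + 2 * stdev(freq.values())
    return {c for c, f in freq.items() if f >= threshold}

def sentence_vectors(concepts_per_sentence, features):
    """Represent each sentence as a boolean vector over the important concepts."""
    ordered = sorted(features)
    return [tuple(c in s for c in ordered) for s in concepts_per_sentence]
```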
We selected the value of the threshold presented in Eq. 5 based on a set of preliminary experiments (Section 3.3.3). Finally, the concepts whose frequency is equal to or greater than the threshold in Eq. 5 are selected as features, in order to represent the sentences as vectors of features for the classification step. In Fig. 3, the identified important concepts, along with their semantic types and frequencies, are presented for a sample document concerning the genetic overlap between autism, schizophrenia and bipolar disorder. For each concept, the semantic type is shown in brackets. Preparing sentences for classification After identifying the main concepts and considering them as features, the sentences of the document must be represented as vectors of features: each feature takes the value True if its corresponding concept appears in the sentence, and False otherwise. After this step, we have a collection of sentence-vectors with their feature values specified. Their class variable is unknown, and they must be classified as summary sentences (True) or non-summary sentences (False). Every document has its own particular set of concepts, and therefore the features of each text differ from those of other texts. Thus, there are no training data in our method. In the next subsection, we show how we give a hint to the naïve Bayes classifier so that it can obtain all the information required to classify the sentences. Sentence classification using naïve Bayes As mentioned earlier, our proposed method does not use any training data for learning. On the other hand, the naïve Bayes classifier needs to know the distribution of feature values and the values of the class variable in training data, in order to classify previously unseen instances. We estimate the prior probabilities from the very sentence-vectors that must be classified. Moreover, we make an assumption that simplifies the estimation of the likelihood probabilities. In summarization systems, there is a parameter called the compression rate, which determines what percentage of the text must be extracted from the primary document as the final summary. Initially, we do not know which sentences belong to the summary class; the compression rate only tells us how many sentences must be selected, and this is used to estimate the prior probabilities. The likelihood of each feature is then estimated using the frequency of the concept corresponding to x_ik (the kth feature in the ith sentence-vector). Consequently, the presence of more frequent concepts increases the chance of a sentence being selected for the final summary. In the next step, the posterior probability of the class value No given the ith sentence-vector is estimated in the same manner. Creating the summary The last step is summary creation. At this point, it has been determined which sentences should be selected to make up the final summary. The sentences whose corresponding sentence-vectors are classified as Yes are added to the summary. The sentences are arranged in the same order as they appear in the primary document. Finally, the figures and tables of the main document that are referred to in the summary are added, to finalize the summarization process. Evaluation methodology The evaluation methods for summarization systems can be divided into two broad categories: intrinsic and extrinsic [49]. In intrinsic evaluation, the quality of the generated summaries is assessed according to certain criteria such as accuracy, relevancy, comprehensiveness and readability. Such criteria can be represented by two main properties: informativeness and coherence. In intrinsic evaluation, the generated summaries are evaluated by comparison with a gold standard or by human rating. In extrinsic evaluation, the impact of a summarization system on the performance of a specific information-seeking task is assessed.
Extrinsic evaluation can be performed according to measures such as decision-making accuracy, success rate and time-to-completion. We evaluated the performance of our biomedical summarization method using intrinsic evaluation. Evaluation corpus The most common method for evaluating the summaries generated by an automatic summarizer (also known as system or peer summaries) is to compare them against manually generated summaries (also called model or reference summaries). The metric of such an evaluation method is the similarity between the content of the system and model summaries. The more content is shared between the system and model summaries, the better the system summary is assumed to be. Obtaining manually generated summaries is a challenging and time-consuming task, because they have to be written by human experts. Moreover, human-generated model summaries are highly subjective. To the authors' knowledge, there is no corpus of model summaries for biomedical documents. However, most scientific papers have an abstract, which is usually considered a model summary for evaluation. To evaluate our proposed method, we used a collection of 80 biomedical scientific articles, randomly selected and downloaded from the BioMed Central online library (http://www.biomedcentral.com). According to [50], the size of the evaluation corpus is large enough to allow the results of the assessment to be significant. The abstract of each paper was used as the model summary for evaluating the system summary generated for that paper. Evaluation metrics: ROUGE As noted earlier, in the intrinsic evaluation of summarization methods, two properties are regarded as measures of summary quality: coherence and informativeness. Coherence is a property measuring the readability and cohesion of the summary. Informativeness represents how much information from the original text is provided by the summary [51]. In spite of advances in evaluating the coherence and readability of automatic summaries [52][53][54], this evaluation approach is still very preliminary, and the research community has not yet adopted any standard readability assessment approach. On the other hand, advances in the automatic evaluation of informativeness are more impressive [55,56], and the research community has agreed upon a standard approach for this purpose. For performance evaluation in terms of the informativeness of automatic summaries, we used the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) package [17]. ROUGE compares a system-generated summary with one or more model summaries and estimates the shared content between the system and model summaries by calculating the proportion of n-grams they have in common. In a comparison of two system summaries generated for the same document by two different summarizers, a system summary is assumed to be better if it shares more content with the model summary. The ROUGE metrics produce a value between 0 and 1, and a higher value is preferred, as it demonstrates a greater content overlap between the system and model summaries. In this paper, we used four ROUGE metrics: ROUGE-1, ROUGE-2, ROUGE-W-1.2 and ROUGE-SU4. ROUGE-1 and ROUGE-2 compute the number of shared unigrams (1-grams) and bigrams (2-grams) between the system and model summaries. ROUGE-W-1.2 is based on the weighted longest common subsequence between the system and model summaries; it takes into account the presence of consecutive matches. ROUGE-SU4 measures the overlap of skip-bigrams (pairs of words with intervening word gaps) between the system and model summaries, allowing a skip distance of up to four between the words of a bigram.
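The recall-oriented n-gram overlap at the heart of ROUGE-N can be sketched in a few lines; this is a simplified illustration, not the official ROUGE package [17].

```python
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def rouge_n(system_summary: str, model_summary: str, n: int = 2) -> float:
    """Recall-oriented n-gram overlap in the spirit of ROUGE-N.

    Simplified sketch: lowercases, splits on whitespace, and clips each
    shared n-gram count at its count in the model summary.
    """
    sys_counts = Counter(ngrams(system_summary.lower().split(), n))
    model_counts = Counter(ngrams(model_summary.lower().split(), n))
    overlap = sum(min(cnt, sys_counts[g]) for g, cnt in model_counts.items())
    total = sum(model_counts.values())
    return overlap / total if total else 0.0

# A system summary sharing more bigrams with the model abstract scores higher.
print(rouge_n("the chains are scored and strong chains are kept",
              "strong chains are scored and kept", n=2))
```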
It is worth noting that the Document Understanding Conference (DUC) adopted ROUGE as the official evaluation metric for text summarization. In spite of its simplicity, ROUGE has shown a high correlation with human judges [17]. At the DUC 2005 conference, it achieved a Pearson correlation of 0.97 and a Spearman correlation of 0.95 compared with human evaluation. Nevertheless, ROUGE has a significant drawback. When measuring the overlap between the system and model summaries, the ROUGE metrics assess lexical matching instead of semantic matching. This means that, for a given document, if a system summary is worded differently from other system summaries but carries identical information, the assigned ROUGE scores may be different. System configuration We performed two sets of experiments to evaluate our proposed method. In this subsection, we describe the first, preliminary set of experiments, which determines the best system configuration. In Section 3.3.4, we define the second round of experiments, which compares our proposed method against other summarizers. We performed a set of preliminary experiments in order to determine the optimal value for the threshold involved in recognizing the important concepts for feature selection (Section 3.2.2). One possible choice for the value of this parameter is given by Eq. 5, which we selected for the evaluation. We also evaluated the performance of our summarization method under two other possible choices for the value of this parameter, calculated using Eqs. 11 and 12. In our preliminary experiments, we further assessed the impact of the two coefficients involved in Section 3.2.5 on the performance of our summarization method. We eliminated the coefficient from Eq. 6 and the coefficient from Eq. 8 and evaluated the system with and without the coefficients. We combined the two configurations of the coefficient impact assessment with the three configurations of the value of the threshold; hence, a total of six configurations were evaluated. The preliminary experiments were assessed according to ROUGE scores. For evaluating the system configuration, we used a separate development set consisting of 25 papers, randomly selected and downloaded from the BioMed Central online library. The abstracts of the articles were used as model summaries. Comparison with other summarizers We compared our biomedical summarization method against six summarizers. Three of the summarizers are research prototypes, namely SUMMA, SweSum and BioChain; one is a commercial application, Microsoft AutoSummarize; and two are baselines, namely a Lead baseline and a Random baseline. The size of the summaries generated by all of the summarizers was set to 30% of the original document. The choice of 30% as the compression rate is based on a well-accepted de facto standard that says the size of a summary should be between 15% and 35% of the size of the original text [57]. In the following, a brief description of the six summarizers is presented. • SUMMA: SUMMA [42] is a popular research summarizer and is available for public use. It is used as a plugin in the GATE architecture for text engineering [58] and must be deployed as processing resources and language resources. It can be utilized as both a single-document and a multi-document summarizer.
SUMMA is customizable based on several statistical and similarity-based features. The customized features are used for scoring the sentences and extracting them for the summary. The features we used for the evaluation were the frequency of sentences' terms, the position of sentences within the document, the similarity of sentences to the first sentence, and the overlap of sentences with the title. • SweSum: SweSum [43] is a multi-lingual summarizer; its text summarization for English, Danish, Norwegian and Swedish is considered state-of-the-art, while that for Persian, French, German, Spanish and Greek is in a prototype state. SweSum uses several features to score the sentences, and the user can specify the weight of each feature. We used the online version of SweSum (http://swesum.nada.kth.se/index-eng-adv.html) for the evaluation. The type of text was set to 'Academic', and these features were used: sentences in the first line of text, sentences containing numerical values, and sentences containing keywords extracted by the summarizer. SweSum provides a function named 'User keywords' that considers user-defined keywords as a measure to score the sentences. We did not use this feature in our evaluation. • BioChain: BioChain [9] is a biomedical summarizer that uses an NLP method, named lexical chaining, for summarization. However, BioChain uses a set of concepts instead of terms and changes lexical chaining to concept chaining. The concepts are extracted from the original document using UMLS, the semantic types are considered the heads of chains, and concepts with the same semantic type are chained together. Those chains that contain the core concepts of the text are identified as strong chains. Then, the most common concepts of each strong chain are identified and used to score the sentences. The high-scoring sentences are extracted, and the final summary is returned. • Microsoft AutoSummarize: Microsoft AutoSummarize is a feature of the Microsoft Word software [59]. It is based on a word frequency algorithm, and a score is assigned to each sentence of a document based on the words it contains. Although the algorithm is not documented in detail, the online help for the product states that higher scores are assigned to sentences containing frequently used words, compared with sentences containing less frequent words. Although word frequency is a simple measure, it is a well-accepted heuristic for summarization. In order to test the statistical significance of the results, we used a Wilcoxon signed-rank test at a 95% confidence level. Results and discussion In this section, we first present the results of configuration and the effect of the aforementioned coefficients in the proposed model. The second subsection presents the results of evaluating the system and comparing it to existing methods. Configuration results We performed a set of preliminary experiments in order to select the best setting for the naïve Bayes summarizer. The initial experiments were conducted to find: (1) the optimal value for the threshold that is used in Section 3.2.2 for identifying main concepts and selecting them as features, and (2) the impact of the two coefficients, involved in Section 3.2.5, on the summarization performance. The central concepts identified by the algorithm are used as features for the classification step.
A higher value of the threshold leads to fewer concepts being identified as important, hence fewer features would be used for the classification. We assessed three possible values for the threshold to determine the value with the more positive impact on summarization performance. The two coefficients used in sentence classification affect the posterior probability of a class value given a sentence vector, based on the degree of importance of the concepts appearing in the sentence. We evaluated the impact of the presence and absence of the two coefficients on the performance of summarization. The two groups of experiments were performed together, in order to select the best combination of the threshold value and the coefficients. The results of the experiments are presented in Table 1. For legibility reasons, only ROUGE-2 and ROUGE-SU4 scores are shown. It can be observed from Table 1 that, according to the ROUGE scores, the use of the coefficients improves the performance of the summarizer, and the best value among the three values of the threshold is two standard deviations above the average of the frequencies. We discuss the results of the system configuration shown in Table 1. Evaluation results To evaluate the performance of our summarization method, we compare the ROUGE scores obtained by our method with the ROUGE scores of the other six summarizers. The other summarizers are described in Section 3.3.4. The results in Table 2 show that the two summarizers that use domain knowledge in the summarization process, i.e. our naïve Bayes summarizer and BioChain, perform better than the general-purpose and baseline summarizers. Moreover, the proposed naïve Bayes summarizer increases the accuracy of summarization, in terms of the informative content quality of the generated summaries, as compared to the other biomedical summarizer. The results obtained by our naïve Bayes summarizer show its effectiveness as a classification method for such modeling requirements. In many cases, several concepts within a biomedical textual document represent the main topics of the text. It seems that identifying these important concepts and utilizing them to represent the sentences as vectors of features is a more accurate approach to modeling the biomedical summarization problem. Moreover, the simplicity of the naïve Bayes classification method helps the summarizer select the most informative sentences based on the distribution of important concepts within the source text. Therefore, the informativeness of the generated summaries is increased, and consequently, the performance of summarization is improved. Conclusion In this paper, a novel biomedical text summarization method was proposed based on the naïve Bayes classifier. Our method extracts biomedical concepts within the document using UMLS and identifies the important concepts that represent the main topics of the text. The identified important concepts are then used as features to classify the sentences as summary and non-summary. There is no need for training data, and the naïve Bayes classifier estimates the prior and posterior probabilities based on the distribution of important concepts within the original document. Besides, a useful hint that helps the estimation of probabilities is that the distribution of important concepts within the summary must be the same as within the source text. The proposed method was evaluated by summarizing a collection of 80 scientific biomedical papers, selected from the BioMed Central online library.
Comparing the results showed that the proposed naïve Bayes summarizer indeed improved the performance of summarization, compared with general-purpose summarizers and baselines. This confirms that in the biomedical domain, the use of domain knowledge and concept-level, rather than term-level, analysis of text can be very useful for improving the informativeness of automatically generated summaries. Moreover, our proposed method performed better than BioChain, which also uses domain knowledge. This indicates that the use of essential concepts to classify the sentences with the naïve Bayes classifier could be a viable approach to automatic summarization. There is no need for training data to estimate the required probabilities, and the method uses the distribution of important concepts within the source text to calculate the probabilities. It also showed a considerable improvement in the quality of summarization. More precisely, we can now answer the questions raised in Section 1. Important concepts are the features that determine which sentences are summary sentences and which are non-summary. There are no training data for this type of summarization, and in fact, a learned model cannot be generalized to classify the sentences of a new document. It is possible to model this summarization approach as a classification problem and deal with it using the naïve Bayes classifier. The classifier estimates the probabilities and classifies the sentences according to the distribution of essential concepts within the original document. While our proposed biomedical summarization method performs well in single-document summarization, it seems that in multi-document summarization, where redundant information is inevitable, the performance of the summarizer may decrease due to the selection of sentences containing redundant information. We will concentrate on addressing this problem in future work.
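To make the classification view of the method concrete, the following schematic Python sketch ranks sentences with a naive-Bayes-style score over important-concept features and extracts the top-scoring ones. It is only an illustration under stated assumptions: the prior is approximated by the compression rate and the likelihoods by document-level concept frequencies, which are rough stand-ins for the probability estimates actually defined in the paper, and all names are hypothetical.

import math
from collections import Counter

def select_summary(sentences, important_concepts, compression=0.3):
    # sentences: list of token lists; important_concepts: set of concepts
    # chosen by the frequency threshold (e.g., mean + 2 * std of frequencies)
    prior_summary = compression          # P(summary) ~ compression rate (assumption)
    prior_non = 1.0 - prior_summary
    freq = Counter(c for s in sentences for c in s if c in important_concepts)
    total = sum(freq.values()) or 1

    def score(sentence):
        log_s, log_n = math.log(prior_summary), math.log(prior_non)
        for c in set(sentence) & important_concepts:
            p = freq[c] / total          # concept's share of the important mass
            log_s += math.log(p)         # frequent concepts favor "summary"
            log_n += math.log(max(1e-9, 1.0 - p))
        return log_s - log_n             # log-odds of the summary class

    k = max(1, int(compression * len(sentences)))
    top = sorted(range(len(sentences)), key=lambda i: score(sentences[i]), reverse=True)[:k]
    return sorted(top)                   # preserve original sentence order

Sentences sharing many high-frequency important concepts obtain a higher log-odds score, mirroring the intuition that the summary should reproduce the document's concept distribution.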
2016-05-10T11:33:33.000Z
2016-05-10T00:00:00.000
{ "year": 2016, "sha1": "b9560d6472f1536396e286174a5eeae5306cd7a6", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "b9560d6472f1536396e286174a5eeae5306cd7a6", "s2fieldsofstudy": [ "Computer Science", "Medicine" ], "extfieldsofstudy": [ "Computer Science" ] }
215798873
pes2o/s2orc
v3-fos-license
Molecular dynamics analysis of conserved water mediated inter-domain recognition of His667-Trp669 in human ceruloplasmin Human ceruloplasmin (hCP) is a copper-containing ferroxidase enzyme with multifunctional activities (NO-oxidase, NO2-synthase, oxidation of neurotransmitters, and antioxidant activity). Therefore, it is of interest to probe the multi-domain hCP using molecular dynamics simulation. The results explain the role played by several conserved water centers in intra- and inter-domain recognition through H-bond interactions with the interacting residues. We observed seventeen conserved water centers in inter-domain recognition. We show that five invariant water centers W13, W14, W18, W23 and W26 connect Domain 5 to Domain 4 (D5…W…D4). We also show that five other water centers W19, W20, W27, W30 and W31 connect Domain 5 to Domain 6 (D5…W…D6), which is unique in the simulated form. The W7 and W32 water centers are involved in the D1…W…D6 recognition. This is important for the water-mediated interaction of Glu1032 with the trinuclear copper cluster present at the interface between these domains. The involvement of the W10 water center in the D3…W10…D4 recognition through the Gln552…W10…His667 H-bond interaction is critical in the complexation of CP with myeloperoxidase (Mpo). These observations provide insights into the molecular recognition of hCP with other biomolecules in the system. Nevertheless, the multifunctional enzyme needs to have a well-defined and organized structure in order to maintain its functionality in the physiological system, though some degree of flexibility of the domains and their recognition is required for catalytic activity or interaction with other macromolecular systems. The structural and functional roles of conserved water molecules and their contribution to the intra- and inter-domain recognition of proteins or metalloenzymes are well known [7,8]. However, due to the complexity of the CP structure, the role of conserved water molecules in inter-domain recognition, and hence their influence on the stability of the overall structure or function, is still not clear owing to the limited resolution of the available crystal structures. Until now, only one complex structure of CP with myeloperoxidase (the protein (Mpo) which is involved during inflammation) has been solved, at 4.69 Å resolution by small-angle X-ray diffraction (PDB Id 4EJX) [9]. Nevertheless, the coupling between acidic and basic residues (of different domains) through conserved water molecules is also thought to be an important aspect of intra/inter-domain stabilization; moreover, in a few cases those water centers may also participate in redox coupling or proton exchange reactions. MD-simulation studies have provided key insights into the importance of conserved water molecules in intra/inter-domain recognition and their influence on the stability of the CP structure. Furthermore, the conserved water mediated recognition between Gln552 and His667, along with the conformational dynamics of His667 and Trp669, has also been investigated because of its importance in the complexation of ceruloplasmin with the macromolecule myeloperoxidase (Mpo). Material and Methods: The PDB structure Id: 2J5W of CP [10], having 2.8 Å resolution, was used for the MD-simulation studies. In the asymmetric unit, a few metal ions, small organic molecules and 341 water molecules were present along with the ceruloplasmin molecule.
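As a small illustration of how the crystallographic contents described above can be inspected programmatically, the following Python sketch counts the water molecules and copper ions in the asymmetric unit. It assumes the Biopython package is installed and that the 2J5W coordinate file has been downloaded locally; it is a convenience check, not part of the simulation protocol itself.

from Bio.PDB import PDBParser  # assumes Biopython is available

structure = PDBParser(QUIET=True).get_structure("2J5W", "2J5W.pdb")

waters, coppers = 0, 0
for residue in structure.get_residues():
    name = residue.get_resname().strip()
    if name == "HOH":      # crystallographic water
        waters += 1
    elif name == "CU":     # copper ion
        coppers += 1

print("water molecules:", waters)   # expected: 341 for 2J5W
print("copper ions:", coppers)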
The numbering scheme for copper ions, amino acid residues, and water molecules followed that of the 2J5W crystal structure. Structure preparation: A) (i) O2-bound hCP (from the 2J5W PDB structure): The N-acetyl-D-glucosamine (NAG) molecules, an oxygen atom near Cu3049, glycerol molecules, a Ca2+ ion and one extra Cu2+ ion (which were incorporated in the crystal during crystallization) were removed from the 2J5W PDB structure. The missing residues at sequences 476-482 (Tyr-Asn-Pro-Gln-Ser-Arg-Ser), 885-889 (Tyr-Leu-Lys-Val-Phe) and 1042-1046 (Asp-Thr-Lys-Ser-Gly) were successively added to the protein structure. The six integral copper atoms of the trinuclear cluster and the T1 mononuclear centers, along with the O2 molecule, were kept fixed at their respective crystallographic positions. Successive energy minimization of the structure was performed by the steepest descent (1000 steps) and conjugate gradient (2000 steps) methods. The final protein structure was checked by superimposing it on the 2J5W crystal structure, and the stereochemical arrangements of the residues were verified using a Ramachandran plot. (ii) O2-bound hCP (from the 4ENZ PDB structure) [9]: Superimposition of the 2J5W and 4ENZ crystal structures gave an RMSD value of 0.34 Å. The 4ENZ crystal structure (2.6 Å resolution) of CP contains only 82 water molecules, so we also compared the water molecule positions in both PDB structures. The N-acetyl-D-glucosamine (NAG) molecules, glycerol molecules (and the other ions incorporated in the crystal during crystallization) were removed from the 4ENZ PDB structure. The missing residues at the different sequences were successively added to the protein structure. All the copper atoms of the trinuclear cluster and the T1 mononuclear centers, together with the O2 molecule, were kept fixed at their respective crystallographic positions. Successive energy minimization of the structure was done by the steepest descent (1000 steps) and conjugate gradient (2000 steps) methods. The final protein structure was checked by superimposing it on the 4ENZ crystal structure, and the stereochemical arrangements of the residues were verified using a Ramachandran plot. B) Apo structure of CP: The six copper atoms and the O2 molecule were removed from the final energy-minimized modelled structure of CP (built from the 2J5W PDB structure). Energy minimization was then performed to remove steric clashes and abnormal bond lengths and angles. Finally, the stereochemical arrangement of all the residues was checked again by Ramachandran plot. The standard deviation of the protein backbone between the simulated structures of the 2J5W and apo forms of hCP was 0.08 Å. Identification of conserved water molecules: The 3DSS server [11] and the Swiss PDB Viewer program [12] were used to identify the conserved water molecules among the MD-simulated and X-ray structures. The 2J5W PDB structure [13] was taken as the reference, and the MD-simulated structures at different times of simulation were successively superimposed on it. The cut-off distance between pairs of superposed water molecules was taken to be 1.8 Å, and only those water molecules were considered which have at least one hydrogen bond with a protein residue [14,15].
When a water molecule is found at a particular position (or within 1.8 Å of it) in the X-ray structures of a macromolecule, or has a high residential frequency (~98-100%) at that site during simulation, it is considered a static conserved water molecule (site); on the contrary, when that water site in the X-ray structure is occupied by different water molecules at different times of the simulation, that hydrophilic site is defined as a dynamic conserved water center. Molecular dynamics (MD) simulation: Molecular dynamics simulations of both O2-bound CP structures (2J5W and 4ENZ) and the apo form of the ceruloplasmin structure were performed with NAMD v.2.6 [16,17] using the CHARMM36 force field [18][19][20]. For the O2-bound CP structure, the charges for the copper atoms (Cu3046: 0.7937, Cu3047: 1.4304, Cu3048: 1.4957, Cu3049: 1.4158, Cu3051: 0.7108, Cu3052: 1.0425) and the oxygen molecule (O1: -0.5084, O2: -0.5309) were obtained from our previous studies [5], and they were successively assigned to the respective copper atoms and the oxygen atoms of the O2 molecule in both the 2J5W and 4ENZ structures. Both the apo and O2-bound CP structures were then converted to Protein Structure Files (PSF) by the Automatic PSF Generation plugin within the VMD program v.1.9.2 [21]. All the crystal water molecules, 341 in 2J5W and 82 in 4ENZ, were added to the respective structures. In the apo structure, the water molecules of the 2J5W structure were added accordingly. All the water molecules of the structures were then converted to the TIP3P water model [22]. Each system was then neutralized by adding an appropriate number of sodium and chloride ions. Subsequent energy minimizations of the structures were performed by the conjugate gradient method. The process was conducted in two successive stages: an initial energy minimization was performed for 1000 steps with the backbone atoms fixed, followed by a final minimization of 2000 steps carried out for all atoms of the system to remove residual steric clashes. Each energy-minimized structure was then simulated separately at 310 K temperature and 1 atm pressure by Langevin dynamics [23] using periodic boundary conditions. The Particle Mesh Ewald method was applied for full electrostatics, and the Nose-Hoover Langevin piston method was used to control the pressure and dynamical properties of the barostat. Then, for each structure (the apo form and the two O2-bound forms of CP), water dynamics was performed for 2 ns by fixing the protein residues and allowing the water molecules to move freely. All-atom molecular dynamics simulations of 50 ns were then carried out separately for both the apo and O2-bound human ceruloplasmin (2J5W-modeled) structures. Moreover, a 50 ns MD-simulation of the 4ENZ-modeled structure was also done. Atomic coordinates were recorded every 2 ps for analysis. For each simulated structure, the root mean square deviation (RMSD) of the MD structures was calculated (taking the X-ray structure as the reference molecule) using the RMSD trajectory tool in VMD (Figure 1). Several water molecules are observed to be involved in intra- and inter-domain recognition in both the X-ray and MD-simulated structures of CP through H-bond interactions with the residues. In total, thirty-four water molecules are found to be conserved in both the X-ray and MD-simulated structures of the 2J5W PDB structure. Among these, seventeen water centers play a role in the stabilization of intra-domain residues, whereas the other seventeen centers are involved in inter-domain recognition, as given in Table 1.
The occupation frequencies (O.F.) of those water sites are also included in that table. The conserved water centers involved in inter-domain recognition are shown in Figure 2. The interactions of the conserved water centers with the different residues in the X-ray and MD-simulated structures of 2J5W are given in Table 2 (hydrogen bonding interactions of the residues with the different conserved hydrophilic water sites during the simulation of O2-bound ceruloplasmin and in the X-ray structure, PDB Id 2J5W). Conserved water molecules in intra-domain recognition: Among the seventeen conserved water molecules (Tables 1 and 2), three water centers (W1, W2 and W3) are observed to interact with the residues of D1, W5 interacts with D2, two water centers (W8 and W9) with D3, three water centers (W11, W15 and W17) recognize D4, four water centers (W21, W22, W24 and W25) interact with D5, and the other four water centers (W28, W29, W33 and W34) interact with the D6 domain. The static or dynamic character of the conserved water centers during the simulation of CP is indicated in Table 1. Fifteen water centers (W1, W2, W3, W5, W8, W9, W11, W15, W21, W24, W25, W28, W29, W33 and W34) are observed to have ~100% occupation frequency (O.F.), and the remaining two (W17 and W22) have ~95%. Stabilization of the intra-domain residues through conserved water (W1, W3, W8, W26) mediated salt-bridge interactions (acidic···water···basic) has also been observed in the simulated structures of 2J5W (Table S2); however, some of them were not found in its crystal structure. Superimposition of the 4ENZ crystal structure on the 2J5W crystal structure also revealed the presence of eight invariant water molecules at the W2, W9, W21, W24, W25, W28, W29 and W33 sites (Table 1). The interactions of the water molecules with the residues are found to be almost the same in the crystal and simulated structures of 2J5W. Moreover, simulation studies of the apo-ceruloplasmin structure also revealed the presence of thirteen to fourteen static conserved water centers with ~100% O.F. (Table S1), which were also observed to be static in the 2J5W crystal and simulated structures with 100% O.F. Compiling all these results, it may be presumed that at least the four invariant water centers W2 (W2040), W5 (W2066), W24 (W2270) and W29 (W2300) play a role in the structural stabilization of the respective D1, D2, D5 and D6 domains. Table S2: Intra- and inter-domain recognition through direct and conserved water mediated salt-bridge interactions of the residues (Acid···Water···Base) in the X-ray and MD-simulated (2J5W) structures of human ceruloplasmin; all distances are given in Å. In the simulated structure of 2J5W, besides these water-mediated couplings between the acidic and basic residues, some of those residues are also stabilized by a direct fork-fork type of salt bridge between Glu844 and Arg652, where the OE1···NH2 and OE2···NH1 distances varied from 2.7-3.1 and 2.74-3.2 Å.
The Glu844 (OE1) residue also forms a water (W26) mediated salt bridge with Arg882 (NH1). A similar fork-fork geometry is found in the salt bridge between Glu784 and Arg945, where the OE1···NH2 and OE2···NH1 distances ranged from 2.7-3.2 and 2.63-3.23 Å, respectively. Moreover, a fork-stick type of geometry is observed in the salt bridges Glu207(OE1)···Lys50(NZ) and Asp671(OD2)···Arg845(NH1), where the distances ranged from 2.5 to 3.0 and 2.51 to 2.8 Å, respectively; these interactions were also observed in the 2J5W X-ray structure. Role of water molecules in the conformational dynamics of His667 and Trp669: MD-simulation studies have also revealed the importance of a conserved or pseudo-conserved water center (W10) in the recognition of the D3···W10···D4 domains. The influence of that water molecule on the conformational dynamics of the His667 and Trp669 residues has also been observed. In the 2J5W and 4ENZ PDB structures these two residues are observed to be stabilized by a stacking interaction (at a distance of ~4 Å; Tables 1 and 2). Nevertheless, such a stabilization mechanism of the His667 rotamer has also been observed in the simulated structure of CP, though there were some variations in the torsion angles. During the simulation of the 4ENZ structure, His667 shows two preferred conformations, I and II, where the χ1 and χ2 values are ~176° and 50° (for I), and ~-75° and -80° (for II). Rotamer I of His667 exists from 0 to 6.2 and 19.5 to 37.8 ns, whereas rotamer II exists from 6.25 to 19.45 and 37.83 to 50 ns. The variation of the torsion angles of that residue with time is shown in Figure 3. From the initial stage, conformation I of His667 is stabilized by a stacking interaction with Trp669 up to 6.2 ns. However, after adopting conformation II at ~6.25 ns, the residue is stabilized by the water-mediated inter-domain D3···W10···D4 interaction through His667(NE2)···W10···Gln552(OE1) H-bonds. After 19.5 ns, His667 reverts to conformation I, and the water molecule (W1202) is observed to migrate from that conserved site (W10) at ~20 ns. The occupation frequency of the W10 water center is observed to be ~40%. The imidazole residue again adopts conformation II at ~37.83 ns, which persists up to 50 ns. However, during the entire simulation period, Trp669 stays almost at its position (conformation I), where the χ1 and χ2 values are ~-65° and 104°, as shown in Figure 3. During the simulation of the 2J5W structure, His667 is found to be stabilized by Trp669 through a π…π interaction up to 0.5 ns; after that period, the histidine adopts conformation II, which persists up to ~50 ns (the χ1 and χ2 values are ~-75° and -80°), and it is stabilized by the Gln552-bound water molecule W2126 through the His667(NE)···W2126···Gln552(OE) H-bond interaction, where the His667(NE)···W2126 and Gln552···W2126 distances varied from 2.86 to 3.11 and 2.65 to 2.98 Å. In fact, the W2126 water molecule occupies the conserved water site W10 with ~100% O.F. During the simulation of hCP, Trp669 shows different conformations: initially, from 0 to 9.98 ns, it stays almost at its initial position in conformation I (torsion angles χ1 and χ2 of ~-68° and 98°). After ~10 ns the indole ring is displaced parallel to its initial position and adopts conformation II (χ1 and χ2 angles of ~68° and -100°), which persists up to ~17.5 ns; here the residue is stabilized by H-bond interaction with water molecules (W···Trp669(NE)). Conformation II (of Trp669) reappears at ~21.9 ns and persists up to ~40 ns.
After that period, tryptophan adopts conformation III (χ1 and χ2 values of ~25° and -100°), in which the indole ring lies almost perpendicular to the previous conformation I; it is thus stabilized by a Trp669(π)···water (W2175) interaction. The variation of the occupation frequency of the Gln552-bound water molecule (W10) between the two simulated structures may arise from the lower number of water molecules in the asymmetric unit of the 4ENZ crystal compared to the 2J5W structure. In ceruloplasmin, several conserved water molecules play a role in inter-domain recognition and structural stabilization. The W7 water center plays a role in the interaction of Glu1032 with the trinuclear copper cluster. It is interesting to observe the role of a conserved water molecule in the dynamics of the His667 and Trp669 residues, which may be important for the interaction of CP with the macromolecule myeloperoxidase (Mpo). Possibly, the nature of these conserved water centers and their interactions with the intra- and inter-domain residues are also important for keeping the proper structural flexibility of this multifunctional enzyme and for the recognition of hCP by other biomolecules. Conclusion: Molecular dynamics analysis of the O2-bound ceruloplasmin structure shows 34 conserved water sites. We observed that 17 centers directly interact with and stabilize the intra-domain residues through H-bonds. The 17 other water centers are involved in inter-domain recognition and are connected with the inter-domain residues through conserved water mediated H-bonds. The four invariant water molecules at the W2, W5, W24 and W29 sites are involved in the structural stabilization of ceruloplasmin. We report 10 conserved water centers involved in the inter-domain stabilization of domain 5 (D5). The 5 water centers W13, W14, W18, W23 and W26 are connected with domain 4 (D5···W···D4). Moreover, the 5 other water centers W19, W20, W27, W30 and W31 are involved in D5···W···D6 recognition. The W7 and W32 water centers connect the D1 domain to the D6 domain through H-bonds. These water-mediated interactions (Glu1032···W7···Cu-cluster) are important to the electron transfer process of hCP, as the trinuclear copper cluster is situated at the interface between domains 1 and 6, as described elsewhere [5]. The water molecule at the W10 center participates in the D3···W10···D4 recognition through the Gln552···W10···His667 H-bond interaction, which stabilizes the complexation of CP with myeloperoxidase (Mpo). The conserved water mediated interactions of the residues and their involvement in inter-domain stabilization have implications for the recognition biology of CP with other biomolecules.
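The conserved-water bookkeeping used throughout this analysis (matching superposed water oxygens to reference sites within 1.8 Å and accumulating occupation frequencies over the trajectory) can be expressed compactly. The following Python/NumPy sketch is a minimal illustration of that criterion; it assumes the water oxygen coordinates have already been extracted from superposed frames, and the array names are hypothetical.

import numpy as np

CUTOFF = 1.8  # Angstrom, distance criterion for matching water sites

def occupation_frequency(reference_sites, frames):
    # reference_sites: (N, 3) array of crystallographic water oxygen positions
    # frames: list of (M, 3) arrays of water oxygen positions per superposed frame
    hits = np.zeros(len(reference_sites))
    for frame in frames:
        # pairwise distances between reference sites and this frame's waters
        d = np.linalg.norm(reference_sites[:, None, :] - frame[None, :, :], axis=-1)
        hits += d.min(axis=1) < CUTOFF   # site occupied if any water is close
    return hits / len(frames)            # fraction of frames occupied (O.F.)

# Toy example: one reference site, occupied in one of two frames
ref = np.array([[0.0, 0.0, 0.0]])
frames = [np.array([[0.5, 0.0, 0.0]]), np.array([[3.0, 0.0, 0.0]])]
print(occupation_frequency(ref, frames))  # -> [0.5]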
2020-03-26T10:43:22.274Z
2020-03-30T00:00:00.000
{ "year": 2020, "sha1": "d8de4f40d726086f4093bf085f9d8c5e37e4d611", "oa_license": "CCBY", "oa_url": "http://www.bioinformation.net/016/97320630016209.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "9327e2c553bc0d9ebbb9c5a0e95fdeab9d5c4f3c", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
90303874
pes2o/s2orc
v3-fos-license
Using molecular ecological network analysis to explore the effects of chemotherapy on intestinal microbial communities of colorectal cancer patients Intestinal microbiota is now widely known to play key roles in nutrition uptake, metabolism, and the regulation of human immune responses. However, we do not know how the intestinal microbiota changes in response to chemotherapy. In this study, we used network-based analytical approaches to explore the effects of five stages of chemotherapy on the intestinal microbiota of colorectal cancer patients. The results showed that chemotherapy greatly reduced the alpha diversity and changed the species-species interaction networks of the intestinal microbiota, as shown by the network size, network connectivity and modularity. OTU167 and OTU8, from the genera Fusobacterium and Bacteroides, were identified as keystone taxa by the molecular ecological networks in the first two stages of chemotherapy, and were significantly correlated with tumor markers (P < 0.05). Five stages of chemotherapy did not make the intestinal micro-ecosystem regain a steady state, given the lower alpha diversity and more complicated ecological networks compared to healthy individuals. Furthermore, combining the changes of the ecological networks with the tumor markers, the intestinal microbiota was closely linked with clinical chemotherapeutic effects. Importance A deep understanding of the role of intestinal microbiota helps us find a path forward for improving the prognosis of colorectal cancer patients. In addition, diet or probiotic interventions will be a possible attempt to improve the clinical chemotherapeutic effects for colorectal cancer patients. Using the microbiota as a marker for detecting the disease is feasible (5, 6). As a consequence, the plasticity of intestinal microbiota can be leveraged for therapeutic interventions (7) and to improve therapeutic effect (8). Cytotoxic drugs continue to be the mainstay of therapy for most CRC patients, whereas the related treatment response is unpredictable. Personalized cancer therapies are now emerging, and targeted therapies have revolutionized outcomes in CRC (9). However, novel problems still appear, such as idiosyncratic adverse effects, acquired resistance and high costs (10,11). Recent studies have implicated intestinal microbiota at the species level in influencing the drug response and toxicity of CRC patients (12). Drug metabolism by intestinal microbiota has been well recognized since the 1960s (13). Intestinal microbiota plays a key role in determining the efficacy and toxicity of a broad range of drugs (14). With the development of high-throughput sequencing, the importance of intestinal microbiota for drug modulation and discovery is increasingly recognized (15). Timothy A. Scott et al. reported that intestinal microbiota had an influence on fluoropyrimidines, which are the first-line treatment for CRC, through drug interconversion involving bacterial vitamin B6, B9, and ribonucleotide metabolism (16). Leah Guthrie et al. suggested that metagenomic mining of the microbiome, associated with metabolomics, is a non-invasive approach to developing biomarkers for CRC treatment outcomes (2). These findings highlight intestinal microbiota as a potential therapeutic lever to ensure host metabolic health and disease treatment.
To identify intestinal microbial assemblages that potentially interact or share niches within the intestine, we used molecular ecological network analysis (24). The resulting networks showed scale-free behavior, meaning that only a few nodes in the network have a great many connections, whereas most nodes have no or few connections (25), as shown by the R2 of the power law ranging from 0.72 to 0.87 (Table 1). Lower modularity values (0.40-0.50) in T samples indicated that these networks could hardly be separated into multiple modules. The networks from T0 to T5 presented no conspicuous regularity, as indicated by the changes of the average connectivity (avgK), harmonic geodesic distance (HD) and modularity (Table 1). In addition, the constructed networks were significantly different from random networks obtained using identical numbers of nodes and links (P < 0.05, Table 2). These metrics suggested that the network structures were non-random and unlikely to be due to chance. The intestinal microbial co-occurrence patterns were profoundly different between the networks of colorectal cancer patients and healthy individuals (Figure 1), which was also shown by multiple network topological properties (Table 1). Healthy assemblages formed larger networks with more nodes than the CRC patients' networks (Table 1). As the treatment stages went on, the number of nodes in CRC patients increased, ranging from 99 to 103, except for T3 (80). However, the CRC networks contained more links between nodes than the healthy networks, which increased the network complexity (Table 1). The increased complexity of the T networks was also reflected by the shorter harmonic geodesic distances (HD) and the increased average degree (avgK) (24). However, neither the network size nor the connectivity in CRC patients was close to that of the healthy individuals. Collectively, the above results indicated that the networks in CRC patients did not present a regular change as the treatment stages increased, and did not fully converge to the healthy network. In addition, we examined the correlation of alpha diversity with network size and connectivity for CRC patients according to multiple univariate diversity metrics. The results showed that there were no significant relationships between them (P > 0.05), except that the network size was significantly positively correlated with richness (P = 0.025, r = 0.87, Figure S2). To identify microbial assemblages that potentially share or interact within intestinal niches during the treatment process of CRC patients, we focused on representative networks from CRC patients at the five treatment stages. We focused on modules with at least five nodes, and visualized the phylogeny for these modules (Figure 2). A total of 8 modules were detected in the healthy individuals. There were 6, 6, 5, 6, 7 and 7 modules, respectively, in the T0, T1, T2, T3, T4 and T5 networks (Table S2). Networks from all CRC patients contained modules, with modularity (M) values < 0.50 (Table 1). Overall, taxa tended to co-exclude (negative correlations, blue lines) rather than co-occur (positive correlations, gray lines); negative correlations accounted for 47-75% of the potential interactions observed at each treatment stage (Figure 2). The negative correlations in CRC patients decreased from T0 to T5; however, they were still more numerous than in the healthy individuals (45%). The composition of modules differed within each network and changed over the treatment stages (Table S3, Table S4 and Table S5).
The nodes in each network are given in Table S4. In addition, we selected some key OTUs and tumor markers (CA242, CEA, CA199, and CA724) for Spearman correlation analysis to explore the linkages between the microbial correlation networks and clinical chemotherapeutic effects. The results showed that OTU167, OTU8 and OTU9 were significantly correlated with CEA, CA724 and CA242 (P < 0.05, Table S6). However, this result was inconsistent with the changes in the ecological networks of the intestinal microbiota, which were the lowest after the third chemotherapy (T3, Table S2). Pathological disease states might create a state of dysbiosis, and chemotherapy further exaggerated the effect on the intestinal microbiota. The intestinal microbiota might adapt to the effect of chemotherapy by increasing the connectivity or the complexity of species-species interactions, as shown by the higher numbers of links (Table 1). Therefore, there was a dissimilarity between the changes in alpha diversity and in the ecological networks of the intestinal microbiota. The identified modules within the networks probably arise from microbe-microbe interactions. Modularity is used to demonstrate whether a network is naturally divided into distinct sub-groups (47). Average degree (avgK) represents the complexity of the network.
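The topological indices reported above (network size, number of links, average degree avgK, harmonic geodesic distance HD, and modularity) are straightforward to compute once a co-occurrence network has been constructed. The following Python sketch assumes the networkx package and uses a random graph as a stand-in for a real OTU network; the random-matrix-theory-based network construction itself is not reproduced here.

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def network_properties(graph):
    # Topological indices in the spirit of Table 1
    n, m = graph.number_of_nodes(), graph.number_of_edges()
    avg_k = 2.0 * m / n                       # average degree (avgK)
    communities = greedy_modularity_communities(graph)
    modularity = nx.community.modularity(graph, communities)
    # harmonic mean of geodesic distances, taken here over reachable node pairs (HD)
    dists = [d for _, row in nx.all_pairs_shortest_path_length(graph)
             for d in row.values() if d > 0]
    hd = len(dists) / sum(1.0 / d for d in dists)
    return {"nodes": n, "links": m, "avgK": avg_k,
            "modularity": modularity, "HD": hd}

g = nx.erdos_renyi_graph(100, 0.05, seed=1)   # stand-in for an OTU network
print(network_properties(g))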
2019-04-02T13:14:23.486Z
2018-05-27T00:00:00.000
{ "year": 2018, "sha1": "c39f0b1e76bfb407187e2863a0f57a048fc773a3", "oa_license": "CCBYNC", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2018/05/27/331876.full.pdf", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "58e62ca11180c7046f7f65cb30cf84fdd5669fab", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
149739054
pes2o/s2orc
v3-fos-license
The Role of Psychological Hardiness on Performance of Scissors Kick The main purpose of this study was to find the effect of psychological hardiness on the performance of the scissors kick in football. The present research was practical in terms of purpose and correlational in terms of nature. The statistical population of the research was Ataturk University female students (18-24 years old) unfamiliar with the football scissors kick, in 2018. Thus, 28 participants were randomly selected, based on the Morgan table, from 30 volunteers. In this study, two questionnaires, the hardiness questionnaire and the Charbonneau sport performance questionnaire, were used to collect data. To test the hypotheses, a regression test was used. In this research, SPSS 20 software was used for all calculations and tests. The results showed that psychological hardiness plays a significant role in the performance of the scissors kick. It is one of the most important characteristics of a successful athlete. Introduction In the last few decades, sports psychology has become an important part of sport, especially for coaches, before the time of competition and during training and exercises. One of the applications of psychology in sport is to help improve performance, and to learn and execute skills more correctly and more easily, using different methods and techniques such as mental training, self-confidence and self-efficacy. Admittedly, these psychosocial skills are the main and most prominent components of continuous performance at high levels of competition (Newland et al., 2013). The inability to cope effectively with sport stress is detrimental to the athlete's performance (Stoeber, 2001). However, it seems that the effect of stress on performance depends on individual differences among athletes, because some players or coaches manage stressful and difficult situations better than others. Kobasa (1979) stated that the distinct responses of individuals to the same stressful events reflect a personality difference, best described as psychological hardiness (Kobasa, 1979). Mental hardiness is a practiced skill that helps you to deal with challenging situations. Psychological hardiness is a comprehensive term that indicates the strength of your mental game. Mental tenacity does not guarantee your victory, but it helps you to withstand hard pressures and will allow you to turn such opportunities toward your success. Sport psychology defines mental tenacity as the athlete's ability to stay focused, stay motivated, and stay committed to achieving goals, especially when confronting hardships and failures. It is not something reserved for the day of the game itself. If mental hardiness is a skill, then it can be developed alongside technical and physical skills. According to Kobasa, hardiness, as a restraint and protection barrier, is structured around three components: commitment, control and challenge (combat). Belief in commitment, as opposed to
self-alienation, is the tendency to be deeply involved in whatever one is doing. Committed people do not disengage in high-stress situations. Control is the belief that life events and their consequences are predictable and controllable. A person with a sense of control relies on effort rather than luck, and believes that he or she can manage what is happening around them through their own efforts. Challenge (combat) is the belief that change is a natural part of life, from which positive perceptions can be drawn (Ucan, 2018). The skills required to perform correctly are called technique. Literally, skill means agility and mastery. Skill is one of the most controversial topics in the field of exercise and sports science. Human beings in the 21st century, in contrast to the past century, have more than ever shaped their own lives and environment due to the advancement of technology, all of which, of course, depends on the teaching and practice of different skills. Undoubtedly, the importance of performing skills in human development goes far beyond its apparent scope. Therefore, since the environment of human life is always subject to change, humans have to perform skills to overcome these transformations. In this discussion, we talk about skills in the field of movement science and sport (Ucan, 2018). What may happen during a match, such as bad decisions by a referee, can cause your calm and self-confidence to fluctuate, and a high level of combat readiness is important for a competitive athlete. Therefore, this issue is of paramount importance. Mental toughness has historically been one of the most used but least understood terms in sport psychology. However, despite the apparent breadth of opinion, a general definitional consensus is emerging from the literature, reflecting the cognitive-behavioural, multivariate nature of the construct (Sheard, 2009). The three hardiness characteristics amount to the existential courage that motivates athletes to work hard at transforming potentially stressful situations into opportunities. As such, hardiness is a pathway to resilience under stress, where performance is enhanced by active or decisive coping efforts in stressful situations. In addition to evidence from rugby league, and sport in general (Sheard & Golby, 2006a), the positive influence of hardiness on performance has been reported in such diverse samples as human resource consultants and university undergraduate students (Sheard & Golby, 2007; Sheard, 2009).
According to the WHO, fifty-two million people in the world suffer from severe mental diseases, and as many as 250 million people are grappling with mild diseases. Research-based evidence suggests that there is a positive and significant relationship between psychological hardiness and mental health (Hasanvand et al., 2014). Sheard (2009) revealed that the Australian Universities players had significantly higher mean scores on positive cognition, visualization, total mental toughness, and challenge than their opponents from Great Britain. The Australian Universities players were also the tournament winners. The findings concur with previous research indicating that superior mental toughness and hardiness are related to successful sport performance. Practical implications focus on the potentiality of ameliorative cultural environments. In a study by Singh (2010), it was revealed that athletic coaches were significantly younger, significantly greater in the control disposition of personality hardiness, and had significantly less competition anxiety when compared with wrestling coaches (Singh, 2010). Hasanvand et al. (2014) revealed that there was a positive and significant relationship between mental health and emotional intelligence and its components (self-motivation, self-consciousness, self-control, social awareness, and social skills) and psychological hardiness. They believed that by promoting psychological hardiness through increasing mental health and emotional intelligence, we can overcome stressful and anxiety-producing factors, as well as factors resulting in most psychological problems. Newland et al. (2013) found that among firefighters, in addition to findings on maintaining health, muscular breakdown and heart disease, there was a significant relationship between marital status and body mass. Hystad et al. (2012) confirmed the factors of hardiness by factor analysis and their meaningful positive relationships among male employees. Thomson (2017) states that as elite rugby players increase their hardiness, their performance increases; stress control also increased hardiness and performance. Lin et al. (2017) showed that the stress, depression and anxiety levels of individuals decrease as their hardiness increases; as their mental health strengthens, their hardiness levels approach their level of physical fitness. Eris (2018) studied the effects of physical fitness and mental hardiness on the performance of elite basketball players in Turkey; physical fitness affected psychological characteristics and performance. Since their importance is clear to everybody, we need to plan to enhance athletes' psychological and mental health. Competitive conditions, the stress of overcoming challenges and the demand for optimal sport performance have a negative effect on the mental health of elite athletes. Lin et al. (2017) showed that personality traits lead to the use of problem-solving strategies, hardiness and mental health, and that hardiness is a good predictor of mental health and performance. Given the above, the question is what role psychological hardiness plays in the performance of the sport skills of female footballers. Method In terms of purpose, the study was conducted as descriptive (correlational) research. For analyzing the research data, regression correlation analysis was used.
Society and Sampling Method The statistical population of the study was all Ataturk University female students who voluntarily participated in this study, numbering 30, in 2018. Based on the Morgan table, 28 of them were randomly selected for this research. Long and Gullet Hardiness Questionnaire (2001) One of the characteristics of personality that is stressed as a moderator is psychological hardiness, which means endurance, ability, and tolerance in difficult situations (Jafari et al., 2010). Hardiness refers to the performance of a person based on cognitive assessment, and includes three components: commitment, control and challenge. The questionnaire is a self-report scale that consists of 42 questions and was developed by Long and Gullet (2001). Charbonneau Sports Performance Questionnaire The questionnaire was developed in 2001 by Charbonneau. The questionnaire has five questions on a Likert scale and is designed to evaluate the performance of athletes; it is completed by the respective instructor for each athlete. The scores derived from the five questions show the final scores of the athlete's performance. Each question is on a scale from 1 (very poor) to 5 (very special) (Charbonneau, Barling and Kelloway, 2001). The scores derived from the five questions are summed, and the final score of the athlete's performance is obtained; this final performance score is in the range of 5 to 25 (minimum to maximum). The average of the reliability coefficients of this questionnaire, as calculated by Charbonneau, is 0.71. Data Analysis Method In order to analyze the data obtained from the collected questionnaires, regression was used. In this research, SPSS 20 software was used for all calculations. Results The frequency distribution and percentage of the statistical sample based on the age of the participants are given in Table 1. As shown in Table 2, the significance level of the Pearson test is 0.000, which is less than the threshold of 0.05; furthermore, the calculated Pearson correlation coefficient of 0.367 is greater than the critical value of 0.197. Therefore, psychological hardiness affects the performance of the sport skills of the Erzurum footballers. It can be seen from Table 3 that the significance level of the corresponding test is 0.000, so it can be argued that the above test is significant with a 0.05 error, or a confidence level of 0.95. The coefficient of determination R2, which is the ratio of the changes explained by the x variable to the total changes, is 0.113; it can thus be argued that 11.3% of the variation in skill performance is explained by changes in psychological hardiness. Discussion and Conclusion The results of the research showed that psychological hardiness is effective on the performance of skills. The research is consistent with Sheard (2009), Singh (2010), Thompson (2017), and Hasanvand et al.
(2014). Also, among the components of hardiness, the challenge (combat) component was a significant predictor of athletic performance and explained 3.9% of the variance in performance. Although all the features were considered important by the subjects and necessary for ideal performers in terms of mental tenacity, it remained to be determined, according to the estimates, whether they change with the focus of the sport or not. Research on mental hardiness has focused on elite athletes (e.g., national-level athletes), and existing measurements have been developed based on studies of elite athletes. Studies of university athletes have received little attention, and it is possible that psychological hardiness instruments for elite athletes are more standardized than those for university athletes. The findings showed that hardy athletes had better performance. Therefore, it seems that evaluating and strengthening hardiness can be used with the aim of increasing performance. According to the WHO (2011), mental health is a state of complete physical, psychological and social well-being, and not just the absence of disease. For this organization, mental well-being, the prevention of mental illness, and the treatment and rehabilitation of people suffering from mental problems are also included in mental health (Eris, 2018). In explaining this, we can point to Gibson, who stated that psychological hardiness is related to an internal locus of control and self-efficacy. This intrinsic advantage, acquired over years of experience, enables performers to have outstanding self-regulation skills. Generally, those who are psychologically hardy are more focused, more confident and more in control under pressure and at the high level of performance they compete at. Psychological hardiness will reduce the impact of stressful events and the physical and mental arousal resulting from these events, thus leaving a positive impact on people's health (Sheard & Golby, 2007). Besharat (2007) demonstrated that people with high psychological hardiness handle stressful conditions better than people with low psychological hardiness, and that the former group uses more effective coping strategies. Psychological hardiness protects youth against psychological problems and the psychological impacts of problematic events (Pinquart, 2009). This will lead to the promotion of problem-solving skills among people (Salehi Fadardi et al., 2010). Sheard (2009) revealed that superior mental toughness and hardiness are related to successful sport performance. In fact, mental hardiness is a characteristic of personality. Hardiness consists of a set of personality traits that acts as a source of resistance, a protective shield, in the face of stressful life events. Hardy people often find life events interesting, diverse, informative and challenging. They consider life events realistically and with a long-term vision, and are therefore more optimistic about life's events as a whole (Erciş, 2018). Perhaps it is this same optimism that carries hardy people through unpleasant incidents, leading them, for instance, to expect fewer illnesses. Table 1. Frequency distribution of the age of participants Table 2. Correlation between psychological hardiness and skill performance Table 3.
Analysis of variance of regression model of psychological hardiness on skill performance
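As an illustration of the reported analysis (a Pearson correlation followed by a simple regression and its R-squared), the following Python sketch uses scipy on synthetic stand-in data. The numbers only mimic the study's setup (n = 28, performance scored from 5 to 25); they are hypothetical, not the study's actual measurements.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical scores: hardiness questionnaire totals and coach-rated performance
hardiness = rng.normal(120, 15, 28)
performance = 5 + 0.05 * hardiness + rng.normal(0, 2.5, 28)

r, p = stats.pearsonr(hardiness, performance)
result = stats.linregress(hardiness, performance)

print(f"Pearson r = {r:.3f} (p = {p:.4f})")
print(f"R^2 = {result.rvalue**2:.3f}")  # share of performance variance explained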
2019-05-12T14:24:40.090Z
2019-01-03T00:00:00.000
{ "year": 2019, "sha1": "033159114c490064aab85d9577d9560b48cf505a", "oa_license": "CCBY", "oa_url": "https://redfame.com/journal/index.php/jets/article/download/3932/4117", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "033159114c490064aab85d9577d9560b48cf505a", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
201450246
pes2o/s2orc
v3-fos-license
Sustainability Initiatives in the Fashion Industry A heightened awareness of the fashion industry's environmental impact has emerged in recent years, stirred by mounting evidence of intensified global clothing consumption and driven by the increased accessibility and affordability of clothing. In the last 3 years, the release of several comprehensive reports detailing the extent of the fashion industry's environmental impact, as well as the founding of several fashion industry-targeted sustainability campaigns (e.g., the "2020 Commitment" of the Global Fashion Agenda), has not only helped draw a great deal of attention to the issues but has also triggered an evident wave of intention toward concrete, quantifiable action. With the abundance of information surrounding the subject of sustainability in the fashion industry, this chapter intends to provide an overview of (1) the most concerning environmental impacts caused by the fashion industry, (2) current leading collective sustainability campaigns mobilizing the fashion industry, (3) currently available benchmarks and tools for measuring the environmental impact of the textile life cycle, and (4) examples of how companies in the fashion industry are executing sustainability initiatives in their products or processes. Finally, the chapter will conclude with some of the current challenges and future opportunities in sustainability confronting the fashion industry. Introduction The taxing impact the fashion industry has had on the environment is by no means a new revelation, having accumulated a great deal of evidence over the years. However, unlike in the past, when "sustainability" seemed more like an ideal adopted by individual, niche grassroots organizations, it is now considered a core value globally across the fashion industry. The fashion industry's recent wave of intentional action toward sustainability is in part motivated by several comprehensive and revealing industry sustainability reports released in the last 3 years [1][2][3], but moreover it is a collective response to the recent fashion industry-specific sustainability campaigns such as the "2020 Commitment," spearheaded in the last 2 years by several sustainability-driven coalitions (e.g., the Global Fashion Agenda and the Waste and Resources Action Programme UK), which have rallied formal commitments from a significant portion of the fashion industry toward concrete, quantifiable action for sustainability by 2020. The heightened concern toward the fashion industry's environmental impact is also stirred by evidence of intensified global clothing consumption, which according to data from the World Bank [4] has doubled from around 50 billion units of clothing sales in 2000 to over 100 billion units in 2015 (see Figure 1). This dramatic increase in clothing consumption has been fueled by fast fashion, an increasingly bargain-driven consumer, increased accessibility via an expanding online shopping landscape, and more buying power from a growing middle class, especially in emerging economies such as China (projected to surpass the United States "as the largest fashion market in the world" in 2019, according to McKinsey FashionScope [5]). Unfortunately, the increased accessibility and affordability of clothing simultaneously propagated not only a culture of excessive consumption but also quicker disposal of clothing, as exemplified by an approximately 20% decrease in the average number of times a garment is worn before it is abandoned, as shown in Figure 1.
Given the abundance of information surrounding the subject of sustainability in the fashion industry from many sources, there is an opportunity for a collated overview of the subject. Therefore, the purpose of this article is to provide an overview of (1) the most concerning environmental impacts caused by the fashion industry, (2) current leading collective sustainability campaigns mobilizing the fashion industry, (3) currently available benchmarks and tools for measuring the environmental impact of the textile life cycle, and (4) examples of how companies in the fashion industry are executing sustainability initiatives in their products or processes. Finally, the article will conclude with some of the current challenges and future opportunities in sustainability confronting the fashion industry.

The environmental impact of the textile life cycle

In any given industry, each stage of the product life cycle poses an impact on the environment, by consuming environmental inputs (e.g., water for harvesting raw materials, fossil fuels to power manufacturing equipment, etc.) and releasing environmental outputs (e.g., carbon dioxide emissions from burning fossil fuels, landfill waste after the product is disposed of, etc.). For the fashion industry, the environmental inputs and outputs of the textile product life cycle are reflected in Figure 2. (It is worthwhile to note that the term "life cycle" used here is misleading, in that the above chain of processes does not form a "cycle" but is instead a linear sequence of events, with a definite beginning and end. A true cyclical life cycle would be indicative of recycling or reuse, feeding the end waste back into the system to be used again.)

As shown, the inputs and outputs of the fashion industry's "textile product life cycle" pose an impact on the environment, but it is the size of that impact which is staggering. This is partly due to the immense scale of the fashion industry, which has been evaluated to be a USD 1.3 trillion industry [6] and the world's third largest manufacturing industry, after automotive and technology [7]. Moreover, according to a report by the Ellen MacArthur Foundation, the greenhouse gas emissions produced by textile production exceed those of international aviation and maritime shipping combined, and if the industry continues down this path, it is projected that by 2050 it could account for one quarter of the world's carbon emissions [1]. To put this into perspective further, the annual carbon footprint of the fashion industry's product life cycle (3.3 billion tons of CO2 emissions) is almost equivalent to that of the 28 countries of the EU (3.5 billion tons) [7]. However, greenhouse gas emissions are not the only harmful environmental output from the fashion industry; they are just one of the numerous inputs and outputs which have strenuous environmental implications, as exemplified in Figure 2.
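The linear-versus-circular distinction drawn in the parenthetical note above can be made concrete with a small model. The sketch below represents the life cycle as a chain of stages and shows the single recycling edge that would turn the chain into a true cycle; the stage names are generic placeholders of our own and are not a reproduction of Figure 2.

```python
# A minimal sketch of the linear vs. circular life-cycle distinction made
# above. Stage names are generic placeholders, not a copy of Figure 2.

LINEAR_LIFE_CYCLE = [
    "fiber production",
    "textile manufacturing",
    "garment assembly",
    "distribution/retail",
    "use (wearing, laundering)",
    "disposal",  # terminal stage: the linear chain has a definite end
]

def make_circular(stages: list[str]) -> dict[str, str]:
    """Turn the linear chain into a cycle by routing the last stage back
    to the first, i.e., feeding end waste back in via recycling/reuse."""
    links = {a: b for a, b in zip(stages, stages[1:])}
    links[stages[-1]] = stages[0]  # the one edge that makes it a true cycle
    return links

print(make_circular(LINEAR_LIFE_CYCLE)["disposal"])  # -> "fiber production"
```

The point of the model is simply that "circularity" amounts to adding that final edge: everything else about the chain stays the same, but waste becomes an input rather than an endpoint.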
The following provides a summary, along with examples, highlighting some of the leading concerns (there are indeed many others; however, for the purpose of this condensed article, we will focus on the following):

• Heavy consumption of depleting natural resources, for example:
• Water consumption for cotton crops
• Coal or natural gas (nonrenewable) energy to power manufacturing facilities

• Polluting waste outputs (e.g., chemicals, pesticides, carbon emissions, etc.), for example:
• Fertilizer and pesticide runoff from cotton crops
• Dye and chemical waste from garment factories (e.g., from dyeing and washing processes)

• Microplastic pollution (e.g., from synthetic fiber shedding), for example:
• Shedding of polyester fibers (considered microplastics) in the laundry process: a domestic wash load can release around 700,000 fibers and, as they are unable to be completely filtered out by wastewater treatment plants, they end up infiltrating and accumulating in marine ecosystems [8]. This issue is exacerbated by the drastic increase in the fashion industry's annual consumption of polyester fibers, which has grown exponentially, from 8.3 million tons in 2000 to 21.3 million tons in 2016 [6].

This section provided a condensed overview of the extent of the fashion industry's impact on the environment and highlighted the most concerning forms of impact. It is worth noting, however, that the abundance of published data and literature on the environmental impact of the fashion industry is truly overwhelming and could easily extend beyond the scope of this section. The following section will present some of the current collective global sustainability campaigns which are striving to alleviate the environmental impact of the fashion industry in the future.

Collective global sustainability campaigns in the fashion industry

The intensified evidence of the fashion industry's impact on the environment in the last decade prompted the founding of several global sustainability campaigns within the last 3 years. These campaigns, spearheaded by sustainability-driven coalitions, are mobilizing companies across the fashion industry toward collectively adopting sustainable materials and practices throughout their design, development, and supply chains, and they have already garnered formal commitments from key players in the fashion industry representing a sizable portion of the market. Two predominant global campaigns, initiated in 2018, are summarized below:

• The "2020 Circular Fashion System Commitment," introduced by the Global Fashion Agenda
• Mission/action points: The Global Fashion Agenda is a leadership forum engaging the fashion industry toward sustainability [9]. Its "2020 Circular Fashion System Commitment" is a call on the fashion industry to commit to a "circular fashion system" by taking concrete action on one or more of its action points, such as implementing design strategies for cyclability.
• Signatories: Ninety-four companies signed on (representing 12.5% of the global fashion market), including ASOS, H&M, Nike, Inditex, Kering, and Target.

• The "Sustainable Clothing Action Plan (SCAP) 2020 Commitment," introduced by the Waste and Resources Action Programme (WRAP)
• Mission/action points: The SCAP (spearheaded by WRAP) is a collaborative framework and voluntary commitment for organizations to deliver industry-led targets of a 15% reduction in carbon, water, and waste in the clothing industry by 2020, through actions such as reinventing how clothes are designed and produced [10].
• Signatories: Eighty companies signed on (representing 58.5% of the UK's retail sales volume), including ASOS, Marks and Spencer, Ted Baker, and others.
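To give a sense of what such a commitment demands operationally, a fixed percentage reduction by a deadline can be translated into a required year-on-year reduction rate. The sketch below works out this arithmetic for the SCAP target; the 15% figure comes from the commitment above, while the 2012 baseline year is our assumption for illustration.

```python
# What a 15% reduction by 2020 implies as a year-on-year rate. The 15%
# target for carbon, water, and waste comes from the SCAP commitment
# cited above; the 2012 baseline year is our assumption for illustration.

target_reduction = 0.15
baseline_year, target_year = 2012, 2020
n_years = target_year - baseline_year

annual_rate = 1 - (1 - target_reduction) ** (1 / n_years)
print(f"Required reduction: ~{annual_rate:.1%} per year for {n_years} years")
# -> ~2.0% per year, sustained, across every signatory's footprint
```

A compounding cut of about 2% per year sounds modest, but sustaining it requires measurable baselines and consistent reporting, which is precisely where the benchmarks and tools discussed next come in.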
The action points of both these campaigns show an emphasis on cyclability, not just of materials but also of practices, and on reshaping the product life cycle toward circularity [10] (see Figure 3). The number of companies committed to these campaigns so far is a promising sign that sustainability is gradually becoming an integral factor in the fashion industry.

Aside from global sustainability campaigns such as these, another industry resource supporting companies toward sustainability is the range of benchmarks and tools developed to help the fashion industry gauge the environmental impact of particular materials or processes and thereby steer decisions accordingly. The following section will explore some of these tools and benchmarks.

Measuring environmental impact: benchmarks and tools for the fashion industry

For companies in the fashion industry to become more cognizant and proactive about minimizing the environmental impact of their product life cycles, they need to rely on definitive benchmarks and tools to gauge the environmental impact of their decisions regarding products or processes. However, measuring the environmental impact of such decisions can be very convoluted, as results tend to conflict depending on the angle from which they are viewed. Here are some examples of the conflicting nature of environmental impact measures:

On the one hand, for example:
• A polyester shirt has more than double the carbon footprint of a cotton shirt (5.5 kg of CO2 emissions vs. 2.1 kg) [11].

But on the other hand:
• The processing of cotton produces a water footprint 20 times larger than that of polyester (see Figure 4).
• One kilogram of cotton, equivalent to the weight of a shirt and a pair of jeans, can take as much as 10,000-20,000 liters of water to produce [10].
• For an organic cotton tote to make up for the environmental impact (water use, energy use, etc.) of a classic plastic bag, it would need to be used 20,000 times [12].

The following is an outline of three established benchmarks and tools designed to enable the fashion industry (and other industries) to measure the environmental impact of decisions regarding the materials used or the processes employed:

• Higg Index, developed by the Sustainable Apparel Coalition: It is described as "a suite of tools" that enables the measurement and scoring of a company's or product's "sustainability performance" at "every stage in their sustainability journey," aiming to provide a "holistic overview" that "empowers businesses to make meaningful improvements that protect the well-being of factory workers, local communities, and the environment" [13]. It encompasses the following tools:
• Product tools:
1. Higg Materials Sustainability Index (MSI): "the apparel industry's most trusted tool to accurately measure the environmental sustainability impacts of materials," scoring materials based on their environmental impact from fiber to fabric across five environmental impact parameters (global warming, water pollution, water scarcity, resource depletion, and chemicals) (see Figure 5 for a sample screenshot of the Higg MSI interface). The environmental impacts measured include greenhouse gas (GHG) emissions, energy use, water use, water pollution, deforestation, hazardous chemicals, and animal welfare.
The social and labor impacts measured include child labor, discrimination, forced labor, sexual harassment and gender-based violence in the workplace, non-compliance with minimum wage laws, bribery and corruption, working time, occupational health and safety, and responsible sourcing.

• MADE-BY Environmental Benchmark for Fibers, developed by MADE-BY in cooperation with Brown and Wilmanns Environmental, LLC: It ranks 28 of the most commonly used fibers in the garment industry into five classes (Class A-E), based on the following measures: greenhouse gas emissions, human toxicity, eco-toxicity, energy, water, and land [15] (see Figure 6).

• Corporate Fiber and Materials Benchmark (CFMB) (formerly the Preferred Fiber and Materials Benchmark (PFMB)), launched by the Textile Exchange: Launched in 2015, it is a leading industry-led, voluntary self-assessment tool which enables companies to systematically measure, manage, and integrate a preferred fiber and materials strategy into four key areas of mainstream business operations: corporate strategy, supply chain, consumption, and consumer engagement [16] (see Figure 7 for a flowchart of this framework). It also provides feedback on progress and performance in comparison to peers and the overall industry. As of 2018, 111 companies had taken part in the program (an increase of 106% since 2015).

As can be seen from the three examples above, there is a wide selection of benchmarks and tools for measuring environmental impact available to the fashion industry; however, there are some limitations to consider. For one, the wide selection can itself be problematic, as each of the different initiatives above accounts for slightly different factors or weighs them slightly differently; therefore the result obtained from one tool might not be consistent with that obtained from another. For example, based on the Higg Materials Sustainability Index, natural fibers like silk, cotton, and wool are assigned higher environmental impact scores (i.e., more damaging to the environment) of 128, 98, and 82, respectively, while fossil-fuel-derived fibers like nylon, acrylic, and polyester have lower impact scores of 60, 52, and 44 [7]. This is because the Higg Index puts greater emphasis on fiber production, which is indeed more taxing on the environment for natural fibers such as silk, cotton, and wool, as their procurement imposes a greater strain on natural resources (such as water and land) and on animal welfare. Yet, in contrast, according to the MADE-BY Environmental Benchmark (Figure 6), fossil-fuel-based virgin nylon fibers and natural wool fibers are both ranked under the same Class E (the "least sustainable" category). Hence the availability of multiple benchmarks and tools could prove more encumbering than helpful when it comes to definitively measuring environmental impact.
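To see how such reversals arise mechanically, consider that most benchmarks reduce several impact categories to one score via weights. The sketch below uses invented per-category sub-scores, echoing only the qualitative pattern discussed above (natural fibers strain water and land in production; synthetics carry fossil-carbon and microplastic burdens), not any tool's actual data, to show how two plausible weighting schemes put cotton and polyester in opposite orders.

```python
# How weighting choices can reverse a "which fiber is greener" ranking.
# Per-category sub-scores below are invented for illustration only; they
# mimic the qualitative pattern discussed above, not any tool's real data.

impacts = {
    #            carbon  water  microplastics
    "cotton":    (2.0,   9.0,   0.0),
    "polyester": (6.0,   1.0,   7.0),
}

def score(fiber: str, weights: tuple[float, float, float]) -> float:
    """Weighted impact score: higher means more environmental damage."""
    return sum(v * w for v, w in zip(impacts[fiber], weights))

production_focused = (0.3, 0.6, 0.1)  # emphasizes water use in production
use_phase_focused  = (0.4, 0.2, 0.4)  # emphasizes carbon and fiber shedding

for weights in (production_focused, use_phase_focused):
    ranking = sorted(impacts, key=lambda f: score(f, weights))
    print(weights, "-> least impactful first:", ranking)
# The two weightings place cotton and polyester in opposite orders.
```

Neither weighting is "correct"; the point is that the ranking is partly an artifact of the weighting, which is exactly why results from different tools can disagree.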
Another limitation of these benchmarks and tools is that they insufficiently weigh, or even overlook, the impact of the in-use phase of the textile product life cycle. The in-use phase here refers to the period when the textile product is being used for what it was made for. For a garment, that means the period from when it is purchased by a customer until it is no longer used or is disposed of, which mostly involves its wearing and laundering. The research of Laitala et al. reveals that energy and water consumption during the laundering process vary greatly depending on the fiber content of the garments [17]. First, the research presents data indicating (see Figure 8) that wool- and silk-based garments are 3-6 times more likely to be dry-cleaned than cotton- or synthetic-based garments, and furthermore that dry cleaning uses 3-6 times more electricity (depending on the type of dry-cleaning process) than wet washing methods (the predominant laundering method for cotton- or synthetic-based garments). However, their research also shows that, on average, the water temperature of the wash setting for cotton-based garments is about 17°C higher than that for wool-based garments. With polyester and other nonbiodegradable polymer fibers (e.g., acrylic and nylon), there is the developing concern regarding the shedding of fibers (microplastics) during the washing process; unable to be completely filtered out by standard wastewater treatment plants, these fibers end up infiltrating and accumulating in marine ecosystems.
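To illustrate how these laundering differences compound over a garment's life, the sketch below compares electricity use per care cycle. The 3-6x dry-cleaning multiplier comes from the research cited above; the ~0.7 kWh baseline for a machine-wash cycle and the 30-cycle garment life are our assumptions for illustration.

```python
# Rough per-cycle electricity comparison for the laundering findings above.
# The 3-6x dry-cleaning multiplier comes from the research cited; the
# ~0.7 kWh per machine wash and 30-cycle life are assumptions of ours.

wet_wash_kwh = 0.7             # assumed electricity per machine-wash cycle
dry_clean_multiplier = (3, 6)  # range reported relative to wet washing

low, high = (m * wet_wash_kwh for m in dry_clean_multiplier)
print(f"Machine wash: ~{wet_wash_kwh} kWh per cycle")
print(f"Dry cleaning: ~{low:.1f}-{high:.1f} kWh per cycle")

# Care choices compound over a garment's life: 30 dry-cleanings of a wool
# garment vs. 30 machine washes of a cotton one differ by tens of kWh.
washes = 30
print(f"Lifetime gap over {washes} cycles: "
      f"~{(low - wet_wash_kwh) * washes:.0f}-"
      f"{(high - wet_wash_kwh) * washes:.0f} kWh")
```

Even with generous uncertainty in the baseline, the in-use phase clearly contributes enough energy over a garment's lifetime that omitting it can distort fiber-to-fiber comparisons.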
Therefore "low environment impact" is a more accurate representation of what is possible to strive for in sustainable materials). An industry example of a front-end approach to sustainability is the adoption of regenerated cellulosic fibers, such as Lyocell and Seacell, by various fashion companies particularly in lingerie and activewear [19]. With cotton, albeit a natural cellulosic fiber, bearing a hefty water footprint in the harvesting process, and with petrochemical-based synthetic fibers such as polyester and nylon bearing a hefty carbon footprint in the manufacturing process [20], regenerated cellulosic fibers can prove advantageous. They have the benefit of being biodegradable and derived from natural renewable resources (i.e., Lyocell is derived from wood pulp and Seacell is derived from seaweed) via a closed-loop manufacturing process, thereby consuming far less water and energy than traditional cotton, polyester, and nylon. Both Lyocell and Seacell also naturally carry antibacterial and fast-drying properties, which is why they are ideal for lingerie and activewear product. A limitation of a front-end approach in tackling environmental impact is that it is still feeding more product in the fashion pipeline which will eventually end up at the end of the textile life cycle as waste by-product (even if it is biodegradable byproduct) which needs to be managed accordingly. Therefore, in the following section, we will look at an approach which tackles the by-product end of the textile product life cycle. Back-end approach Within the context of this article, this is referring to sustainability initiatives which aim to minimize the environmental impact of the product and processes at the end of the textile product life cycle, e.g., at disposal. A prime example of this is exemplified in the now widespread initiatives of post-consumer textile recycling. The reason recycled textiles have become so prevalent as a strategy to minimize environmental impact is not only because of the exponential supply of textile waste driven by intensified clothing consumption but more strategically because research has shown that the fiber production stage (extraction and processing) of the textile product life cycle has the greatest environmental impact in terms of water and carbon footprint, as shown in Figures 9-10 [14]. Therefore, by recycling postconsumer textile waste back into the textile supply chain enables bypassing the heavy environmental toll of the fiber production stage. There has been a great deal of research invested into textile recycling, from both the industry and academia. One notable advancement in textile recycling is exemplified by Garment-to-Garment (G2G) Recycle System, a closed-loop garment recycling retail concept supported by technology which enables the recycling of blended post-consumer garments, developed by HKRITA, in partnership with H&M and Novetex [21]. The Garment-to-Garment (G2G) Recycle System brings garment recycling to the retail level, therefore paving the way for garment recycling to be more accessible to the everyday consumer. There are also several notable recycling initiatives which, instead of relying solely on post-consumer textile products, are derived from various kinds of postconsumer plastic waste. REPREVE is one example of this. Produced by the company Unifi, REPREVE is a brand of polyester fibers made from recycled post-consumer plastic waste (e.g., plastic bottles) [22]. The ability to convert various forms of Figure 10. 
Over the past decade, there have been many encouraging advancements which have expanded back-end sustainability initiatives such as textile recycling. However, there remain limitations in current textile recycling technologies. For example, due to the comprehensive shredding needed to break down post-consumer textile waste, the tensile strength of recycled cotton yarns is lower than that of virgin cotton [23]. Furthermore, as recycled yarns are composed of a mixture of fibers which may have undergone different dyeing and finishing processes in their previous life, even after cleaning and bleaching processes they may not achieve the same hand-feel and color vibrancy possible with virgin fibers, limiting their design versatility. These are some of the limitations which could be preventing a greater adoption of textile recycling in the industry.

Conclusion

This article has attempted to provide a current and overarching view of the most concerning environmental impacts of the fashion industry today; the leading global sustainability campaigns, benchmarks, and tools established to help empower the fashion industry toward concrete action; and, last but not least, examples of sustainability initiatives being implemented in the industry. The fashion industry's large-scale movement toward sustainability is evident; however, there remain questions and challenges to be addressed. One is how successful the "2020 Commitment" goals will be, with 2020 just around the corner, considering how potentially disruptive any kind of change is in an industry built on long-established processes and practices and adhering to an inflexible, tight calendar. Furthermore, as discussed in this article, the array of benchmarks and tools available for measuring environmental impact can result in a convoluted process and conflicting, inconclusive information. Such challenges may deter a company from successfully achieving concrete changes toward sustainability.

Even if companies are able to navigate the intricacies of evaluating the environmental impact of a textile product or process, it is important to remember that the textile product life cycle is never impact-free (at least not in the foreseeable future), as it relies on the environment for its various inputs and outputs. With this reality in mind, companies may find that small but carefully and holistically considered steps in the right direction can be much more effective than larger, uninformed leaps when it comes to sustainability.