246677389
pes2o/s2orc
v3-fos-license
Self-reported COVID-19 infection and implications for mental health and food insecurity among American college students

While the COVID-19 pandemic affected mental health and increased food insecurity across the general population, less is known about the virus's impact on college students. A fall 2020 survey of more than 100,000 students at 202 colleges and universities in 42 states reveals sociodemographic variation in self-reported infections, as well as associations between self-reported infection and food insecurity and mental health. We find that 7% of students self-reported a COVID-19 infection, with sizable differences by race/ethnicity, socioeconomic status, parenting status, and student athlete status. Students who self-reported COVID-19 infections were more likely to experience food insecurity, anxiety, and depression. Implications for higher education institutions, policy makers, and students are discussed.

S1.1 Survey Methods

Data in this study come from an annual survey developed by researchers at the Hope Center for College, Community, and Justice at Temple University; the 2020 #RealCollege Survey was fielded electronically at 202 colleges and universities across the U.S. (1). The Hope Center provided participating colleges with email invitation language and hosted the survey. To reduce sampling bias, language in the email invitation was ambiguous as to the scope of the survey and described the survey as being about "college life". Upon opening the survey, students were presented with a consent form in compliance with Institutional Review Board standards. To take the survey, the student had to click "continue" as a record of consent and complete at least the first page of the survey. Participating colleges were asked to use only the provided invitation language to ensure consistency across colleges. To boost survey response rates, some colleges also promoted survey participation through text messages and social media; in these cases, they used language and materials provided by the Hope Center.

In 2020, 202 postsecondary colleges and universities fielded the survey early in the fall term, so as to include students who might no longer be enrolled later in the year. Colleges were asked to distribute the survey to all actively enrolled students in the fall of 2020. Response rates are computed as the number of survey participants divided by the number of students invited to take the survey. Participating colleges sent survey invitations to an estimated 1.8 million students and 195,629 students participated, yielding an estimated response rate of 10.6%. Participating two-year colleges sent survey invitations to an estimated 1.0 million students, and 112,204 students participated, yielding an estimated response rate of 10.8%. Participating four-year colleges and universities sent survey invitations to an estimated 800,000 students, and 83,425 students participated, yielding an estimated response rate of 10.5%. Respondents attending two-year and four-year colleges had similar mean completion rates of 82 percent.

S1.2 Survey Measures

Self-reported COVID-19 infection was assessed through a single survey question, which asked students whether they had been "sick with COVID" at any point during or after January 2020. Responses to the item were limited to "yes" or "no". Students who answered "yes" were coded 1 for self-reported COVID-19 infection, and students who answered "no" were coded 0.
To assess food insecurity in the fall of 2020, we used questions from the 18-item Household Food Security Survey Module (shown below) from the U.S. Department of Agriculture (USDA). The 18-item survey includes a subset of questions for students with children.

Food Security Module

Adult Stage 1
1. "In the last 30 days, I worried whether my food would run out before I got money to buy more." (Often true, Sometimes true, Never true)
2. "In the last 30 days, the food that I bought just didn't last, and I didn't have money to get more." (Often true, Sometimes true, Never true)
3. "In the last 30 days, I couldn't afford to eat balanced meals." (Often true, Sometimes true, Never true)
If the respondent answers "often true" or "sometimes true" to any of the three questions in Adult Stage 1, then proceed to Adult Stage 2.

Adult Stage 2
4. "In the last 30 days, did you ever cut the size of your meals or skip meals because there wasn't enough money for food?" (Yes/No)
5. [If yes to question 4, ask] "In the last 30 days, how many days did this happen?" (Once, Twice, Three times, Four times, Five times, More than five times)
6. "In the last 30 days, did you ever eat less than you felt you should because there wasn't enough money for food?" (Yes/No)
7. "In the last 30 days, were you ever hungry but didn't eat because there wasn't enough money for food?" (Yes/No)
8. "In the last 30 days, did you lose weight because there wasn't enough money for food?" (Yes/No)
If the respondent answers "yes" to any of the questions in Adult Stage 2, then proceed to Adult Stage 3.

Adult Stage 3
9. "In the last 30 days, did you ever not eat for a whole day because there wasn't enough money for food?" (Yes/No)
10. [If yes to question 9, ask] "In the last 30 days, how many days did this happen?" (Once, Twice, Three times, Four times, Five times, More than five times)
If the respondent has indicated that children under 18 are present in the household, then proceed to Child Stage 1.

Child Stage 1
11. "In the last 30 days, I relied on only a few kinds of low-cost food to feed my children because I was running out of money to buy food." (Often true, Sometimes true, Never true)
12. "In the last 30 days, I couldn't feed my children a balanced meal, because I couldn't afford that." (Often true, Sometimes true, Never true)
13. "In the last 30 days, my child was not eating enough because I just couldn't afford enough food." (Often true, Sometimes true, Never true)
If the respondent answers "often true" or "sometimes true" to any of the three questions in Child Stage 1, then proceed to Child Stage 2.

Child Stage 2
14. "In the last 30 days, did you ever cut the size of your children's meals because there wasn't enough money for food?" (Yes/No)
15. "In the last 30 days, did your children ever skip meals because there wasn't enough money for food?" (Yes/No)
16. [If yes to question 15, ask] "In the last 30 days, how often did this happen?" (1, 2, 3, 4, 5, 6, 7, 8 or more times)
17. "In the last 30 days, were your children ever hungry but you just couldn't afford more food?" (Yes/No)
18. "In the last 30 days, did any of your children ever not eat for a whole day because there wasn't enough money for food?" (Yes/No)

To calculate a raw score for food security, we counted the number of questions to which a student answered affirmatively. "Often true" and "sometimes true" were counted as affirmative answers, and answers of "Three times" or more on the frequency follow-ups were counted as a "yes."
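As a concrete illustration of this counting rule, a minimal sketch follows; this is not the authors' code, and the response strings and data layout are hypothetical.

```python
# Minimal sketch of the raw-score rule described above (hypothetical inputs).
# An item counts toward the raw score if the answer is "often true"/"sometimes true",
# "yes", or a frequency of three times or more on the follow-up items.
AFFIRMATIVE = {
    "often true", "sometimes true", "yes",
    "three times", "four times", "five times", "more than five times",
    "3", "4", "5", "6", "7", "8 or more times",
}

def raw_score(responses):
    """responses: dict mapping item number -> answer string (unanswered items omitted)."""
    return sum(1 for answer in responses.values()
               if str(answer).strip().lower() in AFFIRMATIVE)

example = {1: "Sometimes true", 2: "Never true", 3: "Often true", 4: "Yes", 5: "Twice"}
print(raw_score(example))  # -> 3 ("Twice" on item 5 does not count)
```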
Students (with or without children) who had raw scores of zero were considered to have "high" food security, and students (with or without children) with raw scores of one or two were considered to have "marginal" food security. Students without children with raw scores of three to five were considered to have "low" food security, and students without children with raw scores above five were considered to have "very low" food security. Students with children with raw scores of three to seven were considered to have "low" food security, and students with children with raw scores of eight or above were considered to have "very low" food security. Respondents are considered "food insecure" if they have low or very low levels of food security.

Students' anxiety levels were assessed using a validated seven-item instrument, the Generalized Anxiety Disorder scale (GAD-7). In the #RealCollege 2020 survey instrument, the items were separated into two sections (items 1-2 and items 3-7); the student needed to cross a certain score threshold on the first two items to progress to the remaining items. The assessment asked students on how many days in the last two weeks they had been bothered by each of the seven GAD-7 symptoms. For each item, students who reported being bothered zero days were coded "0", 1 to 6 days were coded "1", 7 to 12 days were coded "2", and 13 to 14 days were coded "3". Item raw scores were then summed. Composite scores of 0 to 4 are categorized as "none to minimal" anxiety, scores of 5 to 9 as "mild" anxiety, scores of 10 to 14 as "moderate" anxiety, and scores greater than 14 as "severe" anxiety. For the purposes of this study, students who were categorized as experiencing "moderate" or "severe" anxiety are coded as experiencing anxiety and assigned a dummy value of 1 for anxiety.

Students' depression levels were assessed using a validated nine-item instrument, the Patient Health Questionnaire (PHQ-9). In the #RealCollege 2020 survey instrument, the items were separated into two sections (items 1-2 and items 3-9); the student needed to reach a certain response threshold on the first two items to progress to the remaining items. The assessment asked students about the number of days in the last two weeks that they were bothered by any of the following items:
• Little interest or pleasure in doing things;
• Feeling down, depressed, or hopeless;
• Trouble falling asleep, staying asleep, or sleeping too much;
• Feeling tired or having little energy;
• Poor appetite or overeating;
• Feeling bad about yourself - or that you're a failure or have let yourself or your family down;
• Trouble concentrating on things, such as reading the newspaper or watching television;
• Moving or speaking so slowly that other people could have noticed, or the opposite - being so fidgety or restless that you have been moving around a lot more than usual;
• Thoughts that you would be better off dead or of hurting yourself in some way.
For each item, students who reported being bothered zero days were coded "0", 1 to 6 days were coded "1", 7 to 12 days were coded "2", and 13 to 14 days were coded "3". Item raw scores were then summed. Composite scores of 0 to 4 are categorized as "none to minimal" depression, scores of 5 to 9 as "mild" depression, scores of 10 to 14 as "moderate" depression, and scores of 15 to 19 as "moderately severe" depression.
Composite scores of 20 or greater are categorized as "severe" depression. For the purposes of this study, students who were categorized as experiencing "moderate," "moderately severe," or "severe" depression are coded as experiencing depression and assigned a dummy value of 1 for depression.

To capture the self-reported socioeconomic status (SES) of the household, students were asked whether or not they had received a Pell Grant. Pell Grant eligibility depends on income as reported on students' Free Application for Federal Student Aid (FAFSA), and all students who wish to receive federal grants and loans must complete a FAFSA to determine their eligibility. Students who self-reported receiving a Pell Grant were classified as "low SES". Students' age is self-reported from a single survey item asking for the student's year of birth; age is estimated based on the year the survey was administered (2020). Student athlete status is assessed by a single survey item asking whether the student is a varsity athlete "on a team sponsored by your college's athletic department". Student parenting status is determined by a single item asking the student if they are "a parent, primary caregiver, or guardian (legal or informal) of any children". Employment status was determined by a single item asking if the student had one job, more than one job, or no job before the COVID-19 pandemic; students who self-reported having had one job or more than one job were coded as employed. Learning modality was assessed through a single item asking how students were taking classes in fall 2020 (in person or online only).

All student subgroups are mutually exclusive and derived from survey data. Where student responses had overlapping answers (such as where a student identified with more than one race/ethnicity), an additional "multi" category was created to represent those answers. When student responses were missing background information, dummy variables were created and included in analyses where appropriate. Institutional characteristics, including college sector (two-year, four-year), region (West, South, Midwest, Northeast), urbanicity (city, suburb, town, rural), and state, are obtained from the National Center for Education Statistics Integrated Postsecondary Education Data System.

S1.3 Analytic Sample

Among the full set of participants in the #RealCollege Survey (N=195,629), analyses conducted for this report are based on the subset of respondents who had complete information for questions pertaining to whether the student contracted COVID-19 (N=122,532), experienced anxiety (N=101,080), experienced depression (N=100,894), experienced food insecurity (N=100,803), and had trouble concentrating (N=100,488). Of our analytic sample (N=100,488), just over half (55%) of sample students were attending two-year colleges, while the rest attended four-year colleges and universities. Racial and ethnic composition was 45% White, 17% Latinx, 10% Black or African American, and 15% multi-ethnic. Two-thirds identified as female, half came from lower socioeconomic households, 20% had children, and 18% identified as LGBTQ. The sample was fairly evenly split by age: 29% were 18-20, 32% were ages 21-25, and 35% were older than 25. Almost three-quarters were employed, two-thirds lived in a city, 44% were attending at least one in-person class, and just under 2% were student-athletes.
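For concreteness, the anxiety and depression indicators defined in S1.2 reduce to simple threshold rules on the summed item codes; a minimal sketch of that coding is given below. This is not the authors' code, and the inputs are hypothetical.

```python
# Minimal sketch of the GAD-7 / PHQ-9 coding described in S1.2 (hypothetical inputs).

def item_code(days_bothered):
    """Map days bothered in the last two weeks to the 0-3 coding described above."""
    if days_bothered == 0:
        return 0
    if days_bothered <= 6:
        return 1
    if days_bothered <= 12:
        return 2
    return 3  # 13-14 days

def composite(days_per_item):
    return sum(item_code(d) for d in days_per_item)

gad7 = composite([0, 3, 8, 13, 2, 0, 7])        # seven GAD-7 items
phq9 = composite([1, 5, 14, 9, 0, 0, 2, 4, 0])  # nine PHQ-9 items

anxiety = int(gad7 >= 10)      # "moderate" or "severe" anxiety -> 1
depression = int(phq9 >= 10)   # "moderate" or worse depression -> 1
```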
We conducted an additional analysis comparing all survey respondents to the study sample and found that students included in the study sample were more likely to complete later stages of the survey, as evidenced by the lower rate of missing answers in many demographic categories, which are asked at the end of the survey. This shift in item response mix significantly affects the comparability of the overall survey respondents with the study sample, as reflected in the corresponding odds ratios. Nonetheless, we believe this does not undermine the generalizability of the study, although these results indicate that our estimate is likely a lower bound on the true rate of COVID-19 infection.

S1.4 Comparison of COVID-19 contraction rates

A series of two-tailed chi-square goodness-of-fit tests with multiple-comparison corrections was conducted to determine whether significant differences in the prevalence of self-reported COVID-19 infection existed across various student subgroups in comparison to specific reference groups.

S1.5 Estimating the odds of contracting COVID-19

To estimate differences in self-reported COVID-19 infection by student and institution characteristics, we implemented a series of multivariate logistic regression models. The estimates for the likelihood of contracting COVID-19 were obtained from both unconditional and fully conditional models. Fully conditional models include controls for race and ethnicity, gender, SES, parenting student status, student age, student athlete status, employment status, learning modality (in-person or online learning), LGBTQ status, college sector, region in which the institution is located, urbanicity, and state. Unadjusted and adjusted models incorporate clustered standard errors at the institution level.

S1.6 Estimating odds of experiencing anxiety, depression, or food insecurity

To estimate differences in experiences of anxiety, depression, or food insecurity by whether the student contracted COVID-19, we implemented a series of multivariate logistic regression models. The estimates for the odds of experiencing anxiety, depression, or food insecurity were obtained from both unconditional and fully conditional models. Fully adjusted models include controls for race and ethnicity, gender, SES, parenting student status, student age, student athlete status, employment status, learning modality (in-person or online learning), LGBTQ status, college sector, region in which the institution is located, urbanicity, and state. Unconditional and fully conditional models incorporate clustered standard errors at the institution level. To check for potential interactions between COVID-19 contraction and certain student characteristics, we ran adjusted models with interaction effects for COVID-19 contraction and factors such as race and ethnicity, SES, learning modality, employment status, gender, and parenting status. No significant interactions were found that alter the interpretation of the main effects.
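A minimal sketch of one such fully conditional model with institution-clustered standard errors is shown below. This is not the authors' code; the data frame and its column names are hypothetical.

```python
# Sketch of a fully conditional logistic regression with institution-clustered
# standard errors, in the spirit of S1.5/S1.6 (hypothetical data and column names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# df = pd.read_csv("realcollege_2020.csv")   # hypothetical analytic sample

formula = (
    "covid ~ C(race_ethnicity) + C(gender) + low_ses + parenting + C(age_group)"
    " + athlete + employed + in_person + lgbtq + C(sector) + C(region)"
    " + C(urbanicity) + C(state)"
)
fit = smf.logit(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["institution_id"]}
)
odds_ratios = np.exp(fit.params)   # adjusted odds ratios
print(odds_ratios.round(2))
```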
2022-02-10T06:17:07.906Z
2022-02-08T00:00:00.000
{ "year": 2022, "sha1": "493abc29610356de1d71932742ce289b36e00414", "oa_license": "CCBY", "oa_url": "https://www.pnas.org/content/pnas/119/7/e2111787119.full.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "d16315b7127af91faea386821def1c26f56fc9e8", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Medicine" ] }
132770609
pes2o/s2orc
v3-fos-license
Modulation of terahertz radiation from graphene surface plasmon polaritons via surface acoustic wave

SICHEN JIN,1 XINKE WANG,1 PENG HAN,1,3 WENFENG SUN,1 SHENGFEI FENG,1 JIASHENG YE,1 CHAO ZHANG,2 AND YAN ZHANG1,4
1 Department of Physics, Beijing Key Laboratory for Metamaterials and Devices, Key Laboratory of Terahertz Optoelectronics, Ministry of Education, Beijing Advanced Innovation Center for Imaging Theory and Technology, Capital Normal University, Beijing 100048, China
2 School of Physics and Institute for Superconducting and Electronic Materials, University of Wollongong, New South Wales 2522, Australia
hanpeng0523@163.com
yzhang@mail.cnu.edu.cn

Publication Details: Jin, S., Wang, X., Han, P., Sun, W., Feng, S., Ye, J., Zhang, C. & Zhang, Y. (2019). Modulation of terahertz radiation from graphene surface plasmon polaritons via surface acoustic wave. Optics Express, 27 (8), 11137-11151. Available at Research Online: https://ro.uow.edu.au/eispapers1/2847

Abstract: We present a theoretical study of terahertz (THz) radiation induced by surface plasmon polaritons (SPPs) on a graphene layer under modulation by a surface acoustic wave (SAW). In our gedanken experiment, SPPs are excited by an electron beam moving on a graphene layer situated on a piezoelectric MoS2 flake. Under modulation by the SAW field, charge carriers are periodically distributed over the MoS2 flake, and this causes a periodically distributed permittivity. The periodic permittivity structure of the MoS2 flake folds the SPP dispersion curve back into the center of the first Brillouin zone, in a manner analogous to a crystal, leading to THz radiation emission with conservation of the wavevectors between the SPPs and the electromagnetic waves. Both the frequency and the intensity of the THz radiation are tuned by adjusting the chemical potential of the graphene layer, the MoS2 flake doping density, and the wavelength and period of the external SAW field. A maximum energy conversion efficiency as high as ninety percent was obtained from our model calculations. These results indicate an opportunity to develop highly tunable and integratable THz sources based on graphene devices.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Introduction

Terahertz (THz) radiation, which describes electromagnetic waves with frequencies in the 0.1-30 × 10^12 Hz range, is one of the most important types of radiation for light sources in the fields of sensing and imaging because of promising properties that include low photon energy, broad spectral information, and high penetration through nonpolar materials [1]. At present, THz technology is widely used in fields including semiconductor science [2], noninvasive flaw detection [3], substance identification [4], and security inspection [5]. A THz radiation source with broad bandwidth, high intensity, and frequency tunability is therefore highly desirable. Several approaches, including photoconductive antennas [6], optical rectification [7], air plasmons [8], quantum cascade lasers [9], and free-electron beam excitation, have been used to produce THz wave emission. Among these approaches, free-electron THz radiation sources are of particular interest because of their high radiation powers and continuously tunable radiation frequencies [10,11]. In contrast to traditional free-electron THz sources, in which a beam of electrons is accelerated to almost the speed of light c using an electron accelerator with large associated facility requirements, excitation of THz radiation by a relatively low-energy electron beam moving on top of graphene layers was recently proposed as a new THz source [11-16]. In this approach, surface plasmon polaritons (SPPs) with resonance frequencies in the 1-30 THz range are excited by a beam of moving electrons with speeds of less than 0.1c on top of graphene layers; the energy of these SPPs can be transformed into electromagnetic waves when the momentum mismatch between them is compensated. Surface acoustic wave (SAW) fields have also been exploited in a range of optical and electronic systems [19-28]. In these studies, the SAW fields were used to generate "dynamic" gratings on metal surfaces [29-31] or graphene layers [32-34] to interact with surface plasmons or light. In this work, we present a theoretical study of SAW-modulated THz radiation from SPP resonance in a graphene layer that has been excited using a beam of moving electrons. The system is illustrated schematically in Fig. 1(a).
In our gedanken experiment, the graphene layer is placed on an n-doped molybdenum disulfide (MoS2) flake with an odd number of layers, which has strong piezoelectric properties and forms a heterostructure with the graphene layer. The graphene layer and the MoS2 flake are laid on a quartz substrate with a dielectric constant of 4.2ε0 (where ε0 is the permittivity of vacuum). Application of an external SAW field to the MoS2 flake causes the charge carriers of this piezoelectric semiconductor to be periodically separated in space and results in the material having the dielectric response of free electrons with the same period. By summing the dielectric responses of the ions and the SAW-modulated free electrons [35], a periodic permittivity structure is realized dynamically on the MoS2 flake. In our system, this MoS2 flake with periodic permittivity acts as a periodic dielectric microstructure that folds the excited graphene SPP dispersion into the center of the Brillouin zone (BZ), and this leads to momentum matching. To give an accurate description of the SAW-modulated THz radiation, we calculate the charge carrier distributions of the MoS2 flake under the SAW field by self-consistently solving a drift-diffusion model coupled with a time-dependent continuity equation and the Poisson equation. The periodic permittivity is then obtained using the Drude model with the calculated charge distributions. The SPP dispersion curves and the power intensity of the THz radiation are calculated thereafter by solving the Maxwell equations with the boundary conditions at the interfaces between regions I, II, and III, as illustrated in Fig. 1(b). The crossing points of the SPP dispersion curve with the electron beam are folded into the cone of the light line around the center of the BZ under the applied SAW field. This results in conservation of the momentum of the SPPs on graphene and the electromagnetic wave in vacuum, and this leads to THz wave emission. We also show that both the frequency and the intensity of the THz radiation can be tuned by varying the chemical potential of the graphene layer, the doping density of the MoS2 flake, and the period and wavelength of the external SAW field. Additionally, a maximum conversion efficiency of as much as 0.9 can be obtained for the energy conversion from the SPP resonance to THz radiation in free space.

Periodically distributed charge and dielectric response under the SAW field

The spatiotemporal distributions of the electrons n(z, t) and holes p(z, t) on the MoS2 flake under the applied SAW field can be described using a 1D drift-diffusion model coupled with a time-dependent continuity equation [36,37] (Eqs. (1)-(4)). In these equations, kB is the Boltzmann constant, T is the temperature, q is the electron charge, and μn and μp denote the electron and hole mobilities, respectively; the model also includes the carrier recombination rate. The built-in field can be calculated by solving the Poisson equation with the dielectric permittivity ε and the donor impurity density ND, while the piezoelectric field caused by the SAW field is a periodic travelling field characterized by the SAW wavelength λSAW and period TSAW. The intensity of the piezoelectric field is described by the parameter ASAW.
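Since the display equations are referred to only by number here, a generic textbook form of such a coupled system is sketched below for orientation; the paper's Eqs. (1)-(4) may differ in detail (for example in the recombination model and in the exact form assumed for the SAW drive, which is written here as a simple travelling sine wave).

\begin{align}
  J_n &= q\,\mu_n\, n\, E + q D_n\,\frac{\partial n}{\partial z},
  \qquad
  J_p = q\,\mu_p\, p\, E - q D_p\,\frac{\partial p}{\partial z},
  \qquad
  D_{n,p} = \frac{k_B T}{q}\,\mu_{n,p},\\
  \frac{\partial n}{\partial t} &= \frac{1}{q}\frac{\partial J_n}{\partial z} - R,
  \qquad
  \frac{\partial p}{\partial t} = -\frac{1}{q}\frac{\partial J_p}{\partial z} - R,
  \qquad
  \frac{\partial E_{\rm bi}}{\partial z} = \frac{q}{\varepsilon}\,\bigl(p - n + N_D\bigr),\\
  E_{\rm SAW}(z,t) &= A_{\rm SAW}\,\sin\!\Bigl[2\pi\Bigl(\tfrac{z}{\lambda_{\rm SAW}} - \tfrac{t}{T_{\rm SAW}}\Bigr)\Bigr],
  \qquad
  E = E_{\rm bi} + E_{\rm SAW}.
\end{align}

Here E is the total field entering the drift terms and R is the recombination rate; in the paper these equations are solved self-consistently with the parameters of Table 1.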
When the spatiotemporal charge distributions modulated by the external SAW field are calculated self-consistently by solving the coupled Eqs. (1)-(4) with the parameters given in Table 1, the dielectric response of the free charges εr(z, t) can be calculated approximately using the Drude model [38], in terms of the electron mass m and the SAW field frequency ωSAW. Because of the donor doping of the MoS2 flake, we only consider the dielectric response of the free electrons in the following. In principle, the mobile charge carriers in the graphene layer lead to an additional dielectric screening of the MoS2 flake. However, because of the much faster transport speed of the electron beam compared to the SAW field, the effect of this additional screening can be viewed as a homogeneous reduction of the relative permittivity in the MoS2 flake that does not break the periodic dielectric structure. Screening induced by the electron beam in the graphene layer is therefore not taken into account in the periodic dielectric structures [39].

SPP dispersion and THz radiation with periodic dielectric structure

When an electron beam moves on top of the graphene layer, the electromagnetic fields in the vacuum region, the periodic permittivity region, and the substrate (regions I, II, and III, respectively, as labeled in Fig. 1(b)) are obtained by solving Eqs. (6) and (7) with boundary conditions at the interfaces, Eqs. (8) and (9). The electromagnetic field induced by the moving electron beam is written in the form given in [11,15,16], where v0 is the speed of the electron beam. The electron conductivity of the graphene layer is calculated using the Drude model [16,43], in terms of the tunable chemical potential μc and the electron lifetime τ. Using the calculated electromagnetic field amplitude A4 in region III together with the boundary conditions given in Eqs. (8) and (9), the power intensity of the THz radiation is then calculated, where κi (i = 1, 2, 3) denotes the equivalent wavevector of ki when folded into the center of the BZ.

Periodic dielectric structures induced by the SAW field

In Figs. 2(a) and 2(b), we plot the spatial distribution of the electron concentration and the corresponding dielectric response of the MoS2 flake when doped with ND = 1.0 × 10^10 cm^-2 (black solid lines), 1.5 × 10^10 cm^-2 (red dash-dotted lines), and 2.0 × 10^10 cm^-2 (blue dashed lines). The amplitude, wavelength, and period of the applied SAW field were set at 8 kV/cm, 2 μm, and 2 ns, respectively, in these calculations. Because of the high in-plane carrier mobility of the MoS2 flake, the electrons and holes arrive at their equilibrium positions quickly, within 0.1 ps after application of the SAW field. The charge carriers are subsequently transported "slowly" along the z direction with the propagation of the SAW. As shown in Fig. 2(a), the electrons are localized within SAW-induced periodic "valleys" of the conduction band minimum (CBM). The dielectric responses of these periodically distributed free electrons lead to a periodic permittivity in these regions, as indicated in Fig. 2(b). Comparison of Figs. 2(a) and 2(b) shows that the "peak" permittivity values correspond to the "valleys" of the electron concentration and vice versa, as indicated by Eq. (5). Additionally, we find that the dielectric screening effect decreases rapidly as the donor density of the MoS2 flake increases. A negative permittivity, which corresponds to the dielectric response of a metal, is obtained when the doping density is as high as 2.0 × 10^10 cm^-2.

Dynamic folding of the SPP dispersion under the SAW field

When an electron beam moves on the graphene layer above the SAW-modulated MoS2 flake, the periodic dielectric structure folds the SPP dispersion back into the first BZ. The dispersion curves were calculated from Eq. (17) using τ = 0.1 ps, TSAW = 2 ns, and the SAW amplitude given above; the folded crossing points of the SPP dispersion with the electron-beam line lie within the cone of the light line, so that energy and momentum conservation permit THz emission.
As shown in Fig. 4, the radiation frequency is determined by the crossing point of the SPP dispersion curve and the electron beam line. To tune this radiation frequency, we adjust the chemical potential of the graphene layer μc over the range from 0.35 to 0.55 eV, the MoS2 flake doping density ND from 1.0 × 10^10 to 1.4 × 10^10 cm^-2, the SAW field period TSAW from 1.0 to 1.8 ns, and the SAW field wavelength λSAW from 1 to 5 μm. The SPP dispersion curves and their crossing points with the electron beam lines calculated using these parameters are presented in Figs. 5(a)-5(d). Because the size of the BZ varies with the wavelength of the SAW field, the x-axis in Fig. 5(d) is labeled in units of 2π/μm rather than 2π/λ. Figure 5 shows that the slope of the SPP dispersion curves varies with changes in the chemical potential, the donor density, and the period and wavelength of the SAW field, and this forms a crossing region with the dispersion curve of the electron beam. We label this crossing region as the working region of the THz radiation and have highlighted it in green.

In Figs. 6(a) and 6(b), we plot the modulated THz radiation frequencies extracted from the working region by varying the wavelength and period of the SAW field, respectively. The THz radiation frequencies were calculated for several combinations of the chemical potential μc and the doping density ND; the period and the wavelength of the SAW field were fixed at 2 ns in Fig. 6(a) and at 2 μm in Fig. 6(b), respectively, by varying the SAW field propagation velocity. Figure 6(a) shows that the radiation frequencies decrease from approximately 20 THz to a few THz when the SAW field wavelength increases from 0.5 to 5 μm. The red shift in the THz emission is the result of a reduction in the size of the BZ with increasing SAW field wavelength. In contrast to Fig. 6(a), we see a blue shift in THz emission with increasing SAW field period in Fig. 6(b). This blue shift can be understood from the curves in Fig. 5(c), where the slopes of the SPP dispersion curves increase with increasing TSAW and thus shift the working region to a higher frequency range. Additionally, the blue shift in the THz radiation frequency with increases in the chemical potential of the graphene layer and the donor density in the MoS2 flake can be understood from the curves in Figs. 5(a) and 5(b), respectively.

In Figs. 7(a) and 7(b), we plot the THz radiation intensity as a function of the SAW field wavelength and period, respectively, for the same parameter sets; the intensity increases toward a peak value whose position depends on the MoS2 flake doping density and the chemical potential, and the intensity is also replotted as a function of the radiation frequency using the relation given in Fig. 6(a). Having considered modulation of the THz radiation via the SAW field wavelength, we now turn to the effect of the period of the SAW field on the THz emission. As Fig. 7(b) shows, the THz radiation intensity increases slowly when the SAW field period is less than 1.5 ns and reaches a peak value when TSAW increases to approximately 1.8 ns. When the chemical potential μc and the doping density ND are reduced, a SAW field with a longer period is required to obtain the peak THz radiation value. In addition, the peak THz radiation values remain nearly constant for various doping densities and chemical potentials. As indicated by Eq. (5), the permittivity of the free electrons is proportional to the square of the SAW frequency ωSAW, i.e., inversely proportional to the square of the SAW period TSAW.
To obtain the same value of the modulated permittivity εII in region II, a SAW field with a shorter period is required to balance the effects of a higher doping density ND and chemical potential μc. In Fig. 7(d), we plot the THz radiation intensity presented in Fig. 7(b) as a function of the radiation frequency, using the relationship between the radiation frequency and the SAW field period given in Fig. 6(b). Interestingly, Fig. 7(d) shows that both the intensity and the frequency of the THz radiation remain nearly constant for the various chemical potentials and doping densities. This behavior can be understood as follows. In systems with fixed chemical potential, the SAW field period changes with the variation of the doping density ND to keep the value of εII constant, and this leads to the same radiation intensity and frequency, as indicated by Eqs. (12)-(14) and Eq. (17). For systems with different chemical potentials, a SAW field with a longer period is required to balance the reduction of the chemical potential in order to reach the peak THz radiation intensity.

To estimate the efficiency of the conversion of the SPPs into THz radiation, we calculate the conversion efficiency. Comparing Figs. 7 and 8, we see the same tuning of both the conversion efficiency and the radiation intensity produced by variation of the wavelength and the period of the SAW field. These results indicate that the large THz radiation intensity values originate from the high efficiency of the energy conversion from the SPP resonance to the THz light. Additionally, the maximum conversion efficiency of as much as 0.9 presented in Fig. 8 indicates the feasibility of THz radiation generation using SAW-field-modulated SPP resonance in graphene-MoS2 devices. Before concluding, we briefly discuss the relation between the chemical potential of the graphene layer and the doping of the van der Waals heterostructure assumed in our model calculations for realistic systems.

Conclusion

In summary, we have presented a theoretical study of THz radiation from SPPs that are excited on a graphene layer by a moving electron beam and modulated by an external SAW field. The spatially periodic permittivity of the MoS2 flake is obtained using the Drude model with self-consistently calculated charge carrier distributions that are modulated by the SAW field. By folding the crossing point of the SPP dispersion curve with the electron beam line into the center of the BZ, so that the momenta of the SPPs and the electron beam converge within the cone of the light line, the transformation of the SPPs into THz radiation is achieved. The frequency and intensity of the THz radiation can be tuned by varying the MoS2 flake doping density, the chemical potential of the graphene layer, and the period and wavelength of the applied SAW field. Based on our calculations, a maximum conversion efficiency of as much as 0.9 is obtained for the energy transformation from the SPP resonance to the THz emission. Our results suggest an exciting opportunity for the development of dynamically tunable THz sources based on SPPs in a graphene layer.
2019-04-26T14:16:18.584Z
2019-04-05T00:00:00.000
{ "year": 2019, "sha1": "74b8888e569b3ce605edc93403151df46f8b3622", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1364/oe.27.011137", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "09729a2ddfeecd3a830d06b32697102f538e121f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
21820965
pes2o/s2orc
v3-fos-license
Bilateral lichen striatus: A case report with review of literature Lichen striatus is a self-limiting dermatosis presenting with sudden eruption of lichenoid papules along the lines of Blaschko. A 5-year-old girl presented with asymptomatic hypopigmented linear lesions over both upper limbs. The histopathological examination revealed spongiosis, vacuolar alteration of the basal layer and lymphocytic exocytosis with a mild-to-moderate perivascular mononuclear infiltrate in the dermis. Lichen striatus was diagnosed based upon the characteristic clinical and histopathological findings. The pathogenetic mechanism of bilateral lichen striatus is unknown at present, however, a somatic mutation in two different clones of cells can be a possibility. Introduction Lichen striatus is an uncommon, self-limiting linear dermatosis that predominantly affects children of 5-15 years of age. It is diagnosed based upon its appearance and morphological expression along the lines of Blaschko. [1] It usually appears as a sudden eruption of small skin-colored or pink lichenoid papules forming a continuous or interrupted, linear band on the limbs or trunk. The papules are usually asymptomatic, flat topped, smooth or scaly and evolve within several days to weeks. In dark-pigmented individuals, it may appear as a band-like area of hypopigmentation. [2] The lesions may extend from a few centimeters to involve the full length of an extremity. It is usually unilateral and single, although, bilateral or multiple parallel bands have been documented. [1][2][3][4] To the best of our knowledge, only four cases of bilateral lichen striatus have been previously reported in the literature. Case Report A 5-year-old girl presented with a hypopigmented linear band on the left upper extremity since 1 month. The lesions first appeared on the inner aspect of the left forearm and later extended linearly to involve the posterior aspect of the arm and trunk on the same side [ Figure 1]. Further, her parents had also noticed hypopigmented pin head-sized asymptomatic raised lesions on the contralateral forearm in the last 5 days [ Figure 2]. The lesions were not itchy. Nails were uninvolved. There were no features suggestive of atopy. Skin biopsy from the right forearm showed mild hyperkeratosis, spongiosis, vacuolar alteration of basal layer and lymphocytic exocytosis with a mild-to-moderate perivascular mononuclear inflammatory infiltrate and melanin incontinence in the dermis [ Figure 3]. A distinctive feature of lichen striatus is a dense infiltrate extending deep into the dermis around the hair follicles and eccrine sweat glands and ducts in some cases, which was absent in our case. It was diagnosed as a case of lichen striatus based upon the characteristic clinical and histopathological findings. Emollients were prescribed for treatment and the lesions resolved within 3 months leaving no sequelae. Discussion The pathogenesis of lichen striatus is elusive, however, various etiological factors have been implicated. The most commonly accepted hypothesis is that of environmental stimuli acting in a genetically predisposed individual. [5] Linear dermatoses such as lichen striatus follow the lines of Blaschko, which are embryonic in origin. A somatic mutation in early embryogenesis results in formation of an abnormal clone of cells, which on subsequent exposure to an environmental stimulus results in formation of lichen striatus. Others believe that it is secondary to an autoimmune response mediated by T cells. 
[6] Atopy has been reported to be a predisposing factor. Drugs (such as adalimumab and etanercept), BCG and hepatitis B vaccination, UV exposure from a tanning bed, minor trauma, insect bites, and viral infections (such as varicella, influenza, and human herpes viruses 6 and 7) have been reported as triggers. [7][8][9] The histopathological findings are nonspecific and vary depending on the stage of evolution. Usually a superficial perivascular lymphohistiocytic infiltrate is seen, which may extend focally into the lower part of the epidermis, causing vacuolar alteration of the basal layer with melanin incontinence. The epidermal reaction pattern may include spongiosis, focal parakeratosis, and lymphocyte exocytosis. [10] The course of LS is self-limiting, usually regressing spontaneously within 3-12 months, and thus requires symptomatic treatment only. The patient should be reassured, and emollients and topical steroids may be used to relieve dryness and pruritus. Post-inflammatory hypopigmentation or hyperpigmentation can develop as a sequela. [1] Our patient had lesions consistent with the clinical and histopathological findings of lichen striatus; however, they were distributed bilaterally. To the best of our knowledge, only four cases of bilateral LS have been described previously [Table 1]. [1][2][3][11] Although the pathogenesis is unknown at present, a somatic mutation in two different clones of keratinocytes can be a possibility.

Table 1. Previously reported cases of bilateral lichen striatus
- Kurokawa et al. (2004) [1]: age 25 years; bilateral lichen striatus on the lower extremities of fourteen years' duration; histopathology showed a lichenoid tissue reaction with foci of spongiosis and perivascular and perieccrine duct inflammatory cell infiltration; treated successfully with topical corticosteroid ointment applied for 10 days.
- Aloi et al. (1997) [2]: age 5 years; linear dermatosis along the Blaschko lines, distributed asymmetrically and bilaterally on the face, neck, trunk, and limbs, of 1-month duration; histopathology showed an atrophic epidermis with foci of spongiosis, parakeratosis, and lymphocytic exocytosis, and a lichenoid infiltrate with patchy perivascular and periadnexal infiltrate; emollients were given, and the lesions regressed with residual hypopigmentation 16 months later.
- Patri (1983) [3]: age 8 years; pink-violet papules in a linear and arciform arrangement on the trunk, arms, neck, buttocks, and right leg; histopathology showed mild acanthosis, edema, and a moderate lymphocytic infiltrate obscuring the dermal-epidermal junction and in perivascular areas; follow-up not mentioned.
- Mopper et al. (1971) [11]: age 2 years; 12 linear lesions involving the arms, back, chest, and buttocks of 2 months' duration; histopathology showed an acanthotic and parakeratotic epidermis with destruction of the basal layer and a few dyskeratotic cells, and a heavy infiltrate in the papillae and around subpapillary vessels with focal extension to deeper areas; no treatment was given, and complete resolution was seen in 3 months without any sequelae.

Financial support and sponsorship
Nil.

Conflicts of interest
There are no conflicts of interest.
2018-04-03T01:15:07.784Z
2017-07-01T00:00:00.000
{ "year": 2017, "sha1": "66e078108e1b330c892e6c4a8f9ec79a00e0a05f", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/idoj.idoj_304_16", "oa_status": "GOLD", "pdf_src": "WoltersKluwer", "pdf_hash": "904e399ca9e16170330b261d7ab26214e68c75e1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
151137054
pes2o/s2orc
v3-fos-license
Market Liquidity and Convexity of Order Book (Evidence From China)

Market liquidity plays a vital role in the field of market micro-structure, because it is the vigor of the financial market. This paper uses a variable called convexity to measure the potential liquidity provided by the order-book. Based on the high-frequency data of each stock included in the SSE (Shanghai Stock Exchange) 50 Index for the year 2011, we report several statistical properties of convexity and analyze the association between convexity and some other important variables (bid/ask depth, spread, volatility, and return).

Introduction

Liquidity is a fundamental factor in financial markets. If a market lacks liquidity, market orders cannot be executed at a stable price level. Additionally, a lack of liquidity causes a considerable extra cost, the bid-ask spread, for immediate transactions, and a higher return volatility resulting from a more sensitive price impact. However, liquidity is difficult to measure. A number of variables have been used to capture liquidity [1][2][3], such as depth, spread, and immediacy. The best bid/ask depth or the bid-ask spread is widely used to measure the liquidity close to the market price. However, a few micro-structure studies have found that limit orders beyond the best quote also contain useful liquidity information that affects the price discovery process [4]. In other words, the whole order-book, including limit orders at different quotes, could improve the transparency of the market and reduce price volatility. Furthermore, limit orders beyond the best quote also provide a good explanation for some interesting phenomena. Many scholars have studied the shape of the price impact function and report various results for the curve in different markets [5][6]. Nonlinearity [8] is a significant feature of the curve, and it can be explained well by the shape of the order-book [9][15][16]. Therefore, it is necessary to study the whole order-book in order to uncover the potential liquidity information hidden behind the limit orders beyond the best quote. However, recording and analyzing a high-frequency order-book dataset is hard work. A complete order-book often contains 6 to 10 quotes and must record all the depth at those quotes, so a high-frequency order-book dataset is massive. Fortunately, with the development of electronic trading and computer technology, we are now able to obtain such datasets, and many scholars have focused on this field. Not surprisingly, their results are not consistent [5][6][7], just as in the study of the price impact function. The heterogeneity of different countries and markets may be the root cause of this diversity. This paper uses a high-frequency dataset covering all stocks in the SSE 50 index in 2011 to investigate this issue and uncover the pattern in the Chinese stock market.

Market, Data and Measure of Order-Book

The SSE is purely driven by an order-book, with no market makers. A whole order-book always contains many quotes and the corresponding depth at those quotes. From 'TinySoft.Net', we can obtain an order-book with the 5 best quotes on both the buy side and the sell side [10]. We find that the convexity of the curve varies across markets and countries, so studying the curve in the Chinese market is an interesting problem. We therefore need an index to measure the convexity of the curve based on high-frequency data.

Basic Statistical Properties

Here Dk denotes the number of trading days of stock k.
σk,d,t is a variable used to estimate the volatility of returns [11]; Barndorff-Nielsen and Shephard (2002) prove the convergence result that justifies this realized-volatility estimator. Our summary statistics show a positive convexity, in contrast to the prediction of Glosten (1994). The finding implies that more limit orders are placed at quotes away from the mid-quote. This pattern may be caused by the lower transparency of the market and by the different trading mechanism, namely T+1 settlement and the absence of market makers. A less transparent market increases the cost of adverse selection, so uninformed traders are willing to submit a higher quote at the cost of extra waiting. Additionally, the T+1 rule prevents high-frequency traders from profiting from the bid-ask spread, so depth at the best quote decreases significantly.

Time-Difference Correlation Analysis

Analyzing the time-difference correlation is a fundamental step when we study the dynamic process of convexity. The autocorrelation coefficients of convexity (ask side) display an obvious heavy tail. On the other hand, when we compute the partial autocorrelation coefficients in a similar way, we find that they decay rapidly. Therefore, an AR model is suitable for describing the dynamic process of convexity, and fitting the decay of the autocorrelation curve is an effective method. We are mainly concerned with whether the fitted exponent b for each side is less than 1, because this is evidence of long-term memory [12]. We then estimate the corresponding equation; the results are reported in Tables 2 and 3. We can see that the coefficient is significantly negative and close to 0. A regression model can provide a more reliable result; we estimate the corresponding equation and report the result in Table 4.

Intraday Pattern of Convexity

We find an upward intraday pattern of convexity (ask side). At the beginning of a trading day, traders are willing to submit a higher quote to decrease the cost of adverse selection due to asymmetric information. As trading proceeds, information spreads through the market and the cost of adverse selection decreases rapidly. The asset price tends to an equilibrium point and traders submit more limit orders at the best quote. The convexity then increases significantly.

Dynamic Adjustment of Convexity

The time-difference correlation analysis reveals that an AR model is suitable for describing the dynamic process of convexity on both sides. However, we can include additional variables that may affect the adjustment of convexity, such as return and volatility. We find significant coefficients in the regression model; this phenomenon reflects characteristics of the financial market and can be explained by investor behavior. For each stock, we estimate a regression that relates the adjustment of the log convexity to its own lag, the stock return, and the volatility, separately for the bid side and the ask side (the ask-side results are reported in Table 7). We can see that most of the stocks present a significantly positive return coefficient on the bid side. This result reveals that a positive return affects traders' expectations: they submit a higher quote to avoid an extra cost in the future due to a higher expected price. Conversely, a negative return induces traders to submit a lower quote for an extra benefit. On the ask side, on the other hand, we obtain a negative return coefficient. This finding can be explained similarly to the bid side: a positive return impels traders to submit a higher quote because of a higher expected price, and a negative return has the opposite effect. Most of the volatility coefficients on both the bid and ask sides are significantly negative in our regression model, and the economic interpretation of this phenomenon is intuitive: higher volatility indicates more serious information asymmetry, which impels traders to submit quotes away from the mid-quote in order to avoid the cost of adverse selection.
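A minimal sketch of such a per-stock adjustment regression is given below. It is illustrative only (not the authors' code), the exact specification in the paper may differ, and the panel data frame and its column names are hypothetical.

```python
# Sketch of a per-stock regression relating the adjustment of log convexity to its
# lagged level, the lagged return, and the lagged volatility (hypothetical data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_adjustment(df):
    df = df.sort_values("time")
    y = np.log(df["convexity"]).diff()             # adjustment of log convexity
    X = pd.DataFrame({
        "lag_log_c": np.log(df["convexity"]).shift(1),
        "lag_ret": df["ret"].shift(1),
        "lag_vol": df["vol"].shift(1),
    })
    data = pd.concat([y.rename("d_log_c"), X], axis=1).dropna()
    model = sm.OLS(data["d_log_c"],
                   sm.add_constant(data[["lag_log_c", "lag_ret", "lag_vol"]]))
    return model.fit()

# results = {stock: fit_adjustment(g) for stock, g in panel.groupby("stock")}
```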
Similar results have shown that the depth and the spread of an order-book are also affected by volatility, but our finding focuses on the shape of the order-book.

Contribution to Price Discovery

Following the analysis of Weber and Rosenow (2005), we consider a hypothetical return constructed from the order-book; the results are reported in Table 8. We find a positive effect on the subsequent log price log pk,d,t. That means a thick order-book close to the mid-quote on the bid side will increase the price in the next 5 minutes. On the contrary, a similar analysis shows that a thick order-book close to the mid-quote on the ask side will decrease the price in the next 5 minutes. Therefore, the shape of the order-book contributes to price discovery, even though the regression model contains the observed lagged return rk,d,t-1. Incidentally, rk,d,t-1 also has an effect of its own, with a positive coefficient, and this illustrates the trend-chasing behavior of traders.

Conclusion

In this paper, we analyze a liquidity variable named convexity and report its summary statistics. We find a positive convexity, which contradicts previous conclusions. The time-difference correlation analysis implies long-term memory in convexity and an anti-correlation in its short-term fluctuations on both the bid and ask sides. Similar to traditional research on the spread, we find an intraday pattern of convexity, which shows that convexity increases and approaches an equilibrium during the day. This pattern can be explained by the spreading of information. Additionally, we use a regression model to estimate the dynamic adjustment of convexity and find a significant impact of return and volatility on both sides. Furthermore, we find that the order-book contributes to price discovery, with opposite effects on future prices from the two sides. These findings illustrate that convexity is an effective measure of market liquidity and that the depth away from the mid-quote also plays a significant role in improving the transparency of the financial market.
2019-03-29T11:34:03.539Z
2012-11-09T00:00:00.000
{ "year": 2012, "sha1": "ab2a45c1d1b246f60f677ad1cd5b589221394f14", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "d1048f9970e24c5aa7bb9f977bd19226f3a71fa8", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics" ] }
16315149
pes2o/s2orc
v3-fos-license
Weakly almost periodic functionals on the measure algebra It is shown that the collection of weakly almost periodic functionals on the convolution algebra of a commutative Hopf von Neumann algebra is a C$^*$-algebra. This implies that the weakly almost periodic functionals on $M(G)$, the measure algebra of a locally compact group $G$, is a C$^*$-subalgebra of $M(G)^* = C_0(G)^{**}$. The proof builds upon a factorisation result, due to Young and Kaiser, for weakly compact module maps. The main technique is to adapt some of the theory of corepresentations to the setting of general reflexive Banach spaces. Introduction For a topological (semi-)group G, the space of weakly almost periodic functions on G is the subspace of C(G) consisting of those f ∈ C(G) such that the (left) translates of f form a relatively weakly-compact subset of C(G).We denote this set by WAP(G).Then WAP(G) is a unital C * -subalgebra of C(G), say with character space G WAP .By continuity, we can extend the product from G to G WAP , turning G WAP into a compact semigroup whose product is separately continuous, a semitopological semigroup.Indeed, G WAP is the universal semitopological semigroup compactification of G. See [2] or [11] for further details.Now suppose that G is a locally compact group, so we may form the Banach space L 1 (G), which becomes a Banach algebra with the convolution product.Then L ∞ (G), as the dual of L 1 (G), naturally becomes an L 1 (G)-bimodule.We define the space of weakly almost periodic functionals on L 1 (G), denoted by WAP(L 1 (G)), to be the collection of f ∈ L 1 (G) such that the map is weakly-compact.This is equivalent to the map L f : L 1 (G) → L ∞ (G), a → f • a being weakly-compact.Ülger showed in [20] that WAP(L 1 (G)) = WAP(G), where C(G) is naturally identified with a subspace of L ∞ (G).This fact also follows easily from [22,Lemma 6.3], using the fact that if a set is relatively weakly compact, then the weak-and weak * -topology closures coincide.Both these papers use simple bounded approximate identity arguments.The definition of WAP(L 1 (G)) obviously generalises to any Banach algebra A. In general, we can say little about WAP(A), except for some interesting links with the Arens products, see [6] and references therein, or [17,Section 1.4] and [3,Theorem 2.6.15].However, motivated by the above example, we might expect that when A has a large amount of structure, WAP(A) also might have extra structure.In this paper, we shall investigate WAP(M(G)), where M(G) is the measure algebra over a locally compact group G.In particular, we shall show that WAP(M(G)) is a C * -subalgebra of M(G) * , where M(G) is identified as the dual of C 0 (G), so that M(G) * = C 0 (G) * * is a commutative von Neumann algebra. The central idea is to develop a theory of corepresentations on reflexive Banach spaces, for commutative Hopf von Neumann algebras.Our theory exactly replicates that for Hilbert spaces, but care needs to be taken to ensure that everything works in our more general setting. The connection between weakly almost periodic functionals and representations of Banach algebras goes back to Young, [21], and Kaiser, [12].For L 1 (G), there is a correspondence between (non-degenerate) representations of L 1 (G) and representations of G. 
Using Young and Kaiser's work, it is easy to see that weakly almost periodic functionals on L 1 (G) correspond to coefficient functionals for representations of G on reflexive spaces.Then multiplication of functions in L ∞ (G) corresponds to tensoring representations.The existence of reflexive tensor products (see [1] for example) hence shows that the product of two weakly almost periodic functionals is again weakly almost periodic.Of course, for L 1 (G), it is far easier to use Ülger's result, and then argue directly that WAP(G) is an algebra (which follows from Grothendieck's criteria for weak compactness, see [2]).For M(G), while M(G) = L 1 (X) for some measure space X, we do not have that X is a (semi)group, and so we turn to corepresentations, which work with the algebra M(G) * directly. The structure of the paper is as follows.We first introduce some notions from the theory of tensor products of Banach spaces, in particular the projective and injective tensor norms. We then define what a (commutative) Hopf von Neumann algebra is, and show carefully that M(G) (as well as L 1 (G)) fits into this abstract framework.For the rest of the paper, we work with commutative Hopf von Neumann algebras, the results for M(G) (and, indeed, L 1 (G)) being immediate corollaries.As an immediate application, we make a quick study of almost periodic functionals.We then turn our attention to weakly almost periodic functionals, and build a theory of corepresentations on reflexive Banach spaces.The final application is then obtained by checking that the usual way of tensoring corepresentations still works in this more general setting. For an introduction to quantum groups from a functional analysis viewpoint, [13], or the pair of articles [14] and [15], are very readable.A good starting point for details about (weakly) almost periodic functionals on general Banach algebras is [9]. A few notes on notion.We generally follow [3] for details about Banach algebras.We write E * for the dual of a Banach space E, and use the dual pairing notation µ, x = µ(x), for µ ∈ E * and x ∈ E. We write B(E, F ) for the collection of bounded linear maps from E to F , we write B(E, E) = B(E), and we write T * for the linear adjoint of an operator T . Acknowledgements: The author would like to thank Garth Dales, for bringing this problem to his attention, and for careful proofreading.Thanks to Tony Lau for providing the reference [22]. Hopf von Neumann algebras We start by recalling some elementary definitions and facts from the theory of tensor products of Banach spaces.We refer the reader to the books [18] and [7], or [8, Chapter VIII], for further details. Let E and F be Banach spaces.The projective tensor norm, • π , on E ⊗ F is defined by Then E ⊗F , the projective tensor product of E and F , is the completion of E ⊗ F with respect to • π .The projective tensor product has the property that any bounded, bilinear map ψ : E × F → G admits a unique bounded linear extension ψ : E ⊗F → G, with ψ = ψ .For measure spaces X and Y , we have that L 1 (X) ⊗L 1 (Y ) = L 1 (X × Y ).We identify (E ⊗F ) * with B(E, F * ) under the dual pairing and using linearity and continuity. 
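The displayed formula defining the projective tensor norm appears to have been lost in extraction; the standard definition, which is presumably what was intended, together with the dual pairing used in the text, reads as follows.

\[
\|\tau\|_{\pi} \;=\; \inf\Bigl\{\, \sum_{k=1}^{n} \|x_{k}\|\,\|y_{k}\| \;:\; \tau = \sum_{k=1}^{n} x_{k}\otimes y_{k} \,\Bigr\}
\qquad (\tau \in E\otimes F),
\]
\[
\langle T, x\otimes y\rangle \;=\; \langle T(x), y\rangle
\qquad (T\in\mathcal{B}(E,F^{*}),\ x\in E,\ y\in F),
\]
which implements the identification of $(E\,\widehat{\otimes}\,F)^{*}$ with $\mathcal{B}(E,F^{*})$.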
The injective tensor norm, • ǫ , on E ⊗ F is defined by regarding E ⊗ F as a subspace of B(E * , F ), where τ = n k=1 x k ⊗ y k induces the finite-rank operator Then E ⊗F , the injective tensor product of E and F , is the completion of E ⊗ F with respect to • ǫ .For locally compact Hausdorff spaces K and L, we have that C 0 (K) ⊗C 0 (L) = C 0 (K × L).We write A(E, F ) for the closure of the finite-rank operators from E to F ; these are the approximable operators from E to F .Then, almost by definition, we have that There is a canonical norm-decreasing map E ⊗F → E ⊗F .By taking the adjoint, we get an injective contraction (E ⊗F ) * → B(E, F * ).The image, equipped with the norm induced by (E ⊗F ) * , is the space of integral operators, I(E, F * ).The map E * ⊗E → E * ⊗E is injective if and only if E has the approximation property.We can regard E * ⊗E as a subspace of Similarly, we can regard E ⊗F as a subspace of I(E * , F ); here we use that fact that We say that E has the metric approximation property if and only if the map E * ⊗E → I(E * ) is an isometry onto its range, or equivalent, E ⊗F → I(E * , F ) is an isometry onto its range, for all F .There are characterisations of the (metric) approximation property in terms of finite-rank approximations of the identity on compact sets.We have that C 0 (K) and L 1 (X) have the metric approximation property for all K and X. Commutative Hopf von Neumann algebras A Hopf von Neumann algebra is a von Neumann algebra M equipped with a coproduct ∆ : M → M⊗M.Here ⊗ denotes the von Neumann tensor product.This means that ∆ is a normal * -homomorphism, and that (∆ ⊗ id)∆ = (id ⊗∆)∆, that is, ∆ is coassociative.We shall concentrate on the case where M is commutative, so that M = L ∞ (X) for some measure space X.Then M⊗M = L ∞ (X × X), and so, as ∆ is normal, it drops to give a contractive map ∆ As both M and M * are Banach algebras, we have natural module actions of M on M * and of M * on M. For the action of M on M * , we shall, for example, write F • a ∈ M * for F ∈ M and a ∈ M * .For the action of M * on M, we shall always explicitly invoke the map ∆ * or ∆. For an example of a commutative Hopf von Neumann algebra, let G be a locally compact group, and consider the algebra L ∞ (G) equipped with the coproduct ∆ defined by Then ∆ * induces the usual convolution product on L 1 (G). A slightly less well-known example is furnished by M(G).As M(G) = C 0 (G) * , we see that M(G) is the predual of the commutative von Neumann algebra C 0 (G) * * .As such, M(G) * = L ∞ (X) for some measure space X (see [19, Chapter III]), and so by the uniqueness of preduals, M(G) = L 1 (X).Let Φ be the canonical coproduct on C 0 (G), so that Φ is the We identify C(G × G), the space of bounded continuous functions on G × G, with the multiplier algebra of C 0 (G × G), and hence (see [19, Chapter III, Section 6]) we may identify We can hence regard Φ as a * -homomorphism From the above, we can identify M(G × G) with I(C 0 (G), M(G)).As M(G) has the metric approximation property, we see that M(G) ⊗M(G) is isometrically a subspace of I(M(G) * , M(G)), or equivalently, by properties of the integral operators, isometrically a subspace of I(C 0 (G), M(G)), as required. 
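Here too the displayed formulas seem to have dropped out; the standard expressions for the injective tensor norm and for the coproduct on $L^{\infty}(G)$, which are presumably what the text intends, are the following.

\[
\|\tau\|_{\epsilon} \;=\; \sup\Bigl\{\, \Bigl|\sum_{k=1}^{n} \langle\mu, x_{k}\rangle\,\langle\lambda, y_{k}\rangle\Bigr| \;:\; \mu\in E^{*},\ \lambda\in F^{*},\ \|\mu\|\le 1,\ \|\lambda\|\le 1 \,\Bigr\},
\]
\[
\Delta\colon L^{\infty}(G)\to L^{\infty}(G\times G), \qquad \Delta(F)(s,t) = F(st) \quad (s,t\in G),
\]
so that $\langle \Delta_{*}(f\otimes g), F\rangle = \int_{G}\int_{G} f(s)\,g(t)\,F(st)\,\mathrm{d}s\,\mathrm{d}t = \langle f\ast g, F\rangle$ for $f,g\in L^{1}(G)$, recovering the convolution product.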
Alternatively, for any C * -algebra A, we could define a norm on A * ⊗ A * by embedding A * ⊗ A * into (A ⊗ min A) * .This induces the operator space projective tensor norm, see [10,Chapter 7], and as A has the minimal operator space structure, it follows that A * has the maximal structure, and so this norm agrees with the (Banach space) projective tensor norm. Hence We claim that this quotient map is a * -homomorphism, for which it suffices to check that the kernel . By continuity, we see that so that στ lies in the kernel.Hence we have the following chain of * -homomorphisms say, giving rise to a * -homomorphism Then, for µ, λ ∈ M(G) and f ∈ C 0 (G), we see that so ∆ * induces the usual convolution product on M(G).We have hence shown that M(G) * is a commutative Hopf von Neumann algebra.Notice that throughout, we have actually only used the fact that G is a locally compact semigroup.For a recent survey on measure algebras, see [4], where the authors view M(G) as a Lau algebra (see [16]). Almost periodic functionals For a Banach algebra A, a functional µ ∈ A * is almost periodic if the map is compact.We denote the collection of almost periodic functionals by AP(A).Then it is easy to see that AP(A) is a closed subspace of A * .Using the viewpoint of Hopf von Neumann algebras, it is easy to see that AP(M(G)) is a C * -algebra. , we shall write f * for the pointwise complex-conjugation of f , so that f → f * is the preadjoint of the involution on L ∞ (X).We see that for f, g ∈ L 1 (X), We claim that R F = ∆(F ) * κ L 1 (X) .Indeed, for f, g ∈ L 1 (X), we have that , and hence R F is compact if and only if ∆(F ) is compact.As L ∞ (X) has the approximation property, it follows that ∆(F ) is compact if and only if Weakly almost periodic functionals We shall make use of vector valued L p spaces; for a measure space X, a Banach space E, and 1 ≤ p < ∞, we write L p (X, E) for the space of (classes of almost everywhere equal) Bochner p-integrable functions from X to E. Then L p (X) ⊗ E naturally maps into L p (X, E) with dense range, inducing a norm ∆ p on L p (X) ⊗ E. This norm is studied in [7, Chapter 7].We have that L 1 (X) ⊗E = L 1 (X, E), so that ∆ 1 = • π , the projective tensor norm. It is worth noting that ∆ p is not a tensor norm, as T ∈ B(L p (X)) may fail to extend to a bounded map T ⊗ id : L p (X, E) → L p (X, E).However, note that for F ∈ L ∞ (X), then denoting also by F the multiplication operator on L p (X), it is elementary that F ⊗ id is bounded, with norm F , on L p (X, E).The norm ∆ p does satisfy the estimates We shall henceforth restrict to the case where E is reflexive.Then E * has the Radon-Nikodým property, and so L p (X, E) * = L p ′ (X, E * ) for 1 < p < ∞, where 1/p ′ = 1 − 1/p, see [7,Appendix D], or [8], for further details.We stress that even when p = 2, the dual pairing between L 2 (X, E) and L 2 (X, E * ) is always bilinear and not sesquilinear.Lemma 3.1.Let E be a reflexive Banach space, and let X be a measure space.The map Here f g denotes the pointwise product, so the Cauchy-Schwarz inequality shows that f g ∈ L 1 (X) for f, g ∈ L 2 (X). Proof.Let F ∈ L 2 (X, E * ) and G ∈ L 2 (X, E) be simple functions, so that there exists a disjoint partition of X, say (X k ) n k=1 , and Here we write χ X k for the indicator function of X k .Hence we see that As the simple functions are dense in L 2 (X, E), respectively, L 2 (X, E * ), we conclude that the map Λ : As E is reflexive, we may identify (E * ⊗E) * with B(E). 
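For comparison with the weakly almost periodic case treated next, the definition of AP(A) and the convolution product on M(G) induced by the coproduct can be displayed explicitly; both are standard formulas restating the text.

\[
\mathrm{AP}(A) \;=\; \bigl\{\, \mu \in A^{*} \;:\; A \to A^{*},\ a \mapsto \mu\cdot a\ \text{is compact} \,\bigr\},
\]
\[
\langle \mu\ast\lambda, f\rangle \;=\; \int_{G}\int_{G} f(st)\,\mathrm{d}\mu(s)\,\mathrm{d}\lambda(t)
\qquad (\mu,\lambda\in M(G),\ f\in C_{0}(G)).
\]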
By a suitable choice of f, g, x and µ, we see that W ≥ π , and so we conclude that actually W = π .Hence Λ * is an isometry, so Λ must be a metric surjection, as required. For F ∈ L ∞ (G) and T ∈ B(E), we see that F ⊗ T extends to a bounded linear map on This is then a dual Banach algebra, that is, multiplication in L ∞ (X)⊗B(E) is separately weak * -continuous.See [6,Section 8], where similar ideas are explored.Proposition 3.2.The above lemma isometrically identifies B(L 1 (X), B(E)) with a subspace of B(L 2 (X, E)), under the mapping Λ * .The image of Λ * is precisely L ∞ (X)⊗B(E). Proof.Standard Banach space theory shows that the image of Λ * is equal to Hence we need to show that ker Λ = Z. As L 1 (X) has the approximation property, for each non- Informally, the above proposition allows us to write which is reminiscent of the operator space projective tensor result that (M * ⊗N * ) * = M⊗N , for von Neumann algebras M and N , see [10,Theorem 7.2.4].The important point for us is that we have turned B(L 1 (X), B(E)) into an algebra.It is multiplication in this algebra which will ultimately give rise to the multiplication of weakly almost periodic functionals in L ∞ (X).Now let L ∞ (X) be a Hopf von Neumann algebra, so it admits a coproduct ∆.We have a map whose adjoint, which we denote by ∆ ⊗ id, is a map By linearity, we conclude that (∆ ⊗ id)(UV ) = ((∆ ⊗ id)U)((∆ ⊗ id)V ) for all U ∈ L ∞ (X)⊗B(E) and V ∈ L ∞ (X) ⊗ B(E).By weak * -continuity, this must also hold for V ∈ L ∞ (X)⊗B(E). We now wish to adapt leg numbering notation to our setup.Given W ∈ B(L 2 (X, E)), define W 23 ∈ B(L 2 (X × X, E)) by Let χ : L 2 (X × X) → L 2 (X × X) be the "swap map", defined on elementary tensors by χ(f ⊗ g) = g ⊗ f .For W ∈ L ∞ (X × X)⊗B(E), it is clear that (χ ⊗ id)W and W (χ ⊗ id) both also lie in L ∞ (X ×X)⊗B(E).For W ∈ L ∞ (X)⊗B(E), we define W 13 = (χ⊗id)W 23 (χ⊗id) ∈ L ∞ (X × X)⊗B(E).Theorem 3.4.Let (L ∞ (X), ∆) be a Hopf von Neumann algebra, and let E be a reflexive Banach space.Let π : L 1 (X) → B(E) be a bounded linear map, giving rise to W ∈ L ∞ (X)⊗B(E).Then π is a homomorphism, with respect to ∆ * , if and only if We now come to a proof where "Sweedler notation" would help greatly, but we should perhaps, at least once, give a formal proof.Informally, we shall "pretend" that W (g which completes the proof. To make this rigorous, for ǫ > 0, we can find a finite sum of elementary tensors and, as above, As ǫ > 0 was arbitrary, the proof is complete. Application to weakly almost periodic elements The following result was first shown by Young in [21], building upon [5], and was recast in terms of the real interpolation method by Kaiser in [12] (see also the similar arguments in [6]). We claim that U 23 V 13 = V 13 U 23 , from which it follows, from Theorem 3.4, then π is a homomorphism. Finally, we have that So we conclude that F 1 F 2 = F ∈ WAP(L 1 (X)), showing that WAP(L 1 (X)) is an algebra.
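The displayed condition in Theorem 3.4 appears to have been lost in extraction; for a corepresentation the standard identity, consistent with the leg-numbering notation used in the surrounding argument, is presumably

\[
(\Delta\otimes\mathrm{id})(W) \;=\; W_{13}\,W_{23}
\qquad\text{in } L^{\infty}(X\times X)\,\overline{\otimes}\,\mathcal{B}(E).
\]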
Design of High Frequency Digital Transceiver in Coastal Radio Station and Shipborne A novel high-frequency (HF) digital transceiver scheme that uses the audio port as the signal modulation and demodulation interface is proposed. The system adds a human-computer interaction module, a business service module, a base-station modulation and demodulation module, and a shipboard modulation and demodulation module, and uses modern digital signal processing algorithms to complete signal modulation and demodulation in the digital domain. This overcomes the shortcomings of analog demodulation circuits, such as poor anti-interference ability, low spectrum utilization, and poor user experience. The simulation results show that when the baseband SNR is 4.9 dB, the system BER reaches about 10−4, so the design meets the ITU-R M.2058-0 recommendation and practical communication requirements. Introduction The modernization of the Global Maritime Distress and Safety System (GMDSS) will fully integrate digital technology, broadband technology, mobile terminals, and other modern communication technologies, and the future development of coastal radio should meet the requirements of digitalization. NCSR (Navigation, Communication and Search and Rescue) proposes to replace NBDP (Narrow-Band Direct-Printing) with an HF digital transmission system. China will also cancel the NBDP service in due time and reassign part of the frequency band to the research and application of HF digital technology [1][2]. At present, the high-frequency circuit of a coastal radio station is mostly a single-sideband analog voice circuit [3], which suffers from poor anti-interference ability and low spectrum utilization. This simple voice communication mode restricts the communication performance and user experience of coastal radio stations. Based on the existing coastal radio communication equipment, this paper designs a data communication interface for the radio equipment, digitizes the modulation and demodulation equipment, and adds channel coding, high-order modulation, and other methods to improve the reliability and transmission rate of the system. As a result, a data transmission function is realized, and both communication performance and user experience improve. The scheme of coastal radio system The scheme upgrades the existing equipment and facilities of the coastal radio station. The reformed system consists of six parts: a human-computer interaction module, a business service module, a base-station modulation and demodulation module, an HF transmitter module, an HF receiver module, and a shipboard modulation and demodulation module. The overall framework of the system is shown in Figure 1. Fig.1 The scheme of coastal radio system The human-computer interaction module and the business service module should be upgraded to develop new digital services. The HF transmitter module and HF receiver module can reuse the existing equipment and facilities of the old coastal radio station. The modulation and demodulation modules are the key parts of the system. Base station modulation and demodulation module The base-station modulation and demodulation module is the core of the digital transformation of the coastal radio high-frequency circuit. It is connected with the existing base-station receiving module and base-station transmitting module of the coastal radio.
It mainly includes central control and information processing unit, transmitting unit, receiving unit, data interaction unit, time synchronous unit and system real-time monitoring unit. The system function module is shown in Figure 2. Fig.2 Function module of modem system Transmitting module: It obtains the instructions such as messages to be sent and transmitting parameters from the external network, and then store the messages to be sent into the message queue with certain rules. The modulation unit encodes the information data to be transmitted in the message queue by channel coding, framing, GMSK modulation, and outputs 1.8kHz audio data. Through the 3 existing PCM equipment of coastal radio station, the signal is directly sent to the transmitter, which then modulates the signal to the short wave band for transmission. Receiving module: The high frequency antenna of coastal radio station is used to receive the signals from other shore stations or shipyards within the communication range. The received signals are converted into baseband signals by the receiver. The GMSK waveform signals are demodulated by the demodulation unit, and the demodulated code stream is de interleaved and decoded accordingly. Finally, the processor analyzes the code stream, recovers the original data, and transmits it through the network Port, WiFi or IEC port. GMSK modulation algorithm is adopted in the design. The scheme of GMSK modulation and demodulation is shown in Fig.3. GMSK modulation signal is a kind of signal with continuous phase and constant envelope. It has the characteristics of fast channel sidelobe fading, strong anti-interference, high frequency band utilization and small out of band radiation. It is developed on the basis of MSK (minimum frequency shift keying). The original signal is firstly filtered by Gaussian filter, and then modulated by MSK. The specific process is shown in Figure 4 [4]. (1), where, nT 1 , T is the period. And GMSK signal can be expressed by formula (2), where φ t is the phase of the signal, which can be expressed by formula (3). S cos cos cos sin sin (2) φ t ∑ The implementation of GMSK is as below: the phase of GMSK modulation signal is directly obtained by using look-up table method; the RF modulation signal is directly generated by using two-point modulation and PLL phase-locked loop. The specific operation process is shown in Figure 4. Fig.4 The realization process of GMSK Modulation GMSK demodulation adopts differential demodulation algorithm. The phase difference of a symbol time interval of GMSK baseband signal is recorded as Δ 1 . It can be concluded that the symbol of Δ is the same as the symbol of the symbol , as shown in equation (4) [5]. Where q(t) is the shaping filter. The a n-2 is the main term, and the other two are ISI interference terms. The above formula can be used as the final input signal of the decoder. The shipborne modulation and demodulation system The shipborne modulation and demodulation system mainly includes modem module and app unit, as shown in Figure 5. The onboard modem module can not only be directly connected with the ship HF radio equipment, but also can be connected with the mobile app and PC through WiFi. Similar to the base station modulation and demodulation module, the output signal of the shipboard modulation and demodulation module is 1.8kHz GMSK modulated signal. At the same time, the modem module receives the audio signal from the shipboard radio equipment and demodulates the signal. 
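As a rough illustration of the GMSK processing described above (not the authors' implementation, whose parameters beyond the 1.8 kHz audio centre frequency are not given), the following minimal NumPy sketch generates a GMSK baseband signal by Gaussian-filtering the NRZ symbol stream and integrating the frequency pulse, and recovers the bits with a one-symbol differential detector of the kind mentioned in the text. The BT product, samples per symbol, pulse span, and sampling alignment are illustrative assumptions.

import numpy as np

def gaussian_pulse(bt, sps, span=4):
    """Truncated Gaussian frequency pulse for GMSK; bt is the bandwidth-time product."""
    t = np.arange(-span * sps // 2, span * sps // 2) / sps    # time axis in symbol periods
    g = np.exp(-2.0 * (np.pi * bt * t) ** 2 / np.log(2.0))
    return g / g.sum()                                        # unit area -> pi/2 phase step per bit

def gmsk_modulate(bits, bt=0.3, sps=8, span=4):
    """Complex GMSK baseband exp(j*phi(t)) for a 0/1 bit stream (mapped to +/-1 NRZ)."""
    nrz = 2.0 * np.asarray(bits, dtype=float) - 1.0
    upsampled = np.zeros(len(bits) * sps)
    upsampled[::sps] = nrz
    freq = np.convolve(upsampled, gaussian_pulse(bt, sps, span))
    phase = (np.pi / 2.0) * np.cumsum(freq)                   # integrate the filtered frequency pulses
    return np.exp(1j * phase)

def gmsk_demod_differential(baseband, sps=8, span=4):
    """One-symbol differential detection: sign of Im{ s(t) conj(s(t-T)) }, sampled once per bit."""
    delayed = np.concatenate([np.ones(sps, dtype=complex), baseband[:-sps]])
    diff = np.imag(baseband * np.conj(delayed))
    start = span * sps // 2 + sps // 2                        # align decision window with pulse centres
    return (diff[start::sps] > 0).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    tx = rng.integers(0, 2, 2000)
    rx = gmsk_demod_differential(gmsk_modulate(tx))
    n = min(len(tx), len(rx))
    print("noise-free bit errors:", int(np.sum(tx[:n] != rx[:n])))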
Fig.5 The scheme of shipborne modem and APP system The shipboard app unit provides data communication functions to the crew, including text, voice, and images, and consists mainly of a modem control unit, a message processing unit, a local database, and a display control unit. (1) Modem control unit. It handles data interaction with the shipboard modem unit. After the app starts, it connects to the shipboard modem unit; only after the connection succeeds can the user proceed. (2) Address book function. The system assigns a unique identification code to each communication user, so as to realize dialing and point-to-point communication. (3) Communication services. The app provides two kinds of communication services: single chat and group chat. In single-chat mode, users communicate with only one platform, while in group-chat mode, users can communicate with multiple platforms. The communication content includes text, voice, and image data. Simulation result The simulated BER performance of the system [6] is shown in Figure 6. The BER improves gradually as the SNR increases. When the baseband SNR is 4.9 dB, the system BER reaches about 10−4, which fully meets the practical communication requirements. The requirement for the digital NAVDAT system specified in the ITU-R M.2058-0 recommendation is BER ≤ 10−4 at SNR = 10 dB. By comparison, the proposed system has a design margin of about 5 dB and is suitable for use over the audio channel of shortwave radio stations, fully meeting the needs of the system. Conclusion This paper proposes a novel high-frequency digital transceiver scheme for coastal radio stations. The simulation results show that the system meets the design requirements. It not only retains the original analog HF coastal radio system but also makes full use of the advantages of digital processing. Relying on the existing transceiver equipment of the coastal radio HF communication system, and adding the human-computer interaction module, business service module, base-station modulation and demodulation module, and shipboard modulation and demodulation module, modern digital signal processing algorithms complete signal demodulation in the digital domain and overcome the shortcomings of the current analog demodulation circuit, such as low precision, poor sensitivity, and the inability to transmit data and images. This greatly improves the anti-interference ability and user experience of the system. It also lays a foundation for practical engineering application and has good practical and promotional value.
Characteristics and Residual Health Risk of Organochlorine Pesticides in Fresh Vegetables in the Suburb of Changchun, Northeast China In this study, eleven organochlorine pesticides (OCPs) in fresh vegetables in the Changchun suburb were investigated, and their potential health risks were evaluated. The average concentrations of OCPs in edible parts of vegetables were found in the following descending order: Σhexachlorocyclohexanes (ΣHCHs) (6.60 µg·kg−1) > Σdichlorodiphenyltrichloroethanes (ΣDDTs) (5.82 µg·kg−1) > ΣChlordanes (2.37 µg·kg−1) > heptachlor (0.29 µg·kg−1). Moreover, OCPs in different types of vegetables exceeded the maximum residue limits (MRLs), and the exceeding rates in various vegetables decreased in the following order: leafy vegetables (19.12%) > root vegetables (18.75%) > fruit vegetables (3.85%). The proportions of OCPs exceeding MRL in different vegetables were found in the following descending order: Welsh onion (22.50%) > radish (18.75%) > Chinese cabbage (14.29%) > pepper (6.90%) > cucumber (3.23%) > eggplant (2.94%) > tomato (2.78%). The sources’ identification results showed that DDTs in vegetables came mainly from newly imported technical DDTs and dicofol, while HCHs originated mainly from lindane. For both adults and children, the average target hazard quotients (avg. THQ) were all less than 1, and the average hazard index (avg. HI) values were 0.043 and 0.036, respectively. There were no significant health risks associated with OCP exposure for the inhabitants of the study area. Introduction Organochlorine pesticides (OCPs) are a class of typical organic pollutants with high toxicity, persistence, and high bioaccumulation and pose a threat to human health by penetration through the food chain [1]. Numerous studies have revealed that, after entering the human body, OCPs could cause toxic effects on the endocrine, immune, and nervous systems, whereas some OCPs have even exhibited carcinogenic effects [2]. Organochlorine pesticides have played an important role in controlling agricultural pests and diseases and increasing agricultural production and farmers' income in China. During 1950During -1980, China has been one of the countries with the largest production and use of OCPs around the world, with the cumulative application of dichlorodiphenyltrichloroethanes (DDTs) and hexachlorocyclohexanes (HCHs) of about 0.4 × 10 5 t and 4.9 × 10 5 t, respectively [3]. Although OCPs have been banned in China's agricultural system for nearly 40 years, they are still frequently detected in environmental media (such as soil, water, and sediments) and food (such as vegetables, fruits, and fish) because of their strong chemical stability and long-distance migration [3,4]. Vegetables have become one of the major foods in people's daily diet due to their high nutritional value and provide various vitamins, minerals, dietary fiber, and antioxidants for the human body. The physiological characteristics of vegetables make them more 2 of 14 susceptible to pesticide pollution than crops [5]. Vegetables will suffer from extensive pests and diseases during their growth. Therefore, it is necessary to frequently use OCPs to control pests, which also ensures the yield and quality of vegetables. Although the use of OCPs has brought great economic benefits, their residues in vegetables and health risks to consumers have attracted widespread attention. 
Research on the residues and distribution of OCPs in tomatoes showed that the total residues of OCPs in tomatoes were in the range of 0.062-0.402 µg·kg −1 , whereas DDTs, heptachlor, and dieldrin was also detected [6]. The maximum (max) residues of OCPs in cucumbers could reach 1.628 µg·kg −1 [7]. The monitoring results of OCPs in vegetables which were sold in Beijing (China) markets showed that the residues of DDTs and HCHs were up to 10.4 µg·kg −1 and 58.8 µg·kg −1 , respectively [8]. Liang et al. (2021) found that the max residues of HCHs and heptachlor in edible parts of vegetables in the southern Leizhou Peninsula (China) exceeded the national standard limit value, whereas the max values of the target hazard quotient (THQ) and hazard index (HI) in Chinese chives, and pepper samples were greater than 1, which might be a threat to human health [9]. Although OCPs have been banned for decades, their detection rates and residues in fresh vegetables are still high, due to which their potential health hazards to consumers cannot be ignored. As a transitional zone between agricultural production and urban living, suburban areas have become important production bases for regional vegetable products. Regarding the transportation costs and preservation requirements of vegetables, more and more suburban farmland is used to cultivate vegetables, and therefore, suburban vegetable production has become an important part of the regional agricultural economy [10]. With the growing demand for vegetables among urban residents, the degree of cultivation intensity of suburban farmland has increased significantly. The excessive use of chemical fertilizers and pesticides and the discharge of domestic and industrial wastes posed serious challenges to the quality of the suburban environment and food safety. At present, most of the research studies on the status of pollution in suburban vegetables mainly focus on heavy metals and polycyclic aromatic hydrocarbons, which come from traffic and industrial emissions and can contaminate vegetables through atmospheric deposition [11,12]. There are only a few studies focusing on pesticide residues in suburban vegetables [13], especially the OCPs. Therefore, it is necessary to conduct research on the residues and risks of OCPs in vegetables in suburban areas. Located in northeastern China, Changchun is an important agricultural production base. Vegetables constitute a major part of the diet of local inhabitants. Natives have some special dietary habits for vegetables, such as direct consumption of raw vegetables (such as Welsh onions, radishes, cucumbers, and tomatoes), and also store some nonperishable vegetables in autumn for consumption during winter-spring. Vegetables are of great significance to local residents, and it is important to understand the edible safety of vegetables in the region. In view of this, seven kinds of vegetables were collected in the suburbs of Changchun with the purposes of (1) quantifying the concentrations of OCPs in fresh vegetables; (2) identifying the potential sources of OCPs; (3) calculating the THQ and HI of OCPs to evaluate the health risks of consuming the vegetables. The results will provide evidence for policymakers to take targeted measures to reduce potential health risks from intoxicated vegetables and ensure the safe consumption of suburban vegetables. 
Study Area and Sampling Sites Changchun, the capital of Jilin Province, is located in northeast China (43°43′ N, 125°19′ E) and is also an important crop and commodity grain base in China. Changchun belongs to the continental monsoon climate, with an annual average temperature of 4.8 °C and annual average precipitation of 569.6 mm. The main soil type is typical phaeozem. The vegetable-cultivated area in the suburbs of Changchun is about 3617 hm2, whereas the vegetable yield is about 9.8 × 10^4 tons. The suburbs of Changchun were divided into a grid of 1 × 1 km cells, and 54 typical vegetable plots were selected (Figure 1). A handheld global positioning system (GPS) was used to record all sampling points. During September 2018-October 2018, 214 vegetable samples of seven types were collected and mainly included Chinese cabbage, Welsh onion, cucumber, pepper, eggplant, tomato, and radish. These vegetables are commonly planted in the local area. The same types of vegetables were collected in triplicates as sub-samples from each site and mixed thoroughly to obtain a representative sample. Approximately 500 g of the vegetable samples were packed in a polyethylene zip-lock bag and, after being numbered, transported to the laboratory. Vegetable samples were washed with deionized water and stored in a −20 °C refrigerator. Materials and Reagents OCPs standard solutions were purchased from the National Sharing Platform for Reference Materials in China, including eight OCPs mixed standard solutions (α-HCH, β-HCH, γ-HCH, δ-HCH, o, p′-DDT, p, p′-DDT, p, p′-DDD, and p, p′-DDE) and single standard solutions (cis-chlordane, trans-chlordane, and heptachlor). Acetone and n-hexane (high-performance liquid chromatography grade), anhydrous sodium sulfate (analytical grade), activated alumina (40-60 mesh), and silica gel (60-100 mesh) were obtained from Aladdin Chemical Corp., China. Anhydrous sodium sulfate was dried for 3 h at 400 °C before use and, after cooling, stored in a desiccator for subsequent use. The silica gel was activated for 12 h at 180 °C. The filter paper and cotton thread used to wrap the samples in Soxhlet extraction were sequentially washed with acetone, n-hexane, and distilled water in an ultrasonic cleaner and dried in a baking oven. Analysis and Quality Control The Soxhlet extraction and purification process of OCPs followed the previously reported modified method [14]. Extraction: Vegetable samples were cut into small pieces.
First, 30.0 g of samples were accurately weighed, and an appropriate amount of anhydrous sodium sulfate was added to them. After grinding in the mortar, the mixtures were put into the Soxhlet extractor, and a 100 mL n-hexane-acetone mixture (1:1, v:v) was added to extract for 24 h. The extracts were concentrated using a rotary evaporator to nearly dry, and 5 mL n-hexane was added to redissolve the residue and complete the solvent conversion. Purification: First, 1 g anhydrous sodium sulfate, 4 g active silica gel, 2 g active alumina, and 1 g anhydrous sodium sulfate were successively filled into the glass column from bottom to top. The extract was transferred to a purification column that was preactivated with 40 mL n-hexane. The effluent was discarded. Subsequently, the column was eluted with 30 mL of an acetone-n-hexane mixture (1:9, v:v), and the eluate was collected. The eluent was concentrated with a rotary evaporator and dissolved in 1 mL of n-hexane for determination. Analysis: The OCPs were analyzed using gas chromatography (GC-2010, Shimadzu, Japan) with an electron capture detector (ECD), equipped with an HP-5 chromatographic column (30 m × 0.25 mm × 0.25 µm film thickness, USA). High-purity nitrogen (purity > 99.99%) was used as the carrier gas. The injection and detector temperatures were set to 260 °C and 300 °C, respectively. One microliter of the extract was injected in splitless mode. The GC temperature program was set as follows: initial temperature of 100 °C held for 1 min, ramping to 190 °C at 12 °C/min and held for 8 min, then ramping to 250 °C at 3 °C/min and held for 10 min. The characteristic chromatogram of the 11 OCPs is presented in Figure 2.
The analytical procedures used in this study were conducted under strict quality assurance and quality control. Quality assurance and control were conducted using procedural blanks, spiked blanks, and duplicate samples. A signal-to-noise ratio of 3 was used to calculate the limits of detection (LODs). Recoveries were determined by spiking blank vegetable samples with a standard mix solution at a spiked concentration of 50.00 µg·kg−1. Each sample was spiked in triplicate and then analyzed using the proposed method. Table 1 presents the LODs, spike concentration, recoveries, and relative standard deviations (RSDs). The compounds to be tested in the blank samples were below the LODs and would not affect the determination of the actual samples. For quantification of OCPs, concentrations below the LODs were considered non-detectable and set to zero in the calculations. Standards of 0.10, 0.25, 0.50, 1.00, 2.00, and 4.00 µg·mL−1 were used for the calibration curves, and linear relationships with R2 > 0.99 were obtained. All concentrations of OCPs were expressed on a fresh-weight basis. Human Risk Assessment of OCPs through Vegetables Consumption The target hazard quotient (THQ) was applied to evaluate the human health risk of OCP residues in the vegetables [15]. Due to the differences between adults and children in the daily intake of vegetables and the tolerance limits of OCPs, the health risks of adults and children were evaluated separately. The estimated daily intake (EDI, µg·kg−1·d−1) depends on both the individual OCP concentrations and the daily consumption of food [13]. The EDI is calculated using Equation (1):

EDI = (C × Con) / BW (1)

where C is the concentration of OCPs in vegetables (µg·kg−1), Con is the average daily vegetable consumption of local inhabitants (242.00 g·d−1 for adults and 108.50 g·d−1 for children) [16], and BW (kg) represents the average body weight (55.90 kg for adults and 32.70 kg for children) [13]. As the evaluation standard, the THQ is the ratio of the EDI to the acceptable daily intake (ADI). The ADI values were obtained from the standard GB 2763-2021, China (National food safety standard-Maximum residue limits for pesticides in food) [17] and are listed in Table S2. The THQ is calculated using Equation (2):

THQ = EDI / ADI (2)

The multiple health risks of various OCPs in vegetables were represented by the hazard index (HI).
Moreover, based upon the daily average consumption of vegetables for a human being, HI is calculated using Equation (3). If the value of THQ or HI was less than 1, there was no obvious health risk. However, if the value of THQ or HI was greater than 1, there was a possibility of obvious toxic effects on human health. Furthermore, with the increase in the value of THQ, the probability of toxic impacts increased. The statistical characteristics of the residual concentrations of OCPs in different vegetables are presented in Table 2. The average concentrations of OCPs were in the following descending order: ΣHCHs (6.60 µg·kg −1 ) > ΣDDTs (5.82 µg·kg −1 ) > ΣChlordanes [18]. The larger leaf surface area of leafy vegetables makes them more susceptible to exposure to OCP pesticides through dry and wet depositions under atmospheric conditions [19]. Some studies reported that the absorption capacity of leafy vegetables to OCPs was higher than that of root vegetables [20], and the higher humidity in fruit vegetables would promote the degradation of pesticides [13,21]. Welsh onion (11.94 µg·kg −1 ) had the highest average concentration of HCHs among all vegetables, followed by Chinese cabbage (9.43 µg·kg −1 ) and radish (8.42 µg·kg −1 ). Vegetables with lower HCH residues were eggplant (3.14 µg·kg −1 ) and cucumber (1.91 µg·kg −1 ). The concentrations of DDTs in different vegetables showed the same concentration characteristic of: Welsh onion > Chinese cabbage > radish > pepper > tomato > eggplant > cucumber. In order to strengthen the control of pesticide residues in food, China implemented the national food safety standard-maximum residue limits of pesticides in food ( MRLs was highest in the leafy vegetables (19.12%), followed by root vegetables (18.75%) and fruit vegetables (3.85%). The exceeding rates of ΣHCHs, ΣDDTs, and ΣChlordane in vegetables were 2.80%, 3.74%, and 4.21%, respectively, whereas heptachlor did not exceed the standard. The exceeding rates of ΣHCHs, ΣDDTs, and ΣChlordanes in different vegetables were found to be in the descending order of: Welsh onion (7.50%) > radish (6.25%) > Chinese cabbages (3.57%) > pepper (3.44%) > cucumber = eggplant = tomato (0%), Welsh onion (10.00%) > Chinese cabbage (7.14%) > radish (6.25%) > pepper (3.44%) > cucumber = eggplant = tomato (0%), and Welsh onion (7.50%) > Chinese cabbages (7.14%) > radish (6.25%) > cucumber (3.23%) > eggplant = tomato (2.77%) > pepper (0%), respectively. The proportion of ΣChlordane exceeding MRLs in vegetables was higher than those of HCHs and DDTs. The main components in technical chlordane were cis-chlordane (13%), tranchlordane (11%), and heptachlor (5%). Currently, technical chlordane has been widely used as termiticide for buildings, dams, and cable wires [22]. Moreover, it has also been used in green spaces to control termites in recent years [23]. Changchun is one of the famous garden cities in China, with a total green area of 180 km 2 and a greening rate of 36.5% [24]. Because of its special geographical location, suburban vegetable fields are adjacent to or surrounded by urban green spaces. Therefore, chlordane applied in the urban green space will directly or indirectly enter the suburban vegetable fields, resulting in high chlordane concentrations in vegetables and high proportions of exceeding MRLs [25,26]. The main components of DDTs in Changchun suburban vegetables were o, p -DDT and p, p -DDT, which accounted for 47.08% and 26.98% of the ΣDDTs, respectively (Figure 3). 
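To make the risk-assessment arithmetic of Equations (1)-(3) concrete, the short sketch below evaluates EDI, THQ, and HI with the consumption and body-weight figures quoted in the text (242.00 g·d−1 and 55.90 kg for adults, 108.50 g·d−1 and 32.70 kg for children). The ADI values in the example are placeholders for illustration only; the study's own ADIs are those of GB 2763-2021 listed in Table S2.

# Minimal sketch of the EDI / THQ / HI calculation (Equations (1)-(3));
# the ADI values below are placeholders, not the values from Table S2.

CONSUMPTION_G_PER_DAY = {"adult": 242.00, "child": 108.50}   # g/d, from the text
BODY_WEIGHT_KG = {"adult": 55.90, "child": 32.70}            # kg, from the text

def edi(conc_ug_per_kg, group):
    """Equation (1): EDI (ug/kg bw/d) = C * Con / BW, with Con converted from g to kg."""
    con_kg = CONSUMPTION_G_PER_DAY[group] / 1000.0
    return conc_ug_per_kg * con_kg / BODY_WEIGHT_KG[group]

def thq(conc_ug_per_kg, adi_ug_per_kg_bw, group):
    """Equation (2): THQ = EDI / ADI."""
    return edi(conc_ug_per_kg, group) / adi_ug_per_kg_bw

def hazard_index(residues, adis, group):
    """Equation (3): HI is the sum of the THQs of the individual OCP groups."""
    return sum(thq(residues[k], adis[k], group) for k in residues)

if __name__ == "__main__":
    # Mean residues reported in the text (ug/kg fresh weight); ADIs are hypothetical placeholders.
    residues = {"HCHs": 6.60, "DDTs": 5.82, "Chlordanes": 2.37, "heptachlor": 0.29}
    adis = {"HCHs": 5.0, "DDTs": 10.0, "Chlordanes": 0.5, "heptachlor": 0.1}
    for group in ("adult", "child"):
        print(group, "HI =", round(hazard_index(residues, adis, group), 4))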
Technical DDT mixtures mainly consist of p, p -DDT (75%), o, p -DDT (15%), p, p -DDE (5%), and several other trace metabolites [1,31]. Generally, p, p -DDT can be dechlorinated into p, p -DDE under aerobic conditions and reduced to p, p -DDD under anaerobic conditions [32]. The high concentration of p, p -DDT in vegetables might be due to the presence of various DDT isomers that were not easily degraded in dicofol. Dicofol is one of the most commonly used OCPs in modern agriculture and animal husbandry [20,33]. The ratios of different isomers of OCPs can be used to identify the environmental input information of pesticides. Therefore, the ratio of p, p -DDT to its degradants can reflect the "new input or historical use" of DDTs in the environment and the residual time. A ratio of W p, p -DDT /W p, p -DDE of less than 1 indicates that DDTs are from historical inputs, while a ratio of W p, p -DDT /W p, p -DDE of greater than 1 indicates the new application of DDTs. In this study, the average value of W p, p -DDT /W p, p -DDE was 2.29 ( Figure 4). Except for eggplant, the average ratios of W p, p -DDT /W p, p -DDE in other vegetables were all higher than 1, indicating that there may be new DDTs input in the study area. In addition, the ratio of o,p -DDT and p,p -DDT was about 0.2 in technical DDTs, while the ratio is about 7 in dicofol [1,33]. The ratios of W o, p -DDT /W p, p -DDT in Changchun suburban vegetables were within the range of 0.46-4.10, which were higher than that of technical DDTs but lower than that of dicofol. It can be inferred that there might be new inputs of technical DDTs and dicofol in the study area, whereas dicofol was the main source. The residues of HCHs and DDTs in suburban vegetables of Changchun were compared with those in other regions around the world, and it was found that the average concentration of DDTs in the vegetables from Changchun suburbs (5.82 µg·kg −1 ) was significantly higher than those in Taizhou, China (0.30 µg·kg −1 ) [27], city of Northwest Russian (0.11 µg·kg −1 ) [28], Taiwan (2.51 µg·kg −1 ) [29] and Cambodia (1.85 µg·kg−1) [30], and Delhi, India (4.53 µg·kg −1 ) [18]. However, the concentration of DDTs in the vegetables from Changchun suburbs was significantly lower than that in Cape Town, South Africa (53.65 µg·kg −1 ) [20]. The average residue of HCHs (6.6 µg·kg −1 ) was much higher than those of the city of Northwest Russia (0.07 µg·kg −1 ) [28], Cambodia (0.47 µg·kg −1 ) [30] and Taiwan (3.78 µg·kg −1 ) [29], whereas it was lower than that of Delhi, India (76.55 µg·kg −1 ) [18]. The main components of DDTs in Changchun suburban vegetables were o, p′-DDT and p, p′-DDT, which accounted for 47.08% and 26.98% of the ΣDDTs, respectively (Figure 3). Technical DDT mixtures mainly consist of p, p′-DDT (75%), o, p′-DDT (15%), p, p′-DDE (5%), and several other trace metabolites [1,31]. Generally, p, p′-DDT can be dechlorinated into p, p′-DDE under aerobic conditions and reduced to p, p′-DDD under anaerobic conditions [32]. The high concentration of p, p′-DDT in vegetables might be due to the presence of various DDT isomers that were not easily degraded in dicofol. Dicofol is one of the most commonly used OCPs in modern agriculture and animal husbandry [20,33]. The ratios of different isomers of OCPs can be used to identify the environmental input information of pesticides. Therefore, the ratio of p, p′-DDT to its degradants can reflect the "new input or historical use" of DDTs in the environment and the residual time. 
A ratio of Wp, p′-DDT/Wp, p′-DDE of less than 1 indicates that DDTs are from historical inputs, while a ratio of Wp, p′-DDT/Wp, p′-DDE of greater than 1 indicates the new application of DDTs. In this study, the average value of Wp, p′-DDT/Wp, p′-DDE was 2.29 ( Figure 4). Except for eggplant, the average ratios of Wp, p′-DDT/Wp, p′-DDE in other vegetables were all higher than 1, indicating that there may be new DDTs input in the study area. In addition, the ratio of o,p′-DDT and p,p′-DDT was about 0.2 in technical DDTs, while the ratio is about 7 in dicofol [1,33]. The ratios of Wo, p′-DDT/Wp, p′-DDT in Changchun suburban vegetables were within the range of 0.46-4.10, which were higher than that of technical DDTs but lower than that of dicofol. It can be inferred that there might be new inputs of technical DDTs and dicofol in the study area, whereas dicofol was the main source. γ-HCH and β-HCH were the main components of HCHs in suburban vegetables of Changchun, with a cumulative contribution rate of 91.52%, while the contribution rates of α-HCH and δ-HCH were only 6.36% and 2.12%, respectively (Figure 3). Yi et al. (2013) [34] found that the enrichment ability of different isomers of HCHs in the soil-plant system was in the descending order of α-> β-> δ-> γ-HCH. In the present study, the residual of γ-HCH was higher in vegetables, which might be attributed to long-term exposure to lindane-containing pesticides [34]. There are currently two forms of HCHs: technical HCHs and lindane. The technical HCHs are mainly composed of α-HCH (60%~70%), β-HCH (5%~12%), γ-HCH (10%~12%), and δ-HCH (6%~10%), while lindane consists of γ-HCH and β-HCH were the main components of HCHs in suburban vegetables of Changchun, with a cumulative contribution rate of 91.52%, while the contribution rates of α-HCH and δ-HCH were only 6.36% and 2.12%, respectively (Figure 3). Yi et al. (2013) [34] found that the enrichment ability of different isomers of HCHs in the soil-plant system was in the descending order of α-> β-> δ-> γ-HCH. In the present study, the residual of γ-HCH was higher in vegetables, which might be attributed to long-term exposure to lindane-containing pesticides [34]. There are currently two forms of HCHs: technical HCHs and lindane. The technical HCHs are mainly composed of α-HCH (60%~70%), β-HCH (5%~12%), γ-HCH (10%~12%), and δ-HCH (6%~10%), while lindane consists of more than 99% γ-HCH [35]. Compared with α-HCH, γ-HCH is easier to degrade and transform, and its residual time in the environment is shorter. Therefore, the ratio of α-HCH/γ-HCH can be used to monitor the sources of HCHs. When W α-HCH /W γ-HCH < 1, it is mainly from the input of lindane, whereas when 3 < W α-HCH /W γ-HCH < 7, it is mainly from the input of technical HCHs. Moreover, when W α-HCH /W γ-HCH > 7 or 1 < W α-HCH /W γ-HCH < 3, it is derived from the historical use of lindane and has degraded to a certain extent. In addition, β-HCH is one of the most stable isomers of HCHs, which is not easy to degrade in the environment. Therefore, β-HCH is usually the most abundant isomer in the environment [32]. The ratio of W β-HCH /W (α + γ)-HCH can be used to indicate whether the sources of HCHs are historical residues or new inputs. The ratio of W β-HCH /W (α + γ)-HCH > 0.5 indicates a source of historical pollution [18]. According to Figure 4, the ratios of W α-HCH /W γ-HCH in all vegetables were less than 1, indicating that there were lindane inputs in the vegetable growing environment. 
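The source-diagnostic rules discussed above (the ratios Wp,p′-DDT/Wp,p′-DDE, Wo,p′-DDT/Wp,p′-DDT, Wα-HCH/Wγ-HCH, and Wβ-HCH/W(α+γ)-HCH) translate directly into a small decision helper. The sketch below simply encodes the thresholds quoted in the text; the per-sample concentrations used in the demo are hypothetical and purely illustrative.

def ddt_source(ppddt, ppdde, opddt):
    """Apply the DDT diagnostic ratios quoted in the text (concentrations in ug/kg)."""
    notes = []
    notes.append("new DDT input" if ppddt / ppdde > 1 else "historical DDT input")
    ratio = opddt / ppddt
    if ratio <= 0.2:
        notes.append("consistent with technical DDT")
    elif ratio >= 7:
        notes.append("consistent with dicofol")
    else:
        notes.append("mixed technical DDT / dicofol signature")
    return notes

def hch_source(a_hch, g_hch, b_hch):
    """Apply the HCH diagnostic ratios quoted in the text (concentrations in ug/kg)."""
    notes = []
    ag = a_hch / g_hch
    if ag < 1:
        notes.append("lindane input")
    elif 3 < ag < 7:
        notes.append("technical HCH input")
    else:
        notes.append("degraded / historical lindane")
    beta_ratio = b_hch / (a_hch + g_hch)
    notes.append("historical residue" if beta_ratio > 0.5 else "recent input")
    return notes

if __name__ == "__main__":
    # Hypothetical per-sample concentrations, for illustration only.
    print(ddt_source(ppddt=2.0, ppdde=0.8, opddt=3.5))
    print(hch_source(a_hch=0.4, g_hch=2.5, b_hch=1.0))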
The ratios of W β-HCH /W (α + γ)-HCH were within the range of 0.06-1.13, whereas the ratios of leafy vegetables (Chinese cabbage 0.21 and Welsh onion 0.38) were less than 0.5, indicating that HCHs in these two vegetables were mainly derived from the new application of lindane, while the wet and dry deposition of the atmosphere might be the main input route. The ratios of eggplant, pepper, cucumber, and radish were all greater than 0.5. It can be concluded that the HCHs in these four vegetables mainly came from the historical residues of lindane in the soil. Consumption Health Risk of OCPs in Vegetable The calculated EDI, THQ, and HI values of different OCPs for children and adults in Changchun are presented in Table 3. The average (avg.) EDI for adults and children were as follows: HCHs > DDTs > Chlordane > heptachlor, while max EDI for adults and children were in the decreasing order of DDTs > HCHs > Chlordane > heptachlor. The values of avg. EDI and max EDI for adults were higher than those for children.
The max EDI and avg. EDI values of OCPs were lower than the ADI values in national food safety standards in China, indicating that the current vegetable consumption posed a low health risk to the local inhabitants. However, it should be noted that the dietary structure of the population is diverse and complex, and some people may consume more vegetables than the average, such as vegetarians and people undergoing a fat-reducing period. Their EDI will be correspondingly higher. In addition, the ways of vegetable cleaning (detergent or not), eating (raw or cooked), and cooking (boiled or fried) will bring some uncertainties, leading to an overestimation or underestimation of dietary OCPs exposure [13]. Compared with other regions around the world (Table 4), the intakes of HCHs through the consumption of suburban vegetables of inhabitants in Changchun (0.022-0.029 µg·kg −1 ·d −1 ) were higher than those of most other regions except for Taizhou, China (0.137 µg·kg −1 ·d −1 ). In contrast, the intakes of DDTs (0.019-0.025 µg·kg −1· d −1 ) of the inhabitants in Changchun were higher than those of Dalian, China (0.003 µg·kg −1 ·d −1 ), Denmark (0.0037 µg·kg−1·d −1 ), and Punjab Province, Pakistan (0.019 µg·kg −1 ·d −1 ), but lower than those of Jakarta, Bogor, and Yogyakarta, Indonesia (0.040 µg·kg −1 ·d −1 ) and Taizhou, China (0.076 µg·kg −1 ·d −1 ). In general, the consumption of OCPs through suburban vegetables by residents in the study area was at a high level. The max THQs of HCHs, DDTs, Chlordane, and heptachlor for adults were 0.060, 0.038, 0.32, and 0.38, respectively, and those for children were 0.046, 0.029, 0.25, and 0.29, respectively (Table 3). For both adults and children, the max THQ of OCPs exhibited a descending order of heptachlor > Chlordane > HCHs > DDTs. The calculated max HI values for adults and children were 0.80 and 0.62, respectively. Heptachlor, Chlordane, HCHs, and DDTs contributed 47.70%, 40.16%, 7.46%, and 4.68% to the max HI for adults and children, respectively. The avg. HI values of adults and children were 0.043 and 0.036, respectively, and with the descending order of Chlordane > heptachlor > HCHs > DDTs. The contribution rates of HCHs, DDTs, chlordane, and heptachlor to avg. HI were 13.61%, 5.98%, 49.48%, and 30.93%, respectively. Therefore, Chlordane and heptachlor were the key compounds that led to potential health risks for residents. The results suggested that residents may not suffer significant adverse health effects based on the average level of OCPs. Although daily intake of OCPs through vegetables is an important route of dietary exposure for people, many studies have reported that humans can also be exposed to OCPs through other foods such as rice, vegetable oils, fruits, aquatic products, and meats [5,15,38,41,42]. Wang et al. (2021) [43] investigated the intake of OCPs through diet and related health risks among women of childbearing age in agricultural areas of northern China. Among the seven types of foods investigated, the intakes of OCPs through vegetables and fruits were significantly higher than those of other foods and accounted for 35.1% and 45.6% of the total intake of OCPs, respectively. Hao et al. (2014) [44] studied the DDTs' residual levels in edible fish in Guangzhou and the consumption risk and found that the daily exposure of inhabitants to DDTs from fish consumption was 23.0-1875.6 ng·d −1 , whereas the long-term consumption of croaker might have certain potential health risks. 
A study from Pakistan showed that OCP residues in cereals reached 15.2 µg·kg-1 (dry weight) and that the consumption of contaminated cereal crops could pose a health risk to the population in the study area [41]. Furthermore, the OCPs ingested through any single type of food may not cause health risks on their own, but combined exposure across multiple foods could still produce toxic effects. In addition, when assessing the health risks of OCPs, special groups such as the elderly, pregnant women, and infants should be considered. Wang (2021) [45] found that the accumulation of OCPs such as β-HCH, γ-HCH, and o,p′-DDD in the elderly showed a significant positive correlation with age, with accumulation levels gradually increasing as people grow older. In addition, the levels of β-HCH and o,p′-DDD in the blood of the elderly were positively associated with the risk of hypertension, hyperlipidemia, and diabetes, and dietary pathways accounted for more than 90% of the total exposure to OCPs. Furthermore, HCHs and DDTs are the most frequently detected POPs in breast milk [46,47]. β-HCH is also a risk factor for neonatal weight loss and is significantly associated with growth and development indicators such as neonatal length and head circumference [48]. Gyalpo et al. (2012) [49] studied DDT concentrations in human body fat over individuals' lifetimes. The results showed that the total concentration of DDTs in the human body was highest at the age of 2 (75 µg·g-1 lipid), 2.5 times the level at age 20. Meanwhile, the concentrations of DDTs in primiparous mothers were higher than those in multiparous mothers, which resulted in increased exposure to DDTs in their neonates. In addition, DDT in humans was mainly derived from the diet. Therefore, future policies to limit exposure to OCPs should consider vulnerable groups such as the elderly, pregnant women, and infants, as they are more sensitive to pollutants and may be more vulnerable than adolescents and adults.

Conclusions
In this study, the characteristics, sources, and health risks of residual OCPs in vegetables from the suburbs of Changchun (China) were studied. The residues of OCPs in leafy vegetables were found to be higher than those in root and fruit vegetables. The residues of OCPs in 9.81% of the vegetable samples exceeded the MRLs, and the proportion of samples exceeding the MRLs followed the descending order leafy vegetables > root vegetables > fruit vegetables. The proportions of OCPs exceeding the MRLs in the different vegetables decreased in the following order: Welsh onion > radish > Chinese cabbage > pepper > cucumber > eggplant > tomato. The source identification results showed that the DDTs in vegetables mainly came from a mixed source of technical DDTs and dicofol, of which dicofol was the predominant source. The HCHs in Chinese cabbage and Welsh onion mainly originated from new inputs of lindane, while the HCHs in eggplant, pepper, cucumber, and radish came from the historical residues of lindane in soil. The THQ values of OCPs were all less than 1, and the avg. HI values for adults and children were 0.043 and 0.036, respectively; the OCPs in vegetables therefore posed no obvious health risk to consumers. Although OCPs have been banned in China for decades, they still show high detection rates and residues in vegetables.
Therefore, regular inspection of agricultural products and supervision of production, sales, and use of pesticides should be strengthened.
Working from home, work-time control and mental health: Results from the Brazilian longitudinal study of adult health (ELSA-Brasil) This cross-sectional study investigated the association between work-time control (WTC), independently and in combination with hours worked (HW), and four mental health outcomes among 2,318 participants of the Longitudinal Study of Adult Health (ELSA-Brasil) who worked from home during the COVID-19 pandemic. WTC was assessed by the WTC Scale, and mental health outcomes included depression, anxiety, stress (measured by the Depression, Anxiety and Stress Scale, DASS-21), and self-rated mental health. Logistic regression models were used to determine odds ratios (ORs) and 95% confidence intervals (CIs). Among women, long HW were associated with stress (OR = 1.56; 95% CI = 1.11–2.20) and poor self-rated mental health (OR = 1.64; 95% CI = 1.13–2.38), whereas they were protective against anxiety among men (OR = 0.59; 95% CI = 0.37–0.93). In both sexes, weak WTC was associated with all mental health outcomes. Among women, the long HW/weak WTC combination was associated with all mental health outcomes, and short HW/weak WTC was associated with anxiety and stress. Among men, long HW/strong WTC was protective against depression and stress, while short HW/strong WTC and short HW/weak WTC was associated with all mental health outcomes. In both sexes, weak WTC, independently and in combination with HW, was associated with all mental health outcomes. WTC can improve working conditions, protect against mental distress, and fosterwork-life balance for those who work from home. Introduction In order to curb the spread of the disease caused by the 2019 coronavirus , prevent the collapse of health services, and reduce COVID-19 lethality, many countries introduced measures to restrict inter-person contact during the pandemic. These included guidance to stay at home, travel restrictions and closure of schools and non-essential services (Gostin and Wiley, 2020;Wenham et al., 2020;De Sio et al., 2021). These measures led to unprecedented social distancing, with impacts on work organization and workers' lives (Biroli et al., 2021). In several occupations, working from home became more frequent, necessary, and even mandatory. Workers had to try to perform their duties while also dealing with home-making, child-rearing, and various sources of distraction; often in very adverse ergonomic conditions and without any in-person interaction with co-workers (Arntz et al., 2020;Majumdar et al., 2020;Amano et al., 2021;Kniffin et al., 2021;Şentürk et al., 2021;Xiao et al., 2021). Several studies have shown the adverse effects of these working arrangements on workers' mental health during the COVID pandemic (Majumdar et al., 2020;Oakman et al., 2020;Biroli et al., 2021;Toniolo-Barrios and Pitt, 2021;Sentürk et al., 2021;Xiao et al., 2021). Working from home has been associated with psychosocial stress, social isolation, sleep disorders, concentration deficit, and screen fatigue from long working hours (Tavares, 2017;Majumdar et al., 2020;Buomprisco et al., 2021;Xiao et al., 2021). Long working hours have also been shown to have a significantly negative impact on worker's psychological health (Virtanen et al., 2012;Watanabe et al., 2016;Li et al., 2019;Park et al., 2020), and have been identified as one of the pathways linking working from home and mental health (Choi et al., 2021;Rugulies et al., 2021;Toniolo-Barrios and Pitt, 2021;Şentürk et al., 2021). 
Indeed, those who work from home tend to work longer hours and spend more time on their cell phone and desktop/laptop than office workers (Nijp et al., 2016;Tavares, 2017;Majumdar et al., 2020). Working long hours can impact family activities and personal goals, foster imbalance between personal life and work (Żołnierczyk-Zreda et al., 2012), interfere with health-related behavior (Bannai and Tamakoshi, 2014;Virtanen et al., 2015) and reduce the time one has available for self-care (Soek et al., 2016). It appears that the challenge of transitioning to working from home is not the same for men and women. According to several studies (Arntz et al., 2020;Barbieri et al., 2021;Biroli et al., 2021;Sato et al., 2021;Şentürk et al., 2021;Xiao et al., 2021;Matthews et al., 2022), women's mental health may be more severely impacted by this transition than men's because of their greater involvement in household and caregiving tasks, which can cause work interruptions and concentration difficulties. Studies prior to the pandemic indicated that giving workers control over their work schedules could attenuate the adverse mental health effects of working long hours (Ala-Mursula, 2002;Nijp et al., 2012;Żołnierczyk-Zreda et al., 2012). This kind of control allows workers to determine the length of their work day, when to start and finish their work, to take breaks and deal with private matters during work time, and to have the autonomy to schedule holidays and other kinds of leave (Nijp et al., 2012, 2015). It is based on workers' needs rather than employers' needs, and its positive effects stem from workers' ability to balance their time and resources in order to better deal with the demands of work and home simultaneously (Żołnierczyk-Zreda et al., 2012;Leineweber et al., 2016;Virtanen et al., 2021). As far as we were able to assess, no previous studies have explored the influence of control over working hours on mental health among individuals who worked from home during the COVID-19 pandemic. Furthermore, the literature has explored the effect of work control (Ala-Mursula, 2002;Albrecht et al., 2017;Li et al., 2019;Şentürk et al., 2021) and of long working hours from home (Park et al., 2020;Choi et al., 2021;Rugulies et al., 2021) on mental health separately. In addition, few articles have studied gender differences in vulnerability to the potential mental health effects of working from home (Matthews et al., 2022). In this article, we investigated gender differences in a comprehensive range of mental health outcomes, including depression (a mood disorder that involves a low mood and a loss of interest in activities), anxiety (a reaction to stress, with feelings of worry, nervousness, irritability or unease) and stress (a feeling of emotional or physical tension, caused by any event or thought that makes people feel worried, angry or nervous) (Sinclair et al., 2012;Vignola and Tucci, 2014;Bottesi et al., 2015;Camacho et al., 2016;Yıldırım et al., 2018;Martins et al., 2019). Beyond the experience of symptoms of depression, anxiety, and stress, overall self-rated measures can capture how people perceive their own mental health, rather than focusing on mental illness (Levinson and Kaplan, 2014), and can help improve screening and treatment interventions (McAlpine et al., 2018).
Therefore, this study investigated the association between work-time control (WTC), independently and in combination with hours worked (HW), and depression, anxiety, stress and self-rated mental health among men and women who worked from home during the COVID-19 pandemic, highlighting sex differences.

Sample and procedure
This cross-sectional study was carried out between July 2020 and February 2021, and used data from a supplementary study of the Longitudinal Study of Adult Health (Estudo Longitudinal de Saúde do Adulto, ELSA-Brasil) to assess the short- and long-term impacts of COVID-19 and of COVID-19 mitigation strategies. Within the framework of this supplementary study, 5,639 civil servants who were active in or retired from teaching positions and research institutions in five of Brazil's state capitals (Belo Horizonte, Porto Alegre, Rio de Janeiro, Salvador, and Vitória) were invited to respond to questionnaires by mobile phone or computer. Invitees completed the questionnaires using an application that was produced especially for the study with the assistance of a trained, certified team. Only those who were active civil servants and responded to questions on working from home were eligible for inclusion (n = 3,043, 54%). We then excluded 725 individuals who did not engage in telework; thus the final analytical sample comprised 2,318 participants (1,155 men and 1,163 women).

Measures
Hours worked from home
Hours worked (HW) was determined based on the question: "On average, how many hours do you spend on work at home, not counting housework?" Responses were categorized as "short HW" (< 12 h/week) and "long HW" (≥ 12 h/week), based on the median cut-off point for men and women.

Work-time control
Access to WTC was measured on the WTC access scale proposed by Nijp et al. (2015, 2016), who defined access to WTC as the possibility of deciding when to work. Three bilingual, epidemiological researchers with long experience in the psychometric adaptation of scales translated the WTC access scale from English to Portuguese. The instrument comprises six items, with responses given on a 5-point Likert scale (1 = never to 5 = always). The dimensional validity of the WTC scale was assessed by exploratory and confirmatory factor analyses (EFA and CFA). In the EFA, the criteria used for the number of factors to be extracted were an eigenvalue greater than one and the factor structure fit, considering item loading and number of items per factor. Items with loading > 0.40 and no cross-loading (> 0.40 loading on more than one factor and < 0.20 difference between loadings), using the geomin oblique rotation, were considered appropriate. The CFA, based on the model originally proposed by the authors of the WTC scale (Nijp et al., 2015, 2016) and the results of the EFA, was then performed using the robust weighted least squares mean and variance adjusted (WLSMV) estimator. That analysis applied WLSMV, which uses polychoric correlation matrices, as appropriate for categorical or ordinal variables. Model fit was assessed using the following criteria of proper fit: two incremental fit indices, the Comparative Fit Index (CFI) and the Tucker-Lewis Index (TLI), both > 0.90, and one parsimonious fit index, RMSEA (< 0.06 preferable, but up to 0.08 acceptable) (Hu and Bentler, 1999;Hair et al., 2009). Convergent validity was assessed by average variance extracted (AVE) and internal consistency by composite reliability (CR), with AVE ≥ 0.50 and CR ≥ 0.60 considered acceptable (Hair et al., 2009).
Cronbach's alpha was also calculated in order to permit international comparisons. All analyses in this stage were performed using the Mplus statistical package, version 7.1 (Muthén and Muthén, 1998-2012). After assessing the WTC scale's suitability by factor analysis, the sums of the scale items were calculated and median scores (women = 20; men = 21) were used to classify WTC as strong or weak. Table 1 shows the items in English and in their Brazilian Portuguese translations, as well as the WTC scale's final psychometric performance. The EFA revealed no cross-loading, and all items returned loadings of ≥ 0.40 (data not shown). The CFA showed satisfactory performance for all study indicators (CFI = 0.997, TLI = 0.998, RMSEA = 0.084), after inclusion of three residual correlations between pairs of items (1 and 2, 3 and 2, and 3 and 4). AVE and CR were 0.687 and 0.928, respectively (Table 1). This performance warranted proceeding with the analyses using the WTC scale and its association with the investigated mental health outcomes, both independently and in combination with HW.

Combined variable hours worked/work-time control
The combined variable HW/WTC was categorized into four groups: short HW/strong WTC (reference category), long HW/strong WTC, short HW/weak WTC, and long HW/weak WTC.

Self-rated mental health
Self-rated mental health was assessed by the question: "Generally speaking, in comparison with people of your age, how do you regard your mental state of health?" Responses were grouped into "good" (very good/good) and "poor" (regular/poor/very poor). Self-rated mental health has been previously validated (Mawani and Gilmour, 2010) and is an important predictor of health outcomes and wellbeing (Ahmad et al., 2014;Levinson and Kaplan, 2014;McAlpine et al., 2018).

Covariables
Age (continuous in the multiple analysis and categorized as < 54 years, 55-64 years, ≥ 65 years in the bivariate analysis), self-reported race/color (black, white, brown, other), marital status (with partner, without partner), per capita income (continuous), schooling (masters/doctorate, undergraduate/higher diploma, up to complete upper secondary), caregiver for children, sick and/or elderly people (no, yes) and time spent on housework in hours/week (continuous) were included in the analyses as covariables.

Statistical analysis
In describing the sample, categorical variables were expressed as frequencies, and continuous variables as means and standard deviations (SDs). Associations were tested using the Pearson chi-square test for categorical variables and the t-test for continuous variables. Odds ratios (ORs) and their respective 95% confidence intervals (CIs) were estimated using logistic regression models, adjusted for covariables that showed associations in the bivariate analyses. Crude models included only the exposure and outcomes. Adjusted models included age, self-reported race/color, marital status, per capita income, and time spent on housework in hours/week. Interactions between sex and HW, WTC and HW/WTC were considered statistically significant at a p-value < 0.05. The descriptive analyses and analyses of association were stratified by sex and performed using the program R, version 3.6.1 (R Core Team, 2017).
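A minimal sketch of this modeling strategy is shown below, in Python rather than the R workflow the authors actually used. The data frame, variable names, and covariate coding are hypothetical; the sketch only illustrates how the median-split combined HW/WTC exposure can be built and how adjusted odds ratios with 95% CIs are obtained from a logistic model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# df is a hypothetical data frame with one row per participant
# hw_hours, wtc_score: continuous; anxiety: 0/1 outcome; covariates as in the text
df = pd.DataFrame({
    "hw_hours": np.random.gamma(2, 10, 500),
    "wtc_score": np.random.randint(6, 31, 500),
    "anxiety": np.random.binomial(1, 0.3, 500),
    "age": np.random.normal(55, 7, 500),
    "income": np.random.lognormal(8, 0.5, 500),
    "housework_h": np.random.gamma(2, 5, 500),
})

# Median splits, then the four-category combined exposure (reference: short HW / strong WTC)
df["long_hw"] = (df["hw_hours"] >= df["hw_hours"].median()).astype(int)
df["weak_wtc"] = (df["wtc_score"] <= df["wtc_score"].median()).astype(int)
df["hw_wtc"] = pd.Categorical(
    df["long_hw"].map({0: "shortHW", 1: "longHW"}) + "_" +
    df["weak_wtc"].map({0: "strongWTC", 1: "weakWTC"}),
    categories=["shortHW_strongWTC", "longHW_strongWTC", "shortHW_weakWTC", "longHW_weakWTC"])

# Adjusted logistic model; exponentiated coefficients give ORs with 95% CIs
model = smf.logit("anxiety ~ C(hw_wtc) + age + income + housework_h", data=df).fit(disp=False)
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```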
Results
Mean age in the study sample was 55 ± 7.4 years, and about half were women. The women were younger than the men, had less schooling (a smaller percentage held masters/doctoral degrees), reported lower income and more frequently reported caregiving responsibilities for children and the sick and/or elderly. However, men more frequently reported having a partner. Mean HW (20.5 ± 17.7 for women and 20.6 ± 17.9 for men), as well as the frequencies of the different HW/WTC groups, were similar in both sexes. On the other hand, women reported spending more time on housework than men and displayed higher prevalences of all mental health outcomes (Table 2).

Discussion
This study examined sex differences in the association between WTC, independently and in combination with HW, and depression, anxiety, stress, and self-rated mental health among individuals who worked from home during the COVID-19 pandemic. Moreover, it is one of the first studies to provide evidence of the modifying effect of the combination of WTC and HW on mental health outcomes in this group of workers. The study showed that, among people working from home, long HW (i.e., above median hours) increased the odds of stress and of poor self-rated mental health among women, while among men, the direction of the association indicated protection against anxiety. Weak WTC was associated with all mental health outcomes in both sexes, independently and in combination with short and long HW. The high prevalence of depression, anxiety and stress observed in the present study, especially among women, is in agreement with other studies carried out during the pandemic (Şentürk et al., 2021;Wang et al., 2021;Andersen et al., 2022). Men's and women's HW were similar, but medians were lower than those reported in other studies, in which men were shown to have a longer work day (Li et al., 2019;Park et al., 2020;Choi et al., 2021;Rugulies et al., 2021); these include previous studies conducted as part of ELSA-Brasil. Also, the sex differences in the associations between HW and the mental health outcomes showed a pattern of interaction, with long HW being associated with higher odds of mental health outcomes among women, but conferring a protective effect among men. This pattern was also observed in a subgroup analysis (Choi et al., 2021) investigating the association between working 41-52 h/week and mild depression and moderate to severe depression, although the estimates for men were not statistically significant; the same study showed an association between long work weeks and the risk of stress, depression and suicidal ideation, with more prominent associations observed among women and low-wage workers (Choi et al., 2021). A recent review (Rugulies et al., 2021) concluded that there is still insufficient evidence on the association between HW and mental health outcomes, and underlined the need for further studies. However, most studies on this topic address the situation prior to the pandemic and do not take into account work-from-home factors in the context of the COVID-19 pandemic. It is possible that HW during the pandemic, even if shorter than what would have been routine before the pandemic, are still reflected in women's mental health. Indeed, the pandemic had a strong impact on family life (Arntz et al., 2020;Biroli et al., 2021) and imposed new routines that placed different demands on women.
For example, our findings showed that a higher proportion of women had no partner, had to care for children or a sick family member and had a higher average number of hours of housework. A study of families in the United States, Britain and Italy (Biroli et al., 2021) showed that women reported spending more time on household chores during than before the COVID-19 pandemic, and that men reported taking greater part in caring for children (especially in play time) and in grocery shopping. It is therefore possible that women find it more difficult to work from home, because they tend to spend more time on a variety of household and caregiving tasks, and thus may face more frequent interruptions at work, difficulty concentrating and lack of support in housework (Arntz et al., 2020;Biroli et al., 2021;Şentürk et al., 2021;Xiao et al., 2021). A qualitative study in Turkey even found that telework can detach women from professional work, expose them to more precarious labor conditions and consolidate their roles as traditional housewives (Çoban, 2021). Barbieri et al. (2021) found a slightly greater effect of workload on women than on the overall sample. The authors emphasized the view that women's professional life is considered complementary, whereas the domestic avenue is given higher priority. In line with the present study, others (Sato et al., 2021;Şentürk et al., 2021;Xiao et al., 2021) have indicated that women working from home were more likely to suffer worse mental health outcomes than men. These findings suggest that working from home can heighten gender inequalities in various dimensions. The importance of WTC and its association with mental health outcomes differed slightly between the sexes. In both men and women, weak WTC was associated with anxiety and stress, even when combined with short HW. However, stronger associations were observed when long HW and weak WTC were present simultaneously. WTC was found to be protective among men, even when combined with long HW, thus reinforcing the importance of WTC. Previous studies have also pointed to the importance of WTC, especially when working from home (Li et al., 2019, as cited in de Wind et al., 2021), saying that WTC fosters balance between family life and work, motivation to work, physical and mental health, and reduces fatigue (Nijp et al., 2012, 2015, 2016; Albrecht et al., 2017, 2020). Other studies (Ala-Mursula, 2002;Nijp et al., 2012;Żołnierczyk-Zreda et al., 2012;Leineweber et al., 2016) have found that, although WTC is universally beneficial, workers with more family responsibilities (especially women with small children) and those with more need to recover after working (older workers) are more likely to derive greater benefit from increased WTC, even when working longer hours. Working from home, an arrangement that is increasingly common worldwide, can be beneficial to both workers and employers, especially workers who live in large cities, as it eliminates commuting time, and workers with motor deficiencies (Tavares, 2017;Majumdar et al., 2020;Buomprisco et al., 2021;Conroy et al., 2021;Xiao et al., 2021). However, it can increase the costs workers incur to create the proper infrastructure for work and deprive workers of interpersonal relations with fellow workers. In the context of the pandemic, the professional isolation produced by social distancing contributed to burnout and stress (Jamal et al., 2021).
Some groups, especially women with small children, may find their workload increased by the combination of housework and job demands, which can produce stress and impair work performance (Ala-Mursula, 2002;Nijp et al., 2012;Żołnierczyk-Zreda et al., 2012;Leineweber et al., 2016). Also, it is important to emphasize that working from home during the pandemic differs from conventional telework in that, when it is mandatory to work from home full-time, some of the advantages of conventional telework, such as flexibility in working hours, in the workplace and in the respect for workers' preferences in general, may not be maintained (Barbieri et al., 2021). Moreover, the extensive use of information and communication technologies (ICT) when working from home can result in "techno-overload" (Ragu-Nathan et al., 2008). This overload is considered a techno-stressor which, when combined with stressful situations, contributes to a longer and faster-paced working day, possibly involving handling large amounts of information, leading to fatigue, memory difficulties and loss of WTC. Also, some workers may have difficulty dealing with all the skills and know-how relating to new ICT updates, which in turn can cause greater pressure and tension (Ingusci et al., 2021). The main strengths of the present study lie in the adaptation of the WTC scale to Brazilian Portuguese, with psychometric properties appropriate to a relatively large, wide-ranging sample, comprising personnel drawn from a variety of professions at different universities. Another strength is the use of the variables HW and WTC independently and in combination. Moreover, in addition to the use of three different mental health outcomes assessed by the DASS-21 (depression, anxiety, and stress), we used self-rated mental health as an outcome. As many mental health conditions remain undiagnosed, self-rated assessments provide a useful and perhaps more revealing indicator of mental wellbeing (Levinson and Kaplan, 2014;McAlpine et al., 2018), and may capture a more comprehensive understanding of mental health in the general population (Mawani and Gilmour, 2010;Nguyen et al., 2015;Romac et al., 2022). Self-rated mental health is also qualitatively different from mental illness, as it goes beyond the experience of symptoms (Levinson and Kaplan, 2014), and it is related to health care expenditure (Nguyen et al., 2015). Limitations of the study include the lack of representativeness in the sample, which was obtained by voluntary completion of questionnaires via application software. That strategy, which was quite common during the pandemic, may have led to self-selection bias, particularly among participants with more schooling, as observed in this and other studies (Amano et al., 2021;Barbieri et al., 2021). However, it is important to emphasize that this was expected, given that working from home has historically been the privilege of those in better socioeconomic positions (Wang et al., 2021). Another limitation is the cross-sectional nature of the analyses, in that exposure and outcome were obtained at the same time, thus the possibility of reverse causality cannot be ruled out, and participants in mental distress may have perceived weaker WTC and longer HW. However, some authors have already demonstrated that the direction of the effect is predominantly from WTC to the subsequent mental health outcomes (Albrecht et al., 2017, 2020).
The occupations of the participants were not included in the study, and this variable may influence HW. Another limitation is that the long-term effects of social distancing measures on telework have not been evaluated, and the analyses presented here covered early stages of the pandemic. One of the major challenges posed by the COVID-19 pandemic has been the sudden change in how people and their families live, work, study, and carry out their daily routines. Working from home has gone from being an occasional activity to a permanent, constant feature of the domestic environment. In addition, this work is being done full-time, rather than part-time or occasionally, and can have harmful effects on wellbeing and stress levels. Understanding how telework is organized and its impacts on mental health, especially in a new context such as that presented during the pandemic, will make it possible to develop strategies and public policies to protect workers' health. The findings of this study point to the importance of strengthening WTC, a strategy that should be pursued widely among teleworkers. This study can inform recommendations to managers and workers in connection with telework, an increasingly common arrangement in the service sector, particularly with regard to the introduction of rest periods, the promotion of leisure activities and respect for the working hours that are best suited to each individual's demands and productivity. Future studies should include the role of housework and the constitution of families (number of members, type of household, etc.) among the examined associations. These variables can complement the study of the effects of working from home on workers' mental health.

Data availability statement
The datasets presented in this article are not readily available because the ELSA study has government funding and the database is available only to researchers and students of the research institutions linked to the study. Requests to access the datasets should be directed to elsa@fiocruz.br.

Ethics statement
The studies involving human participants were reviewed and approved by the research ethics committees of all five research centers (Federal University of Minas Gerais, Federal University of Rio Grande do Sul, Federal University of Espírito Santo, Federal University of Bahia, and Oswaldo Cruz Foundation). The participants provided their written informed consent to participate in this study.

Author contributions
RG coordinated the study design, wrote the manuscript, and had primary responsibility for the final content. MA, SB, BD, LG, JM, MM, MS, and MF coordinated the study design, participated in data interpretation, contributed intellectual content to the manuscript, and participated in the final review of the manuscript. AB, AM, and AP participated in data interpretation, contributed intellectual content to the manuscript, and participated in the final review of the manuscript. All authors have read and approved the final version of the manuscript.
Assessment and GIS Mapping of Soil Quality Indicators of Agroecological Unit 9 of Kerala, India Context: The agroecological unit 9 (AEU 9) in Alappuzha district of Kerala represents the south central laterites. The soils are acidic, gravelly, having low activity clay, often underlain by plinthite with low water and nutrient retention capacity. Assessment of soil quality indicators and mapping of resources and soil fertility status is essential for planning and development activities. Aims: Soil quality assessment was made by collecting observations on physical, chemical and biological indicators, soil quality index was computed and generated maps using GIS. INTRODUCTION "Soil is the most important natural resource which would support life on this planet through supply of essential nutrients and act as a medium for plant growth. Soil quality, like air or water quality, has an impact on the environment's health and production. Soil quality, often known as soil health, refers to a soil's ability to function within natural or managed ecosystem bounds, to sustain plant and animal productivity, to maintain or improve water and air quality, and to support human health and habitation" [1]. Soil quality declines due to depletion of soil organic matter, nutrient losses from runoff and leaching, desertification, accumulation of toxic substances, excessive use of chemical fertilizers and pesticide, crusting, compaction, improper waste disposal etc. The agroecological unit is a land unit delineated based on climate variability, landform and soils and/or land cover and having a specific range of potentials and constraints for land use. Assessing soil quality indicators at agroecological unit level by characterization and mapping of existing site specific information would provide precise and scientific catalogue of soils, nature of soil and distribution so that prediction could be made about characters and land potentialities. Soil quality assessment includes a variety of sensitive physical, chemical, and biological characteristics that represent the soil's current functioning status. "Soil quality evaluation gives an opportunity to redesign land and soil management systems for improved agricultural productivity by providing a framework for assessing the sustainability of various land use regimes. In agriculture, technologies such as remote sensing (RS), geographic information systems (GIS), and global positioning systems aid in the collection of data on agricultural operations, such as landuse/land-cover, weather conditions, soil conditions and other factors that are critical for site characterization and help in determining soil quality and land suitability for farming" [2]. "Arriving at proper soil quality index (SQI) can help to determine the degraded soil properties and help in proper interpretation of soil resources for growing crops, apart from developing fertilizer recommendations" [3]. "The south central laterite, agroecological unit 9 is delineated to represent mid land laterite terrain with typical laterite soils, which are strongly acidic. Lateritic clay soils herein are gravelly and often underlain by Plinthite with low water and nutrient retention capacity. The lowlands have strongly acid, low activity, non-gravelly clay soils with impeded drainage conditions. Mono cropped rubber and coconut intercropped to a variety of annual and perennial crops is the major land use on uplands and rice, tapioca, banana and vegetables on lowlands" [4]. 
These soils are very strongly acid to slightly acid with an overall pH ranging from 4.5 to 5.5, poor in N, P and K, low in bases and also deficient in calcium, magnesium and boron. Therefore, a sustainable management system for improving the fertility and productivity of these soils needs to be developed. Plant nutrition needs attention, and location- and crop-specific management practices should be recommended. In this context, the present study was carried out with the objective of assessing soil physical, chemical and biological parameters to develop a suitable SQI for evaluating soils and improving crop production.

Study Area
A study was conducted in agroecological unit 9 (south central laterites) in Alappuzha district of Kerala to assess the soil quality indicators, to work out the SQI and to generate thematic maps using GIS. "The study area lies between 9° 23' 38.28'' and 9°33'63.71'' N latitude, 76°57'88.39'' and 76°65'02.00'' E longitude, which spread over the eastern part of Chengannur block which includes Mulakkuzha, Ala, Cheriyanad and Venmony Panchayaths and Chengannur municipality. It extends over 8058 ha (5.71%) of total area of the district. The south central laterites (AEU 9) represents midland laterite terrain with typical laterite soils and short dry period. The climate is tropical humid monsoon type with mean annual temperature of 26.5 ºC and rainfall of 2,827 mm" [5].

Survey and Collection of Soil Samples
A survey was conducted in the study area to identify locations for the collection of soil samples. Georeferenced surface (0-20 cm) soil samples were collected from seventy-five sites (Fig. 1) in Mulakuzha, Ala, Cheriyanad and Venmony panchayats and Chengannur municipality of AEU 9. With the help of GPS, the geographical coordinates of each sample site were recorded and used for GIS mapping. The soil samples were shade dried, powdered with a wooden pestle and mortar, sieved through a 2 mm sieve and stored in labeled plastic containers for analysis.

Characterization of Soil
Soil samples collected from AEU 9 were characterized for physical, chemical and biological indicators of soil quality using standard procedures. Soil texture was analyzed by the Bouyoucos hydrometer method, bulk density and water holding capacity by the core method, water stable aggregates by Yoder's method, pH (soil:water ratio of 1:2.5) using a pH meter, organic carbon by the wet oxidation method, available nitrogen by the alkaline permanganate method, available phosphorus by the colorimetric method, available potassium by neutral normal ammonium acetate extraction followed by flame photometry, available calcium and magnesium by the versenate titration method, available boron by the azomethine-H reagent method, and available sulphur by CaCl2 extraction followed by spectroscopy. Dehydrogenase activity, the biological indicator, was determined by a colorimetric method.

Principal Component Analysis for Assessment of Soil Quality
To assess the SQI, 22 soil parameters were considered and tested for significance based on PC analysis as described by [6], using SPSS software. PCs with eigenvalues greater than one [7] that explained more than 5% of the variation in the data were retained, and mainly variables with factor loadings of magnitude greater than 0.70 were considered. Within each PC, only highly weighted factors having absolute loading values of more than 0.60 were considered for the minimum dataset (MDS). The variables qualifying under this series of steps were termed 'key indicators' and considered for deriving the SQI after suitable transformation and scoring.
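The indicator-selection and index-calculation workflow described here, together with the scoring and weighting steps detailed in the next paragraphs, can be sketched roughly as follows. This is an illustrative outline only: the data frame, the "more is better" scoring assumption, and the handling of retained components are placeholders rather than the study's actual data or scoring rules.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# soil: hypothetical table of observations x indicators (the study used 22 indicators)
rng = np.random.default_rng(1)
soil = pd.DataFrame(rng.random((75, 6)),
                    columns=["clay", "avail_P", "avail_N", "exch_Ca", "bulk_density", "pH"])

# 1. PCA on standardized indicators; retain PCs with eigenvalue > 1
z = StandardScaler().fit_transform(soil)
pca = PCA().fit(z)
keep = pca.explained_variance_ > 1
if not keep.any():          # guard for the synthetic example: keep at least one PC
    keep[0] = True
loadings = pd.DataFrame(pca.components_[keep].T, index=soil.columns)

# 2. Minimum data set: the most highly loaded indicator on each retained PC
mds = {loadings[pc].abs().idxmax() for pc in loadings.columns}

# 3. Linear scoring (assuming "more is better" for every MDS indicator; in practice
#    the direction depends on the indicator, e.g. bulk density is usually "less is better")
scores = soil[list(mds)].apply(lambda c: c / c.max())

# 4. Weights = share of variance explained by the PC that contributed each indicator
var_share = pca.explained_variance_ratio_[keep] / pca.explained_variance_ratio_[keep].sum()
weights = {loadings[pc].abs().idxmax(): var_share[i] for i, pc in enumerate(loadings.columns)}

# 5. Weighted additive index SQI = sum(Wi * Si), then relative SQI on a 0-1 scale
sqi = sum(weights[c] * scores[c] for c in scores.columns)
rsqi = sqi / sqi.max()
print(rsqi.describe())
```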
All observations of each identified key MDS indicator were transformed using the linear scoring technique. To assign scores, indicators were ordered depending on whether a higher value was considered "good" or "bad" in terms of soil function. After transformation using linear scoring, the MDS indicators for each observation were weighted using the PC analysis results. Each PC explained a certain amount of variation (%) in the total data set. This variation, when divided by the total variation explained by all PCs with eigenvalues greater than 1, gave the weighting factors for the indicators chosen under a given PC. After performing these steps, to obtain the SQI, the weighted MDS indicator scores for each observation were summed using the function SQI = Σ (Wi × Si). In this relation, Si is the score for the subscripted variable and Wi is the weighting factor obtained from the PC analysis. The assumption is that higher index scores indicate better soil quality or greater performance of soil function. For better understanding and relative comparison, SQI values were reduced to a scale of 0-1 by dividing them by the highest SQI value. The numerical values thus obtained reflect the relative performance of soils and hence were termed the 'Relative soil quality index' (RSQI). Mean scoring values of the MDS indicators were then expressed as percentages to show their respective contributions to the SQI. The RSQI of each sampling location was classified as poor (RSQI < 50%), medium (RSQI 50%-70%) or good (RSQI > 70%) [8].

Generation of Maps Using Geographic Information System
GIS-based thematic maps of the soil quality indicators and index were prepared to depict the spatial variability using ArcGIS 10.5.1 software following the Inverse Distance Weighting (IDW) method, a spatial analyst tool in the ArcGIS software. The soil analysis data were loaded into MS Excel, converted to a CSV (comma-delimited) file, and then imported into the ArcGIS mapping software. The mapping software also imported a shape file containing the boundaries of the sampled area. From the spatial analyst tool, IDW was chosen. In the IDW dialogue box, longitude, latitude, and soil attribute values were selected as x, y, and z, respectively, and the processing extent was set to the boundaries of the sampled area. The data were interpolated once the number of sampling points was entered. The resultant map for each parameter was manually categorized using conventional ratings, with distinct colours assigned to each class.

RESULTS AND DISCUSSION
The efficiency of soils in supplying nutrients for crop growth, together with the maintenance of soil physical conditions to optimize yield, is one of the important components of soil fertility or quality that determine the productivity of an agricultural system. Hence, the results for the physical, chemical and biological indicators of soil quality based on 75 surface samples taken from AEU 9 are described, the SQI is calculated and GIS maps are prepared. The lowest bulk density was observed in soils of Chengannur, where sediment deposits of clay were noticed, resulting in higher organic carbon content (2.41%) and clay content (53.2%), while high bulk density was observed in soils with more sand content. Low bulk density might be due to the influence of organic matter content, which improved the aggregation of soil particles. Porosity varied from 44.1 to 78.1 percent in the study area. The high organic carbon content (1.42%) observed in these soils might have favoured soil aggregation and enhanced soil porosity [9].
Clay content varied between 5.62 and 80.20%, silt between 11.10 and 57.80% and sand between 9.91 and 79.90%. Sandy loam was the predominant textural class, observed in 58.6% of soils in AEU 9 of Alappuzha district (Fig. 2).

Physical attributes of soil quality
Water holding capacity was found to be the highest in Chengannur soils, where high clay content was observed and the soil texture was silty clay. Soil aggregation and aggregate stability are among the most important soil quality indicators affected by texture and organic matter. Water stable aggregates in soil varied between 37.3 and 70.6%. Water stable aggregates are high in soils rich in organic carbon and clay content. This is attributed to the stabilization of aggregates through the binding action of increased clay and organic carbon content in soils [10].

Chemical attributes of soil quality
The present investigation (Fig. 3) revealed that soil pH varied from 4.10 to 6.90 with a mean value of 5.02. Earlier work [11] indicated that the soils of Kerala are mostly laterites and basically acidic in reaction. The majority of soils (90.3%) were in the extremely acidic to strongly acidic category. Leaching of basic cations from the soil might have led to increased acidity. Soil acidity was observed to be lower in areas with sediment deposits, where concentrations of basic cations like K and Ca were observed to be higher. The available nitrogen content of the soil varied between 100 and 627 kg ha-1 with a mean value of 197 kg ha-1 (Table 2). Available nitrogen was low in 89.3% of the soils and only 10.7% were in the medium range. Available nitrogen was found to be medium in some areas of Mulakuzha, Ala and Cheriyanad panchayats and low in other areas. Even with the medium to high organic carbon status of the soils under study, the low available nitrogen observed may be attributed to the low mineralization of organic matter in the extremely acidic environment, leaching of nitrate nitrogen due to the heavy rainfall experienced in the region, and losses of nitrogen under anaerobic conditions through nitrate reduction and denitrification [12]. The available phosphorus content of the soil varied from 8.32 to 47.8 kg ha-1 and was found to be medium in 60% of the soils and high in 26.7%, indicating a buildup of phosphorus in soil from high levels of phosphorus fertilization and through deposition of phosphates from the sea water. Deficient levels of phosphorus (<10 kg ha-1) were observed in 13.3% of samples. Increased soil acidity adversely affects phosphorus availability, and the presence of clay and organic matter deposition in soil contributes to phosphate sorption and a reduction in phosphorus availability. The available K content in soil ranged between 100 and 492 kg ha-1. The majority (53.1%) of the soils were medium in available K, 44.6% were high and 2.3% low. Low activity clays such as kaolinite and iron and aluminium oxides and hydroxides are predominant in laterite soils. Hence it may be inferred that the low activity clay minerals in these soils were efficient in holding the exchangeable potassium to a considerable extent, which might have contributed to the increased availability of potassium [13]. The deficiency of calcium and magnesium is severe in the soils of the study area. Sixty-four percent of samples were deficient in plant available calcium (<300 mg kg-1) and all the samples were deficient in plant available magnesium (<120 mg kg-1).
The high rainfall in this region facilitates the leaching of the basic cations calcium and magnesium from the soil, resulting in their concentrations falling below the sufficiency range, as also reported by [14]. The available sulphur content in soil varied between 3.02 and 27.5 mg kg-1 and was found to be adequate in 93.3% of soils. The higher levels of available sulphur might be due to the accumulation of organic matter and sediments in these soils. Available boron in soil ranged between 0.01 and 0.41 mg kg-1. Available B was deficient in all the soils of AEU 9. This can be attributed to the higher mobility of boron in soils and to leaching losses, which led to B deficiency in these soils. High intensity rainfall leads to the loss of soluble forms of boron by leaching [15].

Biological attributes of soil quality
Organic carbon content ranged between 0.51 and 2.62% with a mean value of 1.42%. The majority (61.3%) of the soils have medium organic carbon status, followed by 38.7% of soils with high status (Fig. 4). Organic carbon was high in most areas of Chengannur and Mulakuzha and medium in the other panchayats. This can be attributed to the deposition of sediments rich in organic matter, in agreement with the findings of [16].
Fig. 5. Frequency distribution of dehydrogenase activity in AEU 9.

Soil Quality Index
The soils of the total cultivable areas of AEU 9 were assessed for soil quality, for which PC analysis was performed on 22 variables. In the PC analysis, about 67.2% of the variance in the soil physical, chemical and biological parameters was explained by 7 PCs with eigenvalues greater than 1 (Table 3). The eigenvalues ranged from 1.048 (PC7) to 3.987 (PC1), with explained variance in the range of 5.8% (PC7) to 22.1% (PC1). Clay content was the highly weighted variable loaded on PC1, followed by available P and K, available N, exchangeable Ca, bulk density and available S, bulk density, and soil pH as the highly loaded variables on PC2, PC3, PC4, PC5, PC6 and PC7, respectively (Table 4). To formulate the soil quality index, the parameters in the MDS were assigned appropriate weights and each class was given a suitable score [18]. Scoring was done following the method suggested by [8] and [19], with slight modifications based on soil fertility ratings for Kerala soils. Available N and P were assigned the highest weightage of 20 each, followed by bulk density, texture, pH, available K, Ca and S with a weightage of 10 each, and each parameter was categorized into four classes with scores ranging from 4 to 1. After scoring the soil quality indicators, a weighted SQI was computed. A relative soil quality index was also computed to study the change in soil quality, and samples were rated based on the RSQI value. The soil quality index (SQI) of soils in AEU 9 ranged from 120 to 250 with a mean value of 174 (Table 5). The relative soil quality index (RSQI) ranged from 32.5 to 62.5% with a mean of 43.6% (Table 5). The highest mean value of the relative soil quality index was observed in Mulakuzha (48.8%), followed by Cheriyanad (45.7%) and Chengannur (44.8%), and the lowest in Venmony (37%). The majority of the soils (80%) had poor soil quality while 20% of soils had medium soil quality (Fig. 6). Soil quality was observed to be highest in Chengannur and Mulakuzha, where organic carbon, available nitrogen, phosphorus, potassium and calcium were found to be high and sediment depositions of clay and silt were observed.
Fig. 6. Spatial distribution of SQI in soils of AEU 9 in Alappuzha district.
The low to medium soil quality of AEU 9 may be attributed to the inherent properties of laterite soils, the type of vegetation and the microclimate, as reported by [13]. Based on this study, clay percentage emerged as a key soil quality indicator, playing a direct or indirect role in determining the quality of these soils because the majority of AEU 9 soils are sandy loam in texture. Among the nutrients, P and K, followed by N, Ca and S, as well as the parameters bulk density and pH, also emerged as key soil quality indicators. This may be because these soils are strongly acidic (low pH), have high bulk density, are low in N, medium in P and K, and deficient in Ca. The results outline the need for regular liming to control soil acidity and alleviate Ca deficiency, and for the addition of organic matter and the recommended doses of N, P and K fertilizers to improve nutrient status, sustain agricultural systems and maintain soil quality.

CONCLUSION
Assessing soil quality by examining the variability existing in soil parameters clearly showed that farmers can cultivate crops with higher nutrient requirements in the areas where the SQI was highest. On the other hand, crops that require fewer nutrients can be grown in areas where the SQI was lowest. Examining the relationships among parameters and the principal component analysis indicated that eight parameters contributed significantly to the SQI. Soil pH, clay %, bulk density, and the nutrients N, P, K, Ca and S are the important key indicators of soil quality. Soil quality can be enhanced by managing the soil pH and increasing soil aggregation using organic amendments and Ca-containing fertilizers in low SQI areas. Application of good quality and larger amounts of locally available organic matter to the areas with low SQI would improve soil quality and maintain sustainable farming, as it increases the OC, available nutrients and micronutrients in soil.
MAGNETIC-FIELD DEPENDENCE OF THE NEUTRON-SCATTERING FROM ERRH4B4 Magnetism in the reentrant superconductor ErRh4B4 has been studied by neutron scatteringas a function of an applied magnetic field. For a temperature of 1.69 K long.rangeferromagnetism is found in fields higher than 1 kOe. Considerable hysteresis is found in the neutron scattering intensity vs magnetic field curve and long-range order with a small Er momentremains when the field is reduced to small values. conducting high magnetic a a ferromagnet the 1 Institute for Pure and Applied Physical Sciences, University of California, San Diego, La Jolla, CA 92093, U.S.A. (Recieved 28 November 1979 by H. Suhi) Magnetism in the reentrant superconductor ErRh 4B4 has been studied by neutron scattering as a function of an applied magnetic field. For a temperature of 1.69 K long.range ferromagnetism is found in fields higher than 1 kOe. Considerable hysteresis is found in the neutron scattering intensity vs magnetic field curve and long-range order with a small Er moment remains when the field is reduced to small values. BECAUSE of its unusual magnetic and superconducting used to remove high order contamination from the properties [11 there has been a lot of interest in the monochromator. The monochromator was pyrolytic ternary compound ErRh4B4. The material becomes a graphite and collimation after the sample of 20' was superconductor at 8.7 K and superconductivity is used. Two sets of measurements were made. In the first destroyed and long-range magnetic order is established set of measurements the sample was placed in a superat about 0.9 K. Neutron diffraction experiments by conducting magnet capable of producing high magnetic Moncton et al. [21 have shown that ErRh4B4 is a fields. However, the superconducting magnet had a ferromagnet below 0.9 K with the moment direction in small remanent field and thus it was necessary to check the basal plane. The moment value was found to be the low field results with a different magnet system. 5.6p~which is well below the free ion value of~Additional low field measurements were thus made with Fertig et a!. [11showed that the superconducting state a conventional pumped 4He cryostat placed in a is destroyed with the application of a magnetic field. Helmholtz pair of magnet coils. This gave a uniform This paper reports neutron diffraction measurements magnetic field and a negligible remanent field. The on ErRh 4B4 taken as a function of an applied static experiments were performed at the lowest temperatures magnetic fIeld, achievable by the two cryostat systems which were The sample was prepared by arc melting the rare 1.69 K for the superconducting magnet and 1.79 K for earth tetraboride with Rh followed by annealing. The the cryostat with the Helmholz coils. aIB isotope was used to decrease the absorption cross We will first discuss the measurements made in the section for slow neutrons. The experiments were per-superconducting magnet. Magnetic diffraction peaks formed at the High Flux Isotope Reactor using con-could easily be observed in fields larger than about ventional techniques. An incident neutron wavelength 1 kOe. in zero field. Figure 1 shows the (101), (110) Fig. 2. Field dependence of the magnetic contribution applied field (open circles) and in an applied field or to the (101) peak intensity measured in a supercon. 10 kOe (closed circles). ducting magnet for a sample temperature of 1.69K. 
When making powder diffraction measurements in a magnetic field, considerable care must be taken that the field does not orient the powder particles preferentially. This commonly occurs with fine particles that have sizeable magnetic moments. In the present case, the sample consisted of rather coarse polycrystalline particles held in a flat sample holder so that the sample size was about 2 × 2 × 0.1 cm³. The field was applied vertically so that it was perpendicular to the scattering plane and along one of the long dimensions of the plate. The thin plate was used so that neutrons could be transmitted through the sample, which is quite absorbing to slow neutrons because of the high absorption cross-section of Rh. The sample particles were packed tightly in the holder and it seems unlikely that they would move in the presence of the applied field. Nevertheless, to check that the field was not producing preferred orientations that could influence the interpretation of our results, powder diffraction patterns were taken before and after the fields were applied and at several values of the applied field. Analysis of the diffraction patterns showed no evidence that the field was producing preferential alignment of a particular crystallographic axis.
The field dependence of the magnetic scattering determined from the intensity of the (101) reflection is shown in Fig. 2. The nuclear component to the reflection has been subtracted. The field was produced by the superconducting magnet operating in the persistent mode and thus the field value was very stable during the measurements at each point. The sample temperature was held at 1.69 ± 0.02 K during the course of the experiment. The curve for increasing field is in some respects similar to that established by Fertig et al. [1] and Ott et al. [3,4]. About 1 kOe was applied, reflecting the superconductivity in this field range. The scattering intensity then increases with increasing field, the ordered moment values corresponding to 5.06 ± 0.5 μB at 10 kOe and 6.9 ± 0.5 μB at 20 kOe. This is a similar value to that obtained in the magnetization measurements. The intensity found upon lowering the applied field appears to be quite different from the magnetization measurements, although the reversible magnetization data are only published for low field values. We see considerable hysteresis in the intensity vs field curve, and at zero applied field we still see some long-range ordered moment. The field value is not brought identically to zero since the magnet assembly and spectrometer have some remanent field amounting to about 100 Oe. Good statistics were obtained after decreasing the field to zero applied field (point 1 on the graph), showing that the long-range order found is a real effect. If one starts from point 1 and increases the field, the intensity remains on the decreasing-field curve and follows it back to higher values. If one starts at point 1 and warms the sample to a temperature above the superconducting transition temperature (8.7 K) and then recools to 1.69 K, one returns to the zero intensity value corresponding to no long-range moment. The scattering intensity at point 1 corresponds to a long-range ordered moment of 0.7 ± 0.2 μB. Freeman and Jarlborg [5] have in fact previously suggested that upon lowering an external field from a value greater than the critical field it may be possible to form a mixed state in which normal and ferromagnetic regions of the compound coexist with superconductivity.
The results of the second set of measurements are shown in Fig. 3. These measurements are confined to
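The conversion between Bragg-peak intensity and ordered moment used above rests on the fact that magnetic diffraction intensity scales roughly as the square of the ordered moment. The short Python sketch below illustrates that scaling; the reference and relative intensities are invented for illustration and are not read from Fig. 2 of the paper.

```python
import math

def moment_from_intensity(i_obs, i_ref, mu_ref):
    """Estimate an ordered moment from a magnetic Bragg-peak intensity.

    Magnetic Bragg intensity scales roughly as the square of the ordered
    moment, so mu ~ mu_ref * sqrt(I / I_ref).  Form-factor, extinction and
    domain-population corrections are ignored, so this is only a rough
    cross-check, not the refinement actually used in the measurements.
    """
    return mu_ref * math.sqrt(i_obs / i_ref)

# Illustrative numbers only: take the 10 kOe point (5.06 mu_B) as the
# reference and ask what moment a peak with ~1.9% of that intensity would
# correspond to.  The relative intensity is assumed, not taken from Fig. 2.
mu_ref = 5.06          # mu_B at 10 kOe, quoted in the text
i_ref = 1.0            # normalised reference intensity (assumed)
i_zero_field = 0.019   # assumed relative intensity at zero applied field

print(moment_from_intensity(i_zero_field, i_ref, mu_ref))  # ~0.70 mu_B
```

With these assumed inputs the scaling returns a moment of about 0.7 μB, the same order as the remanent moment quoted in the text, which is the kind of consistency check such a relation allows.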
2018-12-17T06:00:01.663Z
1980-10-01T00:00:00.000
{ "year": 1980, "sha1": "706f9b2280ef53213035a66b5b418a06d2e46d8a", "oa_license": "CCBY", "oa_url": "https://escholarship.org/content/qt6z28n42c/qt6z28n42c.pdf?t=oam571", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "06bf12bee67f974b057edf099d1f58746681532e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
139800722
pes2o/s2orc
v3-fos-license
Structure optimization of nozzle in quick-freezer based on Response Surface Methodology The structural parameters of nozzle in quick freezer have great influence on the nozzle outlet velocity, and the nozzle outlet velocity directly affects the freezing efficiency of the quick freezer. A V type slot nozzle was simulated with CFD software. The effect of the V type slot nozzle structure parameters variation on nozzle outlet velocity was studied by using Response Surface Methodology. Finally, the optimal structural parameters of the nozzle were obtained: the nozzle outlet height was 3.97 mm, the nozzle outlet width was 8 mm, the V type diversion trench height was 71.63 mm and the V type diversion trench angle was 29.57°. The conclusion provided a theoretical basis for optimal design of nozzle structure of quick freezer. Introduction In the field of heat and mass transfer, the high heat transfer coefficient is produced by the nozzle impingement jet on the target area, and the main factor affecting the heat transfer intensity below the nozzle is the velocity along the axial direction from the nozzle to the impact plate. This efficient heat transfer mechanism is mainly used in papermaking, drying, metallurgy, food quick freezing and other fields [1][2][3][4]. The axial velocity of the nozzle is mainly related to the nozzle inlet and outlet shape. Liang et al [5] combined the CFD numerical simulation method with the BP neural network to realize the effective prediction of the nozzle jet. It was found that the increase of the outlet angle of theta and the depth h of the cone hole was helpful to enhance the jet kinetic energy of the nozzle. Zhang et al [6] employed the Large Eddy Simulation based on the FLUENT to simulate both the two phase and single phase flow models of the jet flow field of three dimensional nozzle. The rationality of the simulation results of the two phase and single phase flow models of jet flow field of three dimensional nozzles was demonstrated by the diffusion angle and volume flux of jet oil measured by the method of high speed visualization and weighting jet oil, respectively. The result indicated that the volume flux of jet oil increased as the orifice diameter, the angle between orifice axis and upstream axis increased and as the orifice axial length decreased. When the fluid flowed through the nozzle, the energy loss was related to the nozzle structure. Gong et al [7] captured the surface structures of the jet with three nozzles with high-speed and microscopy photography. The experimental results show that the nozzle contraction ratios have a significant influence on the jet periodic ripple section. When the contraction ratio is 8/2, [8] designed the multiple swirling jet nozzles with simple structure. The outflow field of nozzle was simulated by SIMPLEC algorithm firstly, and then the outflow field feature was analyzed. The rock-breaking mechanism was studied by indoor rock-breaking experiment and the structural parameters of nozzle were optimized. The results show: the impacting area of the multiple swirling jet nozzle increases with setting circle radius and decreases with extended angle, and the impacting area firstly increases and then decreases with the increase of torsion angle. Yang et al [9] carried out the experiments to study the flow characteristics of cylinder nozzles, cone nozzles and cosine nozzles. The results show that the inlet conditions greatly influence the cavitation energy loss. 
The cavitation energy loss coefficient of the cylindrical nozzle is the largest, the conical nozzle is the second, and the cosine nozzle is the least. When the inner surface is closed to streamline, the nozzle features significantly affected both kinds of energy losses and they tend to be smaller. Li et al [10] used the VOF model and the standard  − k model in FLUENT software to simulate and analyze the water-jet flow fields with different nozzles. The results show that the different nozzle structures have great influence on the water-jet performance. When the convergence angle is 14 o and length diameter ratio was 2-4, the performance of the water-jet is the best. Przemysław Młynarczy et al [11] observed that the key issue is the selection of nozzles of appropriate shapes and dimensions in order to achieve pressure pulsation reduction without significantly increasing the flow resistance. At the same time, it was found that the twin hyperboloidal nozzle with a 33% reduced cross-section area has the optimal damping properties. Ozgur Oguz Taskiran [12] investigated the effect of nozzle inlet rounding on diesel spray formation and combustion. Results showed that inlet rounding increases discharge coefficient and sharp inlet nozzle produces smaller droplets that shorten spray tip penetration and autoignition delay period due to low discharge coefficient and rounded inlet nozzle has lower combustion temperature, less NO and soot concentration than sharp inlet nozzle. In the laminar flow regime, Barak Kashi et al [13] studied the influence of nozzle length in submerged jet impingement heat transfer by validated direct numerical simulations. It is found that the maximal jet velocity first decreases with increasing effective nozzle length, Z=L/(D·Re), to a minimum at Z*≈0:0015,beyond which it increases as in developing pipe-flow. Using transparent nozzles, Cui et al [14] investigated the diameter error, conical and inclined that embody common deviations in nozzle geometry. The results indicate that very small differences in geometric structure still have consequences for the obviously different characteristics of cavitating flow. Huang et al [15] investigated the effects of the necking circular nozzle and the twisted triangular nozzle on the bubble size distribution, the average gas holdup, liquid mixing time and gas-liquid mass transfer coefficient in the jet bubbling reactor. The experimental results showed that the bubble size was smaller, the average gas holdup was higher and the mixing time was shorter in the case with twisted triangular nozzle, compared with the case with necking circular nozzle. Based on the nozzle shape, Tang et al [16] carried out the numerical simulation for the flow field and energy separation effect of the helical nozzles and straight nozzles vortex tube with 4 channels. The result was that a vortex tube with helical nozzles can achieve greater tangential and axial velocity. Compared with the straight nozzle, the vortex tube with helical nozzles can obtain energy separation better. In this paper, the object was the V type slot nozzle of the impact freezer. By changing the structural parameters of the nozzle, a larger nozzle outlet velocity could be obtained. Numerical simulation The physical model of the V type slot nozzle structure was shown in figure 1, and the nozzle structural parameters were shown in table 1. 
The width between the two nozzles D was 73 mm, the nozzle outlet width S was 5 mm, the nozzle outlet height K was 30 mm, the V type diversion trench height V was 66 mm, the V type diversion channel angle theta was 30 o . This paper studied the effect of parameters changed on the outlet velocity of V type slot nozzle during quick freezing. The flow medium was air, and the simulation process assumed: • Air was an incompressible, homogeneous viscous fluid. • The wall of the nozzle was considered as no slip wall, that was, the air velocity at the wall was U=0. • The wall of nozzle was adiabatic, that was, heat flux q=0 W/m 2 . The continuity equation, momentum equation and energy equation were combined to solve the numerical simulation. Pressure inlet was selected as inlet boundary, Pressure outlet was as outlet boundary, and Pin=220 Pa, Tin=228 K, Pout=0 Pa, Tout=233 K. The calculation model and the adjacent parts of the V type nozzle were set as symmetry boundary, which were Symmetry1 and Symmetry2 which were shown in the figure 2. A solution method was based on k-εturbulence model, the SIMPLE algorithm with second order upwind [17] for all spatial discretization. Experiment design of Response Surface Methodology With the advantages of modulating one factor at a time in determining experimental-response relationship, the response surface methodology (RSM) is one of the most commonly used multivariate techniques [18]. By establishing a mathematical model, RSM could evaluate variable parameters and interactions using quantitative data, effectively optimizing processing technology based on statistical results, thus reducing the number of experimental trials required [19]. RSM has been successfully used for developing, improving, and optimizing processes in many fields, e.g. food, herbal medicine, and microbiology. The optimum outlet velocity of V nozzle was determined by Response Surface Methodology optimization. Using Box-Behnken designed Experiment, and the four structural parameters of the nozzle outlet width S, the nozzle outlet height K, the V type diversion trench height V and the V type diversion trench angle θ were taken as the investigation factors, with the outlet velocity of the V nozzle as the response value, and the factor level of the four structural parameters were shown as table 2. Regression analysis of the data in table 3 The U was the outlet velocity of the nozzle, and the K, S, V, and Θ were nozzle outlet height, nozzle outlet width, V type diversion trench height and V type diversion trench angle respectively. Table 4 showed the results of variance analysis of regression equation. The correlation coefficient R 2 =93.13% of the model showed that the correlation degree of the model was better. The coefficient of variation (C.V.) was 7.59%; the result indicated that the reliability of the model was high. The predicted value of the model was in good agreement with the actual value, and it was suitable for the prediction and analysis of the outlet velocity of the V type slot nozzle. From table 4, we could see that the whole model had a significant impact on the response value(P<0.001), and there was no significant interaction among the parameters (P < 0.01). It was known from the f value that the influence of each factor on the nozzle outlet speed was D (V type diversion trench opening angle) > C (nozzle outlet width S) > A (nozzle outlet height K) > B (V type diversion trench height V). 
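The Box-Behnken/response-surface step described above amounts to fitting a full second-order polynomial in the four coded factors (K, S, V, θ) to the simulated outlet velocities. The Python sketch below shows that fitting step on a synthetic data set: the design is a standard four-factor Box-Behnken layout, but the response values and coefficients are invented stand-ins, not the CFD results of Table 3.

```python
import numpy as np

rng = np.random.default_rng(0)

def box_behnken_4():
    """Coded 4-factor Box-Behnken design: 24 edge-midpoint runs plus 3 centre runs."""
    runs = []
    for i in range(4):
        for j in range(i + 1, 4):
            for a in (-1.0, 1.0):
                for b in (-1.0, 1.0):
                    x = [0.0, 0.0, 0.0, 0.0]
                    x[i], x[j] = a, b
                    runs.append(x)
    runs += [[0.0, 0.0, 0.0, 0.0]] * 3
    return np.array(runs)

X = box_behnken_4()   # columns: K, S, V, theta in coded (-1, 0, +1) units

# Synthetic outlet velocities standing in for the CFD results (m/s); the
# "true" coefficients here are invented purely to exercise the fitting code.
U = (14.5 + 0.3 * X[:, 0] + 0.5 * X[:, 1] + 0.2 * X[:, 2] + 0.8 * X[:, 3]
     - 0.2 * X[:, 0] ** 2 - 0.4 * X[:, 3] ** 2
     + rng.normal(0.0, 0.1, len(X)))

def quadratic_design(X):
    """Second-order design matrix: intercept, linear, two-way interaction and square terms."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

A = quadratic_design(X)
coef, *_ = np.linalg.lstsq(A, U, rcond=None)

U_hat = A @ coef
r2 = 1.0 - np.sum((U - U_hat) ** 2) / np.sum((U - U.mean()) ** 2)
print("fitted coefficients:", np.round(coef, 3))
print(f"R^2 = {r2:.4f}")   # the paper reports R^2 = 93.13% for its own data
```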
According to the results of the variance analysis in table 4, the regression model relating the nozzle outlet velocity to the response surface values was significant (p < 0.05), and the test for lack of fit was not significant (p > 0.05). This showed that the model fitted the test data well, and that the regression equation for the nozzle outlet velocity was a suitable mathematical model for describing the relationship between the outlet velocity and the structural parameters of the V type slot nozzle. The contour of the interaction between the opening angle of the V type diversion trench and the height of the V type diversion trench was elliptical, and the interaction was remarkable. It could be seen from the three-dimensional image that the surface was convex. Along the direction of the D axis, the color of the 3D surface first deepened and then became shallower, showing that as the angle of the V type diversion trench changed, the nozzle outlet velocity first changed rapidly and then slowly. The effect of the interaction of the various factors on the outlet velocity of the nozzle could be seen intuitively in figure 3: factor D (the opening angle of the V type diversion trench) had the greatest influence on the nozzle outlet velocity, and factor C (nozzle outlet width) was the second, which coincided with the results of the variance analysis in table 4. The optimum structural parameters of the V slot nozzle were determined by software analysis: the maximum outlet velocity of the V slot nozzle was 15.5119 m/s, obtained with a nozzle outlet height of 3.97 mm, a nozzle outlet width of 8 mm, a V type diversion trench height of 71.63 mm and a V type diversion trench angle of 29.57°. The error analysis of the Response Surface Methodology regression model against the actual calculation is shown in Table 5. Compared with the numerical simulation, the results calculated with formula (1) deviated by less than 10%, an acceptably small deviation, proving that the result is reasonable and reliable. Therefore, formula (1), obtained by the Response Surface Methodology, can be applied to calculate the outlet velocity of the V type slot nozzle.

Conclusion Taking the V type slot nozzle as the research object and using the control variable method, this paper studied the influence of different nozzle parameters on the outlet velocity of the V type slot nozzle during the quick-freezing process, including the nozzle outlet width S, the nozzle outlet height K, the V type diversion trench height V and the V type diversion trench angle θ. On the basis of the single-factor experiments, the structural parameters of the nozzle were optimized by Response Surface Methodology. The structural factors affecting the nozzle outlet velocity, in order of importance, were: the opening angle of the V type diversion trench, the width of the nozzle outlet, the height of the V type diversion trench and the height of the nozzle outlet. Finally, the optimum structural parameters were as follows: nozzle outlet height 3.97 mm, nozzle outlet width 8 mm, V type diversion trench height 71.63 mm and V type diversion trench angle 29.57°.
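Once a quadratic response surface such as the paper's formula (1) is in hand, the reported optimum is found by maximizing it over the design region. A minimal sketch of that step is below, assuming illustrative coefficients in coded units rather than the published regression; scipy's bounded optimizer stands in for the software analysis mentioned above.

```python
import numpy as np
from scipy.optimize import minimize

# A fitted second-order response surface U(x) in coded units.  The
# coefficients below are illustrative, not the published regression of
# formula (1).
b0 = 14.5
b_lin = np.array([0.3, 0.5, 0.2, 0.8])        # K, S, V, theta
B_quad = np.diag([-0.2, -0.1, -0.15, -0.4])   # pure quadratic terms only here

def outlet_velocity(x):
    """Predicted outlet velocity (m/s) at coded factor settings x."""
    return b0 + b_lin @ x + x @ B_quad @ x

# Maximise U over the coded design region [-1, 1]^4 by minimising its negative.
res = minimize(lambda x: -outlet_velocity(x),
               x0=np.zeros(4),
               bounds=[(-1.0, 1.0)] * 4)

x_opt = res.x
print("optimal coded settings:", np.round(x_opt, 3))
print("predicted maximum outlet velocity: %.2f m/s" % outlet_velocity(x_opt))

# Coded settings map back to physical units via x_phys = centre + x_coded * half_range,
# which is how physical optima such as an 8 mm outlet width or a 29.57 deg
# trench angle would be recovered from the coded solution.
```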
2019-04-30T13:08:34.221Z
2018-10-30T00:00:00.000
{ "year": 2018, "sha1": "b1102bf4579a61ae61701dff06c4f60a2e97d151", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/188/1/012106", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "f2e484ea6cf15d4c8826a7dabe9c372be9086590", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
266146487
pes2o/s2orc
v3-fos-license
Transdisciplinary methods in socio-economic and environmental research . The article presents a theoretical overview characterizing the research potential of transdisciplinary methods. The empirical part was conducted in the South of Russia (Stavropol Krai) in July 2023; it summarizes the practices of using transdisciplinary methods in the analysis of socio-economic and environmental processes. The professional community of environmentalists of Stavropol Krai was involved in the development of strategic vectors of interaction with university education to solve a common task – the training of highly qualified specialists in demand by production. In the course of the study we assessed the involvement of the professional community of ecologists in the processes of interaction with university teachers; the quality of training of graduates of environmental educational programs of higher education; we identified the practiced and promising types of interaction between the professional community of ecologists and the university. The research shows the importance of universal professional competencies for the successful work of a graduate of environmental educational programs. The data obtained provide an understanding of the importance of new research strategies using transdisciplinary approaches in the analysis of socio-economic and environmental processes. Introduction Different regions of the world face global challenges in the field of climate change, ecology, economy, social relations and sustainable development.Many multidirectional complex processes require new approaches in their study, evaluation, and development of program documents at the municipal, regional, and federal levels.The purpose of our research is to generalize the practical experience of using transdisciplinary approaches in the analysis of socio-economic and environmental processes at the regional level. The relevance of the research is confirmed in a number of publications that are used in the analysis of the scientific potential of transdisciplinary methods.A new generation of specialists should be ready to use transdisciplinary practices in their professional activities.This conclusion is reached by the authors of the article Thomas W. Bean, Amy Wilson-Lopez, Kristen Gregory [1].Accordingly, in professional educational programs, it is necessary to provide sufficient theoretical material that reveals the content of these methods, as well as to provide students with the opportunity to apply transdisciplinary methods in practice to solve specific socio-economic and environmental problems.This conclusion generally confirms the relevance of our study. The solution of complex problems associated with anthropogenic processes and transformations of previously stable regional biogeocenoses goes beyond the limits of individual disciplines.Forecasting and modelling of socio-economic and environmental processes requires an integrated approach to understanding the vectors and dynamics of their development.Scientists need to rely on a variety of data sources, including the opinions of the regional and professional community in the subject areas studied.The author of the article Parisa Nourani Rinaldi draws attention to the inevitable interrelation of natural and social processes, which leads to the need to remove numerous interdisciplinary research frameworks [2].This conclusion is especially relevant in the process of preparing predictive research strategies. 
The development of technological progress has affected the complexity of the human habitat.This is especially true of large megacities.At the same time, there are situations associated with natural disasters, which are complicated by man-made factors.According to scientists, communities are key sources of information and innovations that can serve as a model of recovery after a natural disaster [3].Using the example of the work of an interdisciplinary group in Puerto Rico, effective algorithms for responding to natural disasters were proposed through the inclusion of the local community in their development. Transdisciplinary methods are of particular importance in the study and preparation of solutions to social and environmental problems faced by post-industrial communities [4].The authors of the article Jess Vogt, Margaret Abood analyze the experience of using a transdisciplinary approach in preparing a program for the ecological development of the territory through the management of urban plantations and municipal resources.It is this approach that allows you to organize the coordination of the activities of several organizations.By conducting research with stakeholders, scientists describe the mechanisms by which the activities of Communi Tree transform resources into results in a socio-ecological context, and assess their sustainability. An example of project activity in solving land use problems served as the basis for conclusions about work in interdisciplinary and transdisciplinary research groups [5].In the presence of heterogeneous subjects in the study, the transdisciplinary approach makes it possible to correct the problem statement in a timely manner, systematically use the knowledge of the involved practitioners at the stage of project design development.This is especially important for thematic areas of high complexity, such as land use research. Transdisciplinary research is aimed at obtaining knowledge through the interaction of science and practice and is attracting increasing attention in the science of sustainable development [6].As it was noted, sustainable development is characterized by huge difficulties, specificity of context and wide representation of project participants (different subject areas and levels of knowledge -from theoretical analysis to practical application).The scientific problem discussed by the results of the study is the systematization and conceptualization of the process of integrating knowledge in transdisciplinary research projects.According to this the integration of knowledge occurs depending on their typesystemic, targeted, transformative, as well as on the effectiveness of the interaction of project team members from academia and practitioners.In many ways, these conclusions are consonant with our research hypothesis, which is formulated in the subject area "environmental knowledge".In the empirical part of the study, the integration of knowledge processes, the interaction of the academic community of agricultural education and practical specialists in the environmental sphere of Stavropol Krai are subject to analysis.And this is happening in the complex context of a region with certain climatic features and socio-economic changes.Thus, research strategies using transdisciplinary approaches have their relevance in different regional contexts and subject areas [7,8,9,10]. 
Researcher Rea Pärli in the article provides a systematic review of the literature and interviews with the expert community on how organizational, technological, and institutional factors affect the results of transdisciplinary research projects [11].The author considers important distinctive features of this category of projects: 1) the goal is to make a decision on the problem under study and not just a statement of the results of the study in the subject area; 2) research strategy and methods play a fundamental role in the success of a transdisciplinary project, rather than institutional factors.These important conclusions are important for our research, since new knowledge is obtained at the junction of different institutions aimed at improving specific practical actions [12,13,14].The academic community together with practitioners (in the field of environmental safety and environmental protection of the region) are developing an interaction strategy aimed at improving the quality of training graduates of environmental educational programs.The result is not only research findings, but also a roadmap of interaction, updated curricula for training environmentalists, partnerships between academic leaders and practitioners, inviting practitioners to participate in the implementation of individual disciplines and practical training of students. The scientific community pays great attention to the conceptualization of the transdisciplinary approach.Thus, the author of the article Stephan Lorenz analyzes the scientific discussion about the problems of applying this approach to solving the problems of sustainable development of large socio-economic, natural, technological systems [15].On the basis of sociological theory, there is an attempt to combine the research and project experience of scientists and practitioners.The author emphasizes, that professional selfreflection is important, especially of the academic community to provide a methodological basis for the subsequent training of students and young scientists in the cooperation of science and practice.In general, Stephan Lorenz believes that transdisciplinary research is defined as a special kind of applied science. Another aspect of the application of transdisciplinary methods in solving current and future problems of sustainable regional development is presented in the article by Barbara Smetschka and Veronika Gaube [16].In their opinion, this approach ensures the integration of social and scientific knowledge, promotes the acquisition of new knowledge and strategies, and also affects society through the training of research skills of practitioners and the promotion of progressive knowledge in society.The authors of the article, using the example of the participation of the farming community in the research process of sustainable land use, show that joint modelling allows integrating the most pressing issues into models and developing scenarios and strategies together with stakeholders.These findings are relevant for other studies as well [17,18,19]. 
Materials and methods The empirical part of the research practices of using transdisciplinary methods in assessing socio-economic and environmental processes in the region was conducted in the South of Russia (Stavropol Krai) in July 2023.The professional community of environmentalists of Stavropol Krai was involved in the development of strategic vectors of interaction with university education to solve a common task -the training of highly qualified specialists in demand by production.At the stage of developing the program and research tools, focus groups were organized with the participation of scientists and practitioners in the field of ensuring environmental safety of the region.A unified vision has been developed for the theoretical model of the professional community's expert survey of environmentalists of Stavropol Krai.The information blocks of the expert's questionnaire were: assessment of involvement in the processes of interaction with the university community; assessment of the quality of training graduates of environmental educational programs of higher education; practiced and promising types of interaction between the professional community and the university; the importance of universal professional competencies for the successful work of graduates of environmental educational programs.To collect primary sociological information we used the method of electronic questionnaire via Google Form.In total, 53 people from among the heads, chief and leading specialists, heads of structural divisions in organizations under the jurisdiction of the Ministry of Natural Resources and Environmental Protection of Stavropol Krai took part in it.The data obtained during the survey were processed in the SPSS Statistics program (version 23).The research tools included 27 substantive questions, socio-demographic, and qualification characteristics of the survey participants. Results and discussion According to the survey results, 88.4% of experts have interacted with graduates of environmental educational programs of Stavropol State Agrarian University (SSAU) in professional work situations in the last 2-3 years.Another 11.6% noted that among their acquaintances there are graduates of the Stavropol SAU who are excellent employees.Thus, the expert community shows high awareness about the object and subject of the study. According to 92.7% of the survey participants, Stavropol SAU trains excellent specialists.84.2% of experts noted that graduates have the necessary competencies (skills) for successful inclusion in the current work processes of the enterprise (organization); 80.1% believe that graduates of Stavropol SAU are trained on advanced equipment for the industry, taking into account modern technological processes of the industry. In organizations belonging to the Ministry of Natural Resources and Environmental Protection of Stavropol Krai, digital competencies of specialists are in demand (96.0% of the participants in the expert survey noted).And, in their opinion, 76.9% of graduates of environmental programs of Stavropol State Agrarian University fully possess such competencies. 92.7% of experts expressed readiness to develop strategic partnership with Stavropol SAU in educational, scientific, cultural and leisure, sports and other fields of activity.Experts are ready to confirm this partnership with targeted contracts for the training of graduates and agreements on long-term strategic cooperation. 
Areas for cooperation in improving the quality of training of graduates of environmental educational programs are:
- a practical training class for students of SSAU on the basis of the enterprise (noted by 84.7% of the participants of the expert survey);
- an internship platform for teachers (noted by 76.3% of the participants of the expert survey);
- modernization by the enterprise of a profile class (laboratory) at the university (noted by 72.1% of the participants of the expert survey).
Experts assessed the importance and relevance of the universal competencies of graduates. The data are presented in Table 1. The five most significant positions for graduates of environmental programs are related to the following competencies: the ability to set professional goals and objectives (average score of 4.96 on a five-point scale); the ability to identify likely problems in work (4.88); the ability to develop solutions and evaluate their effectiveness (4.84); the ability to independently extract and interpret the necessary information (4.84); and sociability and communication skills (4.81). These competencies play an important role in graduates' development of interdisciplinary approaches to solving the environmental problems of the region. In general, experts note that the listed competencies are somewhat less important for graduates of agro-technological programs (Table 1). Thus, the interaction of the academic community and practitioners highlights areas for improving educational programs and confirms the relevance of the transdisciplinary approach in socio-economic and environmental research.

Conclusion Based on a brief theoretical review of publications on the application of transdisciplinary methods, a number of conclusions can be drawn: 1. Transdisciplinary methods are used in a significant part of research aimed at analyzing, modelling and forecasting the sustainable development of large socio-economic and ecological systems. 2. The examples of the application of the transdisciplinary approach articulate the importance of integrating scientists and practitioners in the research process. 3. The benefits of using a transdisciplinary approach are not only to enrich scientific theory and practice, but also to promote jointly generated, relevant socio-professional practices, which positively affects the competence potential of the local, regional, professional and scientific community. 4. The application of a transdisciplinary approach provides an information base for professional self-reflection, especially within the academic community; subsequently, this plays an important role in developing the methodological basis for training students and young scientists in the cooperation of science and practice. 5. The quality of socio-professional communications in the project activity of scientists and practitioners plays a great role in achieving the positive effects of transdisciplinary methods. In the empirical part of the study an important thesis was confirmed: the relevance of using a transdisciplinary approach in socio-economic and environmental research.

Table 1. Expert community's assessment of the demand for universal competencies of the university educational programs
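The ranking reported in Table 1 is obtained by averaging expert ratings on a five-point scale and sorting the competencies by mean score. A minimal pandas sketch of that computation follows; the individual ratings are invented for illustration, and only the competency labels follow the text.

```python
import pandas as pd

# Hypothetical expert ratings on a five-point scale (one column per competency,
# one row per expert); the responses themselves are invented.
ratings = pd.DataFrame({
    "set professional goals and objectives":        [5, 5, 5, 5, 4],
    "identify likely problems in work":             [5, 5, 5, 4, 5],
    "develop solutions and evaluate effectiveness": [5, 4, 5, 5, 5],
    "extract and interpret necessary information":  [5, 5, 4, 5, 5],
    "sociability and communication skills":         [5, 4, 5, 5, 4],
})

# Mean score per competency, sorted from most to least in demand,
# mirroring how the ranking in Table 1 is constructed.
mean_scores = ratings.mean().sort_values(ascending=False).round(2)
print(mean_scores)
```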
2023-12-10T16:24:27.459Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "c01b70845833de5cd17b09634ad18db8b583e4b2", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2023/95/e3sconf_emmft2023_06006.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ca8f6a34d716ea0c387fc6fdc085e2d888ecec8e", "s2fieldsofstudy": [ "Environmental Science", "Economics", "Sociology" ], "extfieldsofstudy": [] }
3322921
pes2o/s2orc
v3-fos-license
Formation of carbyne-like materials during low temperature pyrolysis of lignocellulosic biomass: A natural resource of linear sp carbons The exploration, understanding and potential applications of ‘Carbyne’, the one-dimensional sp allotrope of carbon, have been severely limited due to its extreme reactivity and a tendency for highly exothermic cross-linking. Due to ill-defined materials, limited characterization and a lack of compelling definitive evidence, even the existence of linear carbons has been questioned. We report a first-ever investigation on the formation of carbyne-like materials during low temperature pyrolysis of biobased lignin, a natural bioresource. The presence of carbyne was confirmed by detecting acetylenic –C≡C– bonds in lignin chars using NMR, Raman and FTIR spectroscopies. The crystallographic structure of this phase was determined as hexagonal: a = 6.052 Å, c = 6.96 Å from x-ray diffraction results. HRSEM images on lignin chars showed that the carbyne phase was present as nanoscale flakes/fibers (~10 nm thick) dispersed in an organic matrix and showed no sign of overlapping or physical contact. These nanostructures did not show any tendency towards cross-linking, but preferred to branch out instead. Overcoming key issues/challenges associated with their formation and stability, this study presents a novel approach for producing a stable condensed phase of sp-bonded linear carbons from a low-cost, naturally abundant, and renewable bioresource. Conventionally, the spectra of all the carbon species are acquired by turning on 1 H decoupling during the acquisition period. However, turning off the 1 H decoupling for a short period of 40 μs (i.e., gated decoupling) suppresses the signal of the protonated, nonmobile carbon species, due to dipolar dephasing of the 13 C signal, and yields the signal for only the non-protonated or methyl carbon species. This step can identify and suppress signals from hydrogen bonded carbon atoms. The spectra for lignin chars after heat treatment for 30 minutes at 350 °C and 400 °C are shown in Fig. S1a. The spectra in red, acquired without gated 1 H decoupling, represents the spectra of all carbon species, whereas the spectra in black, acquired after 40 μs of gated 1 H decoupling, represents the signals from non-protonated carbon species only. The baseline is represented by the horizontal dashed line in these spectra. The signals from the -C≡Cspecies (the sp hybridized alkyne carbons) are expected to be found in the chemical shift region between 100-60 ppm (see insets). No signal was observed in this region for 350 °C char after suppressing the signal from C-H species thereby indicating the absence of -C≡Cspecies at this temperature. However, a significant signal for the −C≡C− species was observed in the nonprotonated carbon species spectra in the black curve for the 400 °C char. The signal to noise 4 ratio in this region was measured to be 4.25, thereby indicating the presence of a 'real peak' clearly above the noise level. Fig. S1c shows the NMR spectra of raw lignin powder, which contained contributions from three main constituents: cellulose (42%), lignin (35%), and hemicellulose (23%). The presence (or absence) of -C≡Cspecies could not be ascertained due to large contributions from several functional groups in the chemical shift region between 100-60 ppm. It is important to note that the NMR signal for the -C≡Cspecies was clearly absent for the 350 °C chars, and was unambiguously present in 400°C chars. 
The initial presence (or absence) of the −C≡C− species in the starting material may therefore not be an essential requisite for the formation of the carbyne phase at higher temperatures. Peak identification and phase characterization We had previously reported detailed X-ray diffraction investigations using Cu Kα radiation on lignin chars heat treated in the temperature range 200–800 °C [21]. The diffraction pattern for the 400 °C char had contributions from three distinct phases. The first set included lignin-based fibers, labelled as phase 'B', that were present in 200 °C as well as 400 °C chars; this phase was no longer present at 600 °C. The second set included additional peaks that made their initial appearance at 400 °C and were labelled as phase 'C'. The XRD peaks for the 'C' phase increased in intensity at 600 °C, increasing further upon heating to 800 °C. The third set included peaks that did not follow any such well-defined pattern. The peaks for the carbyne phase were not identified unambiguously due to their relatively low intensities. Spectroscopic investigations (NMR, Raman and FTIR) on 400 °C lignin chars had indicated the presence of small amounts of the 'carbyne' phase along with different structural forms of sp2 carbon, mineral impurities, etc. The small domain size of the 'carbyne' phase is expected to give rise to broad XRD peaks; this was alleviated to some extent by using the longer-wavelength Co Kα radiation (1.789 Å) instead of the standard Cu Kα (1.54 Å), together with high-resolution optics and beam focusing. Data collection was carried out over a longer period using step-scan with up to 30 seconds per angular step. The XRD peaks for the carbyne phase were identified using the following criteria (see main text): (a) all relevant peaks should have similar peak shapes and a profile clearly distinct from other peaks, and (b) these peaks must only be present in 375 °C and 400 °C chars and be absent in 600 °C chars. Six peaks, marked with the symbol '▲', met these criteria and were used for 'carbyne' structure determination. Peaks located at 24°, 51° and 53.5° (marked with '*') did not belong to the carbyne phase, as these were present at all three temperatures. Diffraction peaks at 31°, 49.5°, 76° and 81° were found to be relatively too sharp.

Figure S2a | XRD spectrum for the 600 °C lignin char using Co radiation.

The XRD spectrum for the 600 °C lignin char is shown in Fig. S2a. With the peak at 31° (marked with an '*' in the figure) attributed to impurity silica, the indexing and structural characterization of the other peaks are detailed in Table S2. The structural phase of carbon present in 600 °C chars was very similar to the phase 'C' (a = 8.83 Å; c = 6.9 Å) reported previously [21]. Spatial distribution of the 'Carbyne' phase In addition to determining the structure of 'carbyne', an attempt was made to determine its spatial distribution as well. After identifying the peaks for the 'carbyne' phase, a microdiffraction investigation was carried out to prepare a spatial map for this phase. Micro XRD measurements were performed on a Philips X'pert MRD PRO system with a horizontal high-resolution Ω-2θ goniometer (320 mm radius) with a minimum step size of 0.… The SEM/EDS analysis of the 600 °C char is presented in Fig. S3a. Key constituents in this char were found to be predominantly carbon with a very small peak for oxygen.

Figure S3a | SEM/EDS images for 600 °C chars indicating carbon to be the key constituent, along with a small oxygen peak. White regions in the SEM image indicate ash impurities.
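Indexing the marked peaks against the proposed hexagonal cell reduces to computing d-spacings from the hexagonal plane-spacing formula and converting them to 2θ with Bragg's law at the Co Kα wavelength. A short sketch of that calculation is given below; which low-index reflections correspond to the six marked peaks is assumed here for illustration, not taken from the paper's tables.

```python
import math

a, c = 6.052, 6.96    # hexagonal lattice parameters from the XRD analysis (angstrom)
wavelength = 1.789    # Co K-alpha wavelength used for the scans (angstrom)

def d_hexagonal(h, k, l):
    """d-spacing of the (hkl) plane in a hexagonal cell."""
    inv_d2 = 4.0 / 3.0 * (h * h + h * k + k * k) / a ** 2 + l * l / c ** 2
    return 1.0 / math.sqrt(inv_d2)

def two_theta(d):
    """Bragg angle 2*theta (degrees) for first-order diffraction."""
    return 2.0 * math.degrees(math.asin(wavelength / (2.0 * d)))

# A few low-index reflections; assigning these to the marked peaks is an
# assumption made for illustration only.
for hkl in [(1, 0, 0), (0, 0, 1), (1, 0, 1), (1, 1, 0), (1, 0, 2), (2, 0, 0)]:
    d = d_hexagonal(*hkl)
    print(f"{hkl}: d = {d:.3f} A, 2theta = {two_theta(d):.1f} deg")
```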
All these results indicate that the key element present in lignin chars was carbon, along with small amounts of oxygen, nitrogen, sulphur and mineral impurities. The presence of hydrogen, which cannot be measured accurately in the above analysis, was indicated in the NMR spectra from all carbon species (Fig. S1a) and by proximate analysis. Spatial distribution of the 'Carbyne' phase: Fig. S3b shows an HRSEM image of the 400 °C char showing the spatial distribution of cavities containing the carbyne phase (Fig. 4F). Several such cavities were found distributed across the specimen and were not localized in a specific area. This result on the spatial distribution of the 'carbyne' phase is in good agreement with the epitaxial mapping results from x-ray diffraction (Fig. S2).

Figure S3b | HRSEM images for 400 °C chars indicating the spatial distribution of cavities and the carbyne phase across the specimen.
2018-04-03T03:55:53.307Z
2017-12-04T00:00:00.000
{ "year": 2017, "sha1": "01a864f6870a9064cef493ec696ff6e260aa3196", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-017-17240-1.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "01a864f6870a9064cef493ec696ff6e260aa3196", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
256054632
pes2o/s2orc
v3-fos-license
Effect of Diurnal Variation of Heart Rate and Respiratory Rate on Activation of Rapid Response System and Clinical Outcome in Hospitalized Children Heart rate and respiratory rate display circadian variation. Pediatric single-parameter rapid response system is activated when heart rate or respiratory rate deviate from age-specific criteria, though activation criteria do not differentiate between daytime and nighttime, and unnecessary activation has been reported due to nighttime bradycardia. We evaluated the relationship between rapid response system activation and the patient’s clinical outcome by separately applying the criteria to daytime and nighttime in patients < 18. The observation period was divided into daytime and nighttime (8:00–20:00, and 20:00 to 8:00), according to which measured heart rate and respiratory rate were divided and rapid response system activation criteria were applied. We classified lower nighttime than daytime values into the ‘decreased group’, and the higher ones into the ‘increased group’, to analyze their effect on cardiopulmonary resuscitation occurrence or intensive care unit transfer. Nighttime heart rate and respiratory rate were lower than the daytime ones in both groups (both p values < 0.001), with no significant association with cardiopulmonary resuscitation occurrence or intensive care unit transfer in either group. Heart rate and respiratory rate tend to be lower at nighttime; however, their effect on the patient’s clinical outcome is not significant. Introduction Heart rate (HR) and respiratory rate (RR) are key indicators of the patient's physiological status: patients generally present with changes in vital signs before physiological deterioration. Therefore, HR and RR are used as evaluation factors in most pediatric screening systems, such as pediatric advanced life support (PALS), advanced pediatric life support (APLS), rapid response system (RRS), and pediatric early warning score (PEWS) to identify patients who need intervention [1][2][3][4][5][6][7]. On the other hand, it is well known that HR and RR may have circadian variations under the influence of the autonomic nervous system. Indeed, some studies report lower HR and RR at nighttime [6][7][8][9][10][11]. However, the criterion of HR or RR by age used in the abovementioned screening systems does not differentiate between daytime and nighttime [3][4][5][6][7]. However, due to their characteristics, RRS and PEWS are applied not only during the daytime but should also be applied at nighttime. Thus, false positives may be recorded since the HR or RR is normally lowered at nighttime, which can be interpreted as a patient's worsening. While PEWS scores and evaluates various parameters, RRS relies on a single parameter, providing early warning only based on an outlier of one factor (HR or RR), whereby the probability of false positives may be higher. In one retrospective study, in the case where RRS was activated only with bradycardia, no significant association was found with the deterioration of the patient. This study also reported more cases of RRS activation due to nighttime bradycardia than during the daytime, suggesting that attention should be paid to the application and evaluation of vital sign monitoring according to the time period [12]. 
However, studies so far have only focused diurnal variation in HR and RR or the underlying mechanisms [8][9][10][11], none of them investigating the relationship between the differences in vital signs between daytime and nighttime and the related clinical outcomes. Therefore, this study aimed to obtain the distribution of HR and RR for nighttime and daytime to investigate the clinical significance of the changes in HR or RR meeting the RRS activation criteria. Study Setting and Design This retrospective study was conducted at a tertiary care children's hospital with 350 beds. Children under the age of 18 who were admitted to the general ward at the children's hospital from January 2019 to December 2020 were included the study. Among them, patients diagnosed with cardiovascular disease or pulmonary disease that could affect HR or RR were excluded from the analysis. Since body temperature is also known to affect HR and RR, cases with a body temperature of <36 • C or ≥38 • C at the time of vital sign measurement were excluded [13]. Among the HR and RR measurements, those obtained in the intensive care unit (ICU), operating room, or emergency department, but not in the general ward, were excluded from the analysis. The source of the data is the clinical data warehouse of the hospital information system, which was accessed following the deliberation of the hospital's institutional review board (IRB). In addition, as this study analyzed de-identified data, the requirement for written consent was waived from the IRB (2112-151-1285). We collected data on patient gender, HR, RR, body temperature, measurement time, activation of RRS, cardiopulmonary resuscitation (CPR) occurrence, and transfer to ICU. As a pre-processing of the collected data, observations considered to be non-physiologic data (with HR > 300 beats/min, HR < 30 beats/min, RR > 120 breaths/min, or RR < 5 breaths/min) were excluded. The daytime period for measuring vital signs was defined from 8:00 to 20:00, and the nighttime period as that from 20:00 to 8:00 (of the next day). A patient is repeatedly measured multiple times due to the characteristics of vital signs; however, all individual measurements were not used in the analysis, to avoid bias in the results attributable to differences in the number of times the vital signs were measured in some patients compared with others. Each patient's hospitalization period was defined as an 'individual hospitalization unit', and the mean of HR and RR for each unit was used for analysis. The confounding effect of the variability of subjects with respect to their daytime vs. nighttime difference in HR and RR was eliminated using only the daytime-nighttime pair of measurements of the same patient's individual hospitalization unit in the analysis. Thus, we excluded cases with HR or RR measured during the daytime, but not measured at nighttime, or vice versa. RRS Activation Criteria and Classification of Abnormalities in Measurements Since October 2010, this institution has employed the RRS system by slightly modifying the system introduced by Tibballs et al. [12,14,15]. In this study, the original RRS criteria were used to evaluate whether they corresponded to the activation criteria of HR and RR, and each measured value was classified into one of the three categories: low, normal, and high. 
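The pairing step described above (splitting measurements at 8:00/20:00, averaging per individual hospitalization unit, and keeping only admissions measured in both periods) can be sketched with pandas as follows. The study's processing was done in R and its clinical data warehouse; the column names and toy records below are assumed purely for illustration.

```python
import pandas as pd

# Toy vital-sign records; admission_id, measured_at, hr and rr are assumed
# column names, not the study's actual schema.
vitals = pd.DataFrame({
    "admission_id": [1, 1, 1, 2, 2, 3],
    "measured_at": pd.to_datetime([
        "2020-03-01 09:00", "2020-03-01 21:30", "2020-03-02 02:00",
        "2020-03-05 10:15", "2020-03-05 14:45", "2020-03-07 23:10",
    ]),
    "hr": [110, 95, 92, 128, 120, 88],
    "rr": [24, 20, 19, 30, 28, 18],
})

# Daytime is 8:00-20:00; everything else counts as nighttime.
hour = vitals["measured_at"].dt.hour
vitals["period"] = ((hour >= 8) & (hour < 20)).map({True: "day", False: "night"})

# One mean HR/RR per admission and period (the "individual hospitalization unit").
means = (vitals.groupby(["admission_id", "period"])[["hr", "rr"]]
               .mean()
               .unstack("period"))

# Keep only admissions measured in BOTH periods, as in the paired analysis,
# then form the nighttime-minus-daytime difference.
paired = means.dropna().copy()
paired[("hr", "diff")] = paired[("hr", "night")] - paired[("hr", "day")]
paired[("rr", "diff")] = paired[("rr", "night")] - paired[("rr", "day")]
print(paired)
```

In this toy example only the first admission has both daytime and nighttime measurements, so the other two are dropped, mirroring the exclusion of unpaired hospitalization units described above.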
In addition, we identified a 'decreased group' and an 'increased group' according to whether the values measured for nighttime were lower or increased compared to those for daytime: the values classified as low in the nighttime but not low in the daytime (normal or high) were included in the decreased group, and those classified as high in the nighttime but not high (normal or low) in the daytime were included in the increased group. HR difference was obtained by subtracting daytime HR from nighttime HR, and RR difference was obtained in the same way. In addition, we defined CPR occurrence or transfer to ICU as a critical event to represent the patient's clinical outcome, and attempted to analyze its relevance according to the classification group of measurements. Outcomes The primary outcome was to evaluate the association of critical events (CPR occurrence or transfer to ICU) according to the HR and RR groups (increased group or decreased group). The secondary outcomes were HR and RR differences, and distribution and centiles of HR and RR by daytime and nighttime. In addition, since HR and RR have different normal ranges depending on age, the meaning of z-scores by age may be different than that of the respective individual measurements. Thus, the distribution and difference of z-scores were also analyzed. Statistical Analyses Categorical variables were expressed as numbers and percentages, continuous variables were expressed as mean (standard deviation [SD]) if they followed normal distribution, and non-parametric variables were denoted by median (interquartile range [IQR]). The association with the critical events of each group was analyzed using the mixed effect logistic regression model. The results were expressed as odds ratio (OR) and 95% confidence interval (CI). Paired t-test or Wilcoxon signed-rank test was used for comparison of daytime and nighttime measurements depending on normality. The Shapiro-Wilk test was used for the normality test. For the calculation of z-scores by age for HR and RR, the Lambda-Mu-Sigma method and the Box-Cox power exponential distribution were used based on the data derived from our previous study that presented centiles of HR and RR as a nationwide study [16]. In this process, the generalized additive model for location, scale, and shape package, and super imposition by translation and rotation growth curve analysis package were used [17,18]. R software version 4.2.1 was used for all data processing and statistical analyses. p values < 0.05 were considered statistically significant. Clinical Characteristics of the Patients There were 11,890 hospitalizations in a total of 3824 patients, and 190,133 vital signs were measured. The 9778 individual hospitalization units in a total of 3633 patients were finally used for analysis after applying the exclusion criteria. The age was 7.1 (2.7-12.2) (median [IQR]) years old, and 4394 (44.9%) were female patients. Among the individual hospitalization units, the RRS was activated in 1021 (10.4%) cases, 541 (5.5%) cases were transferred to the ICU, and CPR occurred in 69 (0.7%) cases. Other demographic data are detailed in Table 1. As for the underlying disease of patients, hemato-oncologic disease was the most common with 5665 (21%) patients, followed by congenital and genetic disease and neurologic disease, with 4670 (17.3%) and 2126 (7.9%) patients, respectively (Supplementary Table S1). Distributions of HR and RR according to time period are shown in Supplementary Figure S1. 
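The age-specific z-scores described in the statistical analysis use the Lambda-Mu-Sigma (LMS) transformation, in which a measurement is converted to a z-score from the age-specific skewness (L), median (M) and coefficient of variation (S) of the reference centiles. A minimal Python sketch is below; the analysis itself was run in R, and the LMS parameter values here are hypothetical, not taken from the cited nationwide reference.

```python
import math

def lms_z_score(x, L, M, S):
    """Z-score from Lambda-Mu-Sigma (LMS) reference parameters.

    x is the measured value (e.g. heart rate); L, M and S are the
    age-specific skewness, median and coefficient of variation.
    """
    if abs(L) < 1e-12:
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)

# Hypothetical LMS parameters for one age band: median HR 100 bpm,
# coefficient of variation 0.12, slight negative skew.
print(lms_z_score(118, L=-0.3, M=100.0, S=0.12))  # ~ +1.3
```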
Main Outcomes CPR occurrence and ICU transfer were analyzed via group-specific critical event analysis, which was the primary outcome of this study. RRS activation was also analyzed. In the case of CPR incidence, the results in all four groups (decreased HR group, decreased RR group, increased HR group, and increased RR group) were not statistically significant. For this analysis, a mixed effect logistic regression model was used, with the non-occurrence of each event as the reference. OR = odds ratio, HR = heart rate, RRS = rapid response system, CPR = cardiopulmonary resuscitation, ICU = intensive care unit, RR = respiratory rate, NA = not applicable.

The HR difference and RR difference are shown in Figure 1. The values in the young age part of these scatter plots were widely distributed, showing a trend of narrowing as the age increased. As a result of regression analysis, both the SD of the HR difference (p < 0.001) and the SD of the RR difference (p < 0.001) showed a statistically significant decrease with increasing age (Supplementary Figure S2A,B). Additionally, the centile curves and charts of HR and RR by time period are presented in the supplementary information (daytime HR centile curve and chart: Supplementary Figure S3 and Table S2; nighttime HR: Supplementary Figure S4 and Table S3; daytime RR: Supplementary Figure S5 and Table S4; and nighttime RR: Supplementary Figure S6 and Table S5).

Figure 1. The difference of each measurement was defined as the nighttime measurement minus the daytime measurement. Daytime was defined as 8:00 to 20:00, and nighttime as 20:00 to 8:00 the next day. HR = heart rate, RR = respiratory rate.

As for the z-score by age according to the time period, daytime showed higher results than nighttime for both HR and RR (both p values < 0.001) (Figure 2). HR and RR differences according to underlying disease are shown in Supplementary Table S6.

Figure 2. Distribution of z-scores by age for heart rate and respiratory rate by time period. The z-score by age was calculated based on the distribution of heart rate and respiratory rate derived from an existing nationwide study [19]. Daytime was defined as 8:00 to 20:00, and nighttime as 20:00 to 8:00 the next day. The Wilcoxon signed-rank test was used for comparison by paired group. * Median (IQR), IQR = interquartile range, Q1 = 1st quartile, Q3 = 3rd quartile.

When the RRS activation criteria were applied to HR and RR by time period, daytime HR was 0.8%, 98.8%, and 0.4% for low, normal, and high, respectively, versus 3.2%, 96.6%, and 0.2% at nighttime; a higher proportion of low HR was seen at night (Figure 3A,B). On the other hand, RR was 0.0%, 99.5%, and 0.4% at daytime versus 0.0%, 99.6%, and 0.4% at nighttime, showing similar results (Figure 3C,D). In the same way, the distributions to which the APLS and PALS criteria were applied instead of the RRS activation criteria are shown in Supplementary Figures S7 and S8, respectively.

Figure 3. Blue dots indicate daytime (from 8:00 to 20:00) measurements, and red dots indicate nighttime (from 20:00 to 8:00 the next day) measurements. The solid black line represents the age-specific criteria for RRS activation [12], and the percentages represent the proportions of the measured values in each range. HR = heart rate, RR = respiratory rate, RRS = rapid response system.

Discussion We conducted this study to evaluate the clinical significance of RRS activation due to diurnal variation in HR or RR and found that neither the decreased group nor the increased group had a significant effect on CPR occurrence or ICU transfer. This result allowed us to consider several interesting points. There was a statistically significant difference in HR and RR between daytime and nighttime, specifically a decline at nighttime compared to daytime. This is consistent with the results of several previous studies on the diurnal variation in vital signs [8,11,14,20,21].
One study suggested that the circadian rhythm of HR was related to autonomic nervous activity, such as the relative degree of sympathetic tone or parasympathetic tone [8]. Another study on vital sign abnormalities in children showed similar results to the above study, suggesting that vagal slowing and sleep can be the cause of bradycardia or bradypnea [14]. However, one interesting fact is that, due to the diurnal variation of vital signs, if the RRS activation criteria are met only at nighttime, the possibility that this identifies a condition becoming life-threatening is low. Of course, this does not mean that HR or RR outside the normal range are not clinically significant. However, if it only deviates from the normal range at night due to physiologic diurnal variation, it gives only a small hint as to how a primary physician facing this case should respond. This also has implications for the application of RRS activation criteria during nighttime. Haines et al. indicated that bradycardia alone was not a specific marker for serious illness due to its low specificity [15]. Another study on pediatric RRS indicated the limitations of using bradycardia as a single parameter, and suggested the need for other co-parameters to activate RRS [12]. On the other hand, in contrast to these results for bradycardia, another previous study reported that 25% of patients admitted to ICU had tachypnea [16]. However, care should be exercised in interpreting this study, because it literally reports the rate of tachypnea among patients admitted to the ICU, and not an analysis of the relationship between ICU admission and tachypnea, since the latter was seen only at night while RR was normal during the day.

On the other hand, the SD of the difference in HR or RR was higher at a younger age and decreased with increasing age (Supplementary Figure S2). This means that younger pediatric patients show an inherently greater variation in HR and RR, thus suggesting that there may be limitations in interpreting and evaluating the patient's condition with only one change in HR or RR in younger children, such as infants. Although it is true that the median of HR or RR is higher at a young age [19,21,22], this may not mean that the lower limit of HR or RR should be higher with younger age. This is because the difference in HR or RR increases significantly as the age decreases. Therefore, caution is required when interpreting vital signs, especially in young children.

Single-parameter RRS (used in this study) triggered by meeting the criteria for only one parameter may be more vulnerable to diurnal variation of vital signs compared to PEWS, which is scored by a combination of several parameters [12]. However, even in PEWS, since diurnal variation is not considered in the criteria of HR or RR, the same intrinsic limitations exist as in RRS, although their effect is smaller. Nevertheless, we do not think that it would be best to present differentiated reference ranges according to daytime and nighttime to apply RRS or PEWS. The accuracy would be slightly increased; however, we need to consider the improvement in the quality of medical care in terms of the overall cost and benefit of medical resources. Nevertheless, we believe that these individual studies, such as ours, can be gathered to create medical evidence and ultimately be used as a foundation for medical development.
There have been several studies that tried to derive centile curves of HR and RR based on actual evidence [19,[21][22][23][24], revealing how different actual evidence is from the age criteria used in PALS and APLS [19,21]; however, the criteria for HR and RR of PALS were not changed following such evidence. All the same, these studies are not meaningless at all, as they broaden the medical knowledge base and solidify the thinking of researchers; we do hope that our research work will contribute along this line. A key strength of our study is that we analyzed the distribution of HR and RR according to the time period. Previous studies mentioned diurnal variations in vital signs, though not reflecting their effect on reference ranges. To the best of our knowledge, this is the first study to analyze each distribution by daytime and nighttime period, and to evaluate the clinical significance of changes in vital signs in children. This study has several limitations. First, this was a single-center study reflecting the characteristics of the patients admitted to our hospital. It may be difficult to generalize our results to other centers. Second, we analyzed results by the average value of HR and RR for individual hospitalization unit. Since the analysis was conducted using representative values (average) rather than the individual values measured, there might have been a loss in the signal regarding the changes in individual vital signs. However, patients with a long hospital stay and a lot of vital sign measurements could have skewed the results if the analysis had been done using individual measured values, thus it was inevitable to use representative values in order to minimize such bias. Third, a more accurate analysis could have been possible if the severity of patients transferred to the ICU had been corrected through severity score, such as pediatric risk of mortality score or pediatric index of mortality-3 [25][26][27]. Finally, we divided the time period into daytime and nighttime, based on the 8:00 and 20:00 times, though there may be children who do not sleep during the nighttime, or who take a nap during the daytime. Although more interesting results may have been expected if actual sleep had been reflected, evaluating the actual sleep was difficult due to the retrospective nature of our study. Conclusions We evaluated the clinical significance of changes in HR and RR differentiating between daytime and nighttime and analyzed the association of the changes with CPR occurrence and ICU transfer, demonstrating that there was no clinically relevant association. However, due to the inherent limitations of a retrospective and single-center study, these results may be difficult to apply universally; thus, well-designed further studies are needed to confirm our findings. 
Supplementary Materials: The following supporting information can be downloaded at: https:// www.mdpi.com/article/10.3390/children10010167/s1, Table S1: Distribution of underlying diseases among study subjects; Table S2: Heart rate centiles by age during the daytime (from 8:00 to 20:00); Table S3: Heart rate centiles by age during the nighttime (from 20:00 to 8:00 the next day); Table S4: Respiratory rate centiles by age during the daytime (from 8:00 to 20:00); Table S5: Respiratory rate centiles by age during the nighttime (from 20:00 to 8:00 the next day); Table S6: Regression analyses between underlying diseases and z-scores differences of HR and RR by age; Figure S1: Scatter plot of vital signs by time period; Figure S2: SD changes of differences in vital signs by age; Figure S3: Centile curve of heart rate by age in daytime (from 8:00 to 20:00); Figure S4: Centile curve of heart rate by age in nighttime (from 20:00 to 8:00 the next day); Figure S5: Centile curve of respiratory rate by age in daytime (from 8:00 to 20:00); Figure S6: Centile curve of respiratory rate by age in nighttime (from 20:00 to 8:00 the next day); Figure S7: Centile curves for heart rate by age; Figure S8: Centile curves for respiratory rate by age. Informed Consent Statement: The requirement for informed consent was waived because this study was recognized by the Institutional Review Board as a minimal risk study analyzing deidentified data. Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available because it is the policy of the Institutional Review Board of Seoul National University Hospital to destroy the research data after a certain period of time. Conflicts of Interest: The authors declare no conflict of interest.
2023-01-22T05:14:55.058Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "c33f9716a3157ece59f8dd732b339a1056ff81f2", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-9067/10/1/167/pdf?version=1673706345", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c33f9716a3157ece59f8dd732b339a1056ff81f2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
1140300
pes2o/s2orc
v3-fos-license
Strong Correlation to Weak Correlation Phase Transition in Bilayer Quantum Hall Systems

At small layer separations, the ground state of a nu=1 bilayer quantum Hall system exhibits spontaneous interlayer phase coherence and has a charged-excitation gap E_g. The evolution of this state with increasing layer separation d has been a matter of controversy. In this letter we report on small system exact diagonalization calculations which suggest that a single phase transition, likely of first order, separates coherent incompressible (E_g>0) states with strong interlayer correlations from incoherent compressible states with weak interlayer correlations. We find a dependence of the phase boundary on d and interlayer tunneling amplitude that is in very good agreement with recent experiments.

The ground state of a two-dimensional monolayer electron system at Landau level filling factor ν = 1 is a single Slater determinant described exactly by Hartree-Fock theory and is a strong ferromagnet with a large gap E_g for charged excitations [1,2]. This elementary property has rich and interesting consequences for the physics of bilayer quantum Hall systems at the same total ν, consequences that are readily appreciated when a pseudospin language [3,1] is used to describe the layer degree of freedom. When the layer separation d goes to zero, interactions between layers are identical to interactions within layers. The pseudospin bilayer Hamiltonian is then identical to the single layer Hamiltonian with spin and its ground state has pseudospin order and a finite charge gap. For infinite layer separation, on the other hand, the bilayer system reduces to two disordered, compressible, uncorrelated ν = 1/2 systems. This Letter concerns the evolution of bilayer quantum Hall systems between these two extremes. For small layer separations the difference between interlayer and intralayer interactions breaks the pseudospin-invariance of the Hamiltonian, resulting in an incompressible easy-plane pseudospin ferromagnet. In physical terms the pseudospin order represents spontaneous phase coherence between the electron layers. Several scenarios have been proposed for the evolution of the ground state as the layer separation increases further. In Hartree-Fock theory [4], spontaneous interlayer coherence is lost if the layer separation exceeds a critical value, and the ground state at large separations consists of weakly correlated Wigner crystals. While possibly instructive, this picture is known to be incorrect at large d since half-filled Landau levels do not have crystalline ground states. Working in the other direction, Bonesteel et al. started [5] from the composite fermion theory of isolated compressible ν = 1/2 layers, and concluded that coupling would lead to pairing between composite fermions in opposite layers and also, implicitly, to a charge gap. Since the pseudospin ferromagnet possesses particle-hole rather than particle-particle pairing, however, this picture still implies that at least one phase transition occurs as a function of layer separation. In a numerical diagonalisation study He et al. [6] predicted, on the basis of the system parameter dependence of overlaps between exact groundstates and two different variational wavefunctions, the existence of two distinct incompressible states separated by a region of compressible states.
Experiments, on the other hand, have tended to be consistent [7] with the proposal [3] that a single phase transition from an incompressible to a compressible state occurs with increasing layer separation at any value of the interlayer tunneling amplitude. Very recently, in an intriguing new experiment by Spielman et al. [8], the tunneling conductance across the layers was studied in a sample with extremely small tunneling amplitude. When the ratio of layer separation and magnetic length was lowered (at fixed filling factor) below a critical value, the conductance showed a very pronounced peak around zero bias voltage between the layers, which provides direct evidence [9] for spontaneous interlayer phase coherence. This is because in the coherent state the layer index of each electron is uncertain, and only in this case can tunneling leave the system in or near its ground state, so that there is no orthogonality catastrophe and tunneling can occur at zero voltage. Since the critical layer separation found by Spielman et al. is close to the one obtained earlier by Murphy et al. for the onset of the quantum Hall effect [7], experiment demonstrates that for vanishing tunneling amplitude the phase transitions at which pseudospin order and the charge gap are lost are either closely spaced or coincident. In this Letter we report on small system exact diagonalization calculations which strongly suggest that bilayer quantum Hall systems have a single phase transition, likely of first order, as a function of d. Our critical layer separation is in very good quantitative agreement with the value measured in Ref. [8]. In the light of the experimental results mentioned above, our calculations imply that the charge gap disappears and long-range phase coherence simultaneously drops sharply to near zero at the phase transition. This result is not entirely unexpected, since a simple Landau-Ginzburg analysis indicates that the two order parameters could not vanish simultaneously without fine-tuning if the transition were continuous. Also, the mean-field theory energy gap is proportional to the pseudospin order parameter, suggesting that these two orders are mutually reinforcing and that a first order transition is therefore likely. Finally, we note that, experimentally, the charge gap phase transition is sharp even at finite tunneling between the layers. Since tunneling produces a pseudomagnetic field which couples to the pseudospin order parameter, this is an unusual magnetic transition which does not involve symmetry breaking, a fact which lends further weight to the suggestion that the transition is first order.

We analyse bilayer quantum Hall systems numerically by means of exact diagonalisations of finite systems using the spherical geometry. We have verified numerically that the ground state and low-lying excitations are fully spin-polarized and neglect the spin degree of freedom in the present discussion. The Hamiltonian is given by H = H_Coul + H_1P, where H_Coul represents the usual Coulomb interaction within and between layers, and the single-particle Hamiltonian H_1P is given by H_1P = −(1/2) Σ_m c†_{µ,m} (∆_t τ^x + ∆_v τ^z)_{µµ'} c_{µ',m}. We concentrate here on the tunneling amplitude (∆_t) tuned phase transition, although bias voltage (∆_v) dependence is also interesting and often experimentally more convenient. µ, µ' ∈ {+, −} run over the layer (or pseudospin) indices and a summation convention is implicit; τ are the pseudospin Pauli matrices. m ∈ {−N_φ/2, ..., N_φ/2} is the z-projection of the orbital angular momentum of each electron in the lowest Landau level, where N_φ is the number of flux quanta penetrating the sphere.
In the following we denote the pseudospin operators by T = (1/2) Σ_m c†_{µ,m} τ_{µµ'} c_{µ',m}. The interlayer separation d is measured in units of the magnetic length l_B = (ħc/eB)^{1/2}, and all energies are given in units of the Coulomb energy scale e²/εl_B. We consider the case of zero well width to enable comparison with most previous theoretical investigations [10][11][12][13], and also systems consisting of two rectangular wells of finite width w [14] whose ratio to the center-to-center layer separation d is w/d = 0.65. This value corresponds to the sample used in Ref. [8]. We consider systems with an even electron number N, which leads to a nondegenerate spatially homogeneous ground state with total angular momentum L = 0.

For simplicity, let us first examine the case of vanishing bias voltage, where both ⟨T_y⟩ and ⟨T_z⟩ are strictly zero. Figure 1 shows the interlayer phase coherence as measured by the expectation value ⟨T_x⟩, along with the fluctuation ∆T_x = ⟨T_x²⟩ − ⟨T_x⟩⟨T_x⟩, as a function of the tunneling gap for a system of twelve electrons, a layer separation of d = 1.80, and zero well width. At ∆_t = 0, ⟨T_x⟩ is necessarily zero in a finite system. With increasing tunneling gap, ⟨T_x⟩ grows rapidly, reaching an inflection point with a very steep tangent. The differential pseudospin susceptibility, χ = (1/N) d⟨T_x⟩/d∆_t, is plotted in the inset and shows a very pronounced peak. In the immediate vicinity of this peak, the pseudospin fluctuation ∆T_x also has a pronounced maximum. In figure 2, χ is plotted for different numbers of electrons. The rapid growth with increasing system size of the peak in this generically intensive quantity is strong evidence for a ground state phase transition. Analogous findings are obtained for the peak in the pseudospin fluctuation. Thus, the peaks in the susceptibility of the pseudospin and its fluctuation grow very rapidly with increasing system size and signal a quantum phase transition at the critical value of the tunneling gap. At large tunneling the system pseudospin magnetisation is close to its maximum value, while at small (but also finite) tunneling the system is disordered and the pseudospin magnetisation is strongly reduced by interactions. The two peaks described above occur at extremely nearby values of ∆_t at a given layer separation d, and we consider the very tiny differences in their location as a finite-size effect.

To estimate the phase diagram of the system we place the phase boundary at the maximum of the quantum fluctuations ∆T_x. Figure 3 shows the resulting phase boundaries for different system sizes and both cases of well width. At small layer separation the system is in the ordered phase and the fluctuation peak occurs exactly at ∆_t = 0. At a critical layer separation d_c(∆_t = 0, N) the phase boundary moves out rapidly to finite values of ∆_t and intersects the axis ∆_t = 0 with an almost horizontal tangent. This is in qualitative agreement with earlier experimental [7] and theoretical [3] estimates of the phase diagram. The critical values d_c(∆_t = 0, N) form a rapidly converging data sequence and are plotted in figure 4. These finite-size data are accurately and consistently described by an ansatz of the form d_c(N) = α + βN^(−λ) with two fit parameters α = d_c(N = ∞), β, and a shift exponent λ.
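Before quoting the fitted values, here is a minimal sketch of how such a finite-size extrapolation can be carried out with a standard nonlinear least-squares fit. The particular N and d_c values and the starting guesses below are made-up placeholders, not the data of figure 4.

import numpy as np
from scipy.optimize import curve_fit

def ansatz(N, alpha, beta, lam):
    # d_c(N) = alpha + beta * N**(-lam), with alpha = d_c(N -> infinity)
    return alpha + beta * N ** (-lam)

# Hypothetical finite-size data (electron number N, critical separation d_c in units of l_B).
N = np.array([6.0, 8.0, 10.0, 12.0])
d_c = np.array([1.42, 1.35, 1.32, 1.31])

popt, pcov = curve_fit(ansatz, N, d_c, p0=[1.3, 10.0, 3.0], maxfev=10000)
alpha, beta, lam = popt
alpha_err = np.sqrt(pcov[0, 0])
print(f"d_c(N=inf) = {alpha:.2f} +/- {alpha_err:.2f}, shift exponent lambda = {lam:.1f}")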
The best fits to both sets of data are obtained for λ = 5.0 ± 0.2, leading to a value of d_c(N = ∞) = 1.30 ± 0.03 for zero well width, and d_c(N = ∞) = 1.81 ± 0.03 for w/d = 0.65. The latter value is in excellent agreement with the results of Ref. [8], where the onset of the tunneling conductance peak is observed at a layer separation of d = 1.83. Thus, our numerical results clearly indicate that the findings of the above tunneling experiments are the signature of a quantum phase transition. The very large value of λ seems inconsistent with a diverging correlation length and suggests the transition is first order. A first order phase transition would explain the apparent coincidence of the appearance of spontaneous phase coherence and the quantum Hall effect in experiment [7,8]. We note that our result for the critical layer separation at vanishing tunneling gap agrees reasonably, at zero well width, with the point at which the uniform density phase coherent state first becomes unstable in the Hartree-Fock approximation [3]. At larger w, however, the Hartree-Fock estimates clearly deviate from the exact diagonalisation result.

In order to further investigate the order of the quantum phase transition, we introduce the ratio ω_N of the pseudospin fluctuation ∆T_x to the pseudospin susceptibility, where the subscript N refers to the system size. As we discuss below, this type of ratio should prove to be a powerful general tool in the analysis of any quantum phase transition. In classical physics this ratio of fluctuation to susceptibility is equal to the thermal energy k_B T and vanishes at T = 0. The classical relationship does not apply here since the Hamiltonian fails to commute with its derivative with respect to ∆_t. There is, however, a closely related zero-temperature relationship with the typical excitation energy ω_N taking over the role of temperature. The fluctuation can be written as ∆T_x = Σ_{n>0} |⟨n|T_x|0⟩|², where the sum is performed over all excited states, while the derivative of the pseudospin magnetisation follows from linear response theory as a sum of the same matrix elements weighted by the inverse excitation energies 1/(E_n − E_0). From these expressions we see that ω_N is a harmonic average of excitation energies (E_n − E_0), weighted by the factors |⟨n|T_x|0⟩|². In particular, ω_N has a vanishing thermodynamic limit if at least one state with a nonvanishing matrix element ⟨n|T_x|0⟩ has an excitation energy (E_n − E_0) which extrapolates to zero for N → ∞. Thus, ω_N defines a characteristic energy scale of the system at the phase boundary. The operator T_x naturally enters this expression since it couples to a control parameter driving the phase transition. For a continuous phase transition one would clearly expect ω_N to vanish at the phase boundary for an infinite system, while a finite limit lim_{N→∞} ω_N is indicative of a finite energy scale, i.e. a first order transition. From our finite-size data for ω_N (evaluated at vanishing tunneling and d = d_c(N)) we conclude that this quantity extrapolates for N → ∞ to a rather substantial non-zero value of order 0.05 e²/εl_B ∼ 5 K for both values of w considered here. Along with the arguments and experimental findings given so far, this result strongly suggests that the bilayer quantum Hall system at filling factor ν = 1 undergoes a single first order phase transition as a function of the ratio of layer separation and magnetic length at all values of the tunneling amplitude. The phase boundary separates a phase with strong interlayer correlation (and a finite gap for charged excitations) from a phase with weak interlayer correlations and vanishing E_g.
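The sum-rule structure behind ω_N can be illustrated numerically. The sketch below diagonalizes a small random Hermitian matrix standing in for the many-body Hamiltonian, with an equally arbitrary Hermitian matrix standing in for T_x; it only shows how the fluctuation, the linear-response weight, and their ratio (a weighted harmonic average of excitation energies) are assembled from eigenvalues and matrix elements. None of the matrices or normalization conventions correspond to the actual bilayer Coulomb problem.

import numpy as np

rng = np.random.default_rng(0)
dim = 40

def random_hermitian(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

H = random_hermitian(dim)      # stand-in many-body Hamiltonian (toy matrix)
Tx = random_hermitian(dim)     # stand-in pseudospin operator T_x (toy matrix)

energies, states = np.linalg.eigh(H)
ground = states[:, 0]
matrix_elements = states.conj().T @ (Tx @ ground)   # <n|T_x|0> for all n

weights = np.abs(matrix_elements[1:]) ** 2           # exclude the ground state n = 0
gaps = energies[1:] - energies[0]

fluctuation = weights.sum()                          # Sum_n |<n|Tx|0>|^2
response = (weights / gaps).sum()                    # Sum_n |<n|Tx|0>|^2 / (E_n - E_0)
omega = fluctuation / response                       # weighted harmonic average of the gaps
print(f"characteristic energy omega = {omega:.3f}")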
Finally we comment briefly on the influence of a bias voltage between the layers. When applying a bias voltage to the system, the vector T is tilted out of the xy-plane with a finite z-component. In this case we find numerically that the quantum phase transition is again signaled by the longitudinal fluctuation of the pseudospin magnetisation and its susceptibility, and all results concerning the phase boundary are qualitatively the same. First order phase transitions from strongly correlated to weakly correlated states also occur with increasing bias poten-
2017-08-26T11:49:20.335Z
2000-06-20T00:00:00.000
{ "year": 2000, "sha1": "f423abfe6d5557e9e3c9fe58983d799dbdda88db", "oa_license": null, "oa_url": "https://epub.uni-regensburg.de/28256/1/PhysRevLett.86.1849.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "23e97ce276f91fb303c7341a0790c43d041b9acd", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
263954327
pes2o/s2orc
v3-fos-license
Data in Brief

Dataset link: Proteomic analysis of human microglia cells (HMC3) stimulated with anti-inflammatory cytokines using serum-deprived culture conditions (Original data)
Dataset link: Proteomic analysis of human microglia cells (HMC3) stimulated with anti-inflammatory cytokines using serum-enriched culture conditions (Original data)
Dataset link: Anti-Inflammatory Cytokine Stimulation of HMC3 Cells: Proteome Dataset (Original data)

The immunoprotective functions of microglia in the brain are mediated by the inflammatory M1 phenotype. This phenotype is challenged by anti-inflammatory cytokines which polarize the microglia cells to an immunosuppressive M2 phenotype, a trait that is often exploited by cancer cells to evade immune recognition and promote tumor growth. Investigating the molecular determinants of this behavior is crucial for advancing the understanding of the mechanisms that cancer cells use to escape immune attack. In this article, we describe liquid chromatography (LC)-mass spectrometry (MS)/proteomic data acquired with an EASY-nanoLC 1200-Q Exactive™ Orbitrap™ mass spectrometer that reflect the response of human microglia cells (HMC3) to stimulation with potential cancer-released anti-inflammatory cytokines known to be key players in promoting tumorigenesis in the brain (IL-4, IL-13, IL-10, TGFB and MCP-1). The MS files were processed with the Proteome Discoverer v.2.4 software package. The cell culture conditions, the sample preparation protocols, the MS acquisition parameters, and the data processing approach are described in detail. The RAW and processed MS files associated with this work were deposited in the PRIDE partner repository of the ProteomeXchange Consortium with the dataset identifiers PXD023163 and PXD023166, and the analyzed data in the Mendeley Data cloud-based repository with DOI 10.17632/fvhw2zwt5d.1. The biological interpretation of the data can be accessed in the research article "Systems-Level Proteomics Evaluation of Microglia Response to Tumor-Supportive Anti-inflammatory Cytokines" (Shreya Ahuja and Iulia M. Lazar, Frontiers in Immunology 2021 [1]). The proteome data described in this article will benefit researchers who are either interested in re-processing the data with alternative search engines and filtering criteria, and/or exploring the data in more depth to advance the understanding of cancer progression and the discovery of novel biomarkers or drug targets. © 2023 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Specifications Table
Subject: Cell biology, proteomics.

Value of the Data
• The data described in this manuscript comprise eight proteome datasets of human fetal microglia cells (HMC3) grown in the presence and absence of anti-inflammatory cytokines, under serum-deprived and serum-rich culture conditions, enriched in nuclear and cytoplasmic cell fractions. The datasets provide a systems-level landscape of a microglia phenotype that can be used to gain insights into the immunosuppressive activities mounted by microglia in the presence of brain cancer.
• The HMC3 anti-inflammatory proteome profiles, the cell-membrane proteins that trigger signaling pathways in cells, and the biological processes that are associated with these proteins can be utilized in comparative studies that aim at characterizing microglia responses to different experimental conditions.
• Unlike earlier literature reports that present data describing the behavior of various types of microglia in a range of animal and macrophage research models, the present datasets describe a much less studied model system, i.e., the human fetal microglia.
• The HMC3 proteome profiles will benefit researchers who are interested in studying the behavior of immune cells, the molecular mechanisms that drive cancer development in the brain, and the cell-membrane protein networks that can facilitate the discovery of novel therapeutic targets.
• The MS/RAW files can be re-processed with other search engines that use different algorithms for peptide/protein identifications, or by using other human databases that contain protein isoforms or mutated sequences, to produce complementary results and provide additional insights into the behavior of HMC3 cells.
• The tandem MS data of previously un-identified peptides can be used in the generation of reference spectral libraries that have value in a variety of mass spectrometry applications (e.g., for targeted peptide identification and quantitation, data-independent analysis, etc.).

Objective
Mass spectrometry technologies were used to generate comprehensive proteome profiles of HMC3 cells reflective of how microglia are activated in response to anti-inflammatory cytokines released from cancer cells. The biological interpretation of results is described in a related research manuscript published in Frontiers in Immunology [1]. The data presented in this article include additional qualitative and quantitative details that will enable researchers to broaden the premise for the interpretation of results.

Data Description
The RAW, processed and analyzed proteomic datasets that are described in this manuscript comprise: (a) RAW and msf mass spectrometry files of HMC3 cells, nuclear and cytoplasmic fractions, generated from cells cultured in the presence and absence of fetal bovine serum (FBS), with or without treatment with anti-inflammatory cytokines (IL-4, IL-13, IL-10, TGFB) and chemokine MCP-1 (files shared in the PRIDE/ProteomeXchange repository). (c) Processed RAW files with Proteome Discoverer v.2.4/Sequest HT that include the information from (b), for proteins that contained at least two peptides and that passed the t-test for 2-fold change (FC) in PSM counts upon treatment with anti-inflammatory cytokines (increased or decreased) for serum-free and serum-treated cells (Supplemental files 2-5), and that align the proteins with increased/decreased spectral counts with controlled vocabulary terms (Supplemental file 6). (d) Lists of enriched (FDR ≤5 %) up- and down-regulated GO biological processes and pathways (KEGG, Reactome, Wiki) that were represented by the combined lists of proteins from (c) with increased (1296 proteins) and decreased (775 proteins) PSMs (Supplemental files 7 and 8). (e) Figures that visualize the experimental setup, the microglia functions in the brain, the qualitative results and reproducibility of proteomic data, and, for supporting the interpretation of the data, a summary of the main biological processes that were affected by the proteins that changed expression level or function in response to the treatment with cytokines: Fig. 1 provides a schematic of the experimental cell culture conditions; Fig. 2 highlights the main physiological roles of microglia in the brain, and therefore, the premise of the study; Fig. 3

Reagents and Materials
Cells (HMC3), EMEM, and PenStrep were purchased from ATCC (Manassas, VA).
Other media and reagents necessary for cell culture such as phenol red/glutamine-free MEM, L-glutamine, DPBS, and trypsin-EDTA were procured from Gibco (Gaithersburg, MD), and FBS from Gemini Bio Products (West Sacramento, CA). The recombinant human cytokines (IL-4, IL-10, IL-13, CCL2, TGF β1/TGF β2-both HEK293 derived) were purchased from Peprotech (Rocky Hill, NJ). Propidium iodide for performing FACS analysis was bought Invitrogen (Carlsbad, CA). Reagents for sample preparation such as CH 3 COOH, CF 3 COOH, NH 4 HCO 3 , urea, DTT, phosphatase inhibitors (Na 3 VO 4 , NaF), protease inhibitor cocktail (P8340), RNase, Triton X-100, and the nuclear and cytoplasmic cell extraction kit (Cell Lytic TM NuCLEAR TM ) were acquired from Sigma Aldrich (St. Louis, MO). Sequencing grade trypsin was from Promega (Madison, WI). BSA standards and the Bradford reagent were secured from Biorad (Hercules, CA). Pipette tips for sample cleanup (SPEC-PTC18 and SPEC-PTSCX) were purchased from Agilent technologies (Santa Clara, CA). HPLC-grade solvents for sample preparation (CH 3 OH, CH 3 CN) were supplied by Fischer Scientific (Fair Lawn, NJ) and ethanol by Decon Laboratories (King of Prussia, PA). Water, high-purity, for sample solution preparations and HPLC analysis, was prepared by distillation from de-ionized water, in-house. HMC3 Culture HMC3 cells were retrieved from liquid nitrogen, thawed, and cultured in EMEM with FBS (10 %) in an incubator with 5 % CO 2 atmosphere that was maintained at 37 °C. An outline of the cell culture conditions is provided in Fig. 1 [1] . Cell cultures that were used as control were either (a) starved of FBS for 48 h, or (b) starved for 48 h and then released with FBS (10 %) for 24 h. Stimulation with anti-inflammatory cytokines was performed for 24 h by using the following conditions: (a) HMC3 cells were first starved for 24 h in MEM (phenol red-free) supplemented with glutamine (2 mM), and then starved for another 24 h but with the cytokine cocktail added to the culture medium; and (b) HMC3 cells were starved for 48 h, then stimulated with cytokines in the presence of FBS (10 %) for 24 h. The concentration of cytokines used for stimulation was chosen based on literature reports [5][6][7] and was: IL-4 (40 ng/mL), IL-10 (20 ng/mL), IL-13 (20 ng/mL), TGF β1 (20 ng/mL), TGF β2 (20 ng/mL), and CCL2, i.e., MCP-1 (40 ng/mL). All cell culture media contained PenStrep (0.5 %) to prevent bacterial contamination. Cell harvesting was performed by trypsinization, and the cells were flash frozen at -80 °C until further processing. Data for serum starved cells (stimulated or non-stimulated with cytokines) were collected to enable an assessment of the HMC3 response to the cytokine treatment without interference from the FBS proteins. Three batches of cells, retrieved from liquid nitrogen and processed independently, were used as biological replicates for the control and the stimulated cells. FACS FACS analysis was performed with the FACSCalibur system (BD Biosciences, San Jose, CA) for assessing the cell cycle stage of the serum-starved and serum-treated cells. In preparation for analysis, the fresh cell cultures were fixed in 70 % EtOH, stained with propidium iodide (0.02 mg/mL) in a PBS solution containing RNase (0.2 mg/mL) and Triton X-100 (0.1 %), and then incubated at room temperature for 30 min. Cell Extract Preparation and Processing All cell states and treatments were fractionated into nuclear (N) and cytoplasmic (C) cellular subfractions. 
The manufacturer-recommended protocol was followed for cell lysis and processing. Hypotonic lysis buffer (10X) and Igepal from the Cell Lytic™ NuCLEAR™ extraction kit were used for lysing the cells, and the Extraction buffer for performing the extraction of nuclear proteins [8][9][10]. The cell and nuclear lysis reagents were supplemented with phosphatase (Na3VO4, NaF) and protease inhibitor cocktails. The concentration of protein extracts was determined with the Bradford assay. For MS analysis, 500 μg of protein extracts were first denatured and reduced at 57 °C, for 1 h, in the presence of urea (8 M) and DTT (5 mM). After 10-fold dilution with NH4HCO3 (50 mM), the samples were subjected to overnight enzymatic digestion with trypsin (50:1 protein/enzyme ratio) at 37 °C. The enzymatic reaction was quenched with glacial acetic acid (10 μL/mL proteolytic digest). Buffers and cell lysis components from the proteolytic digests were removed with the SPEC-PTC18/SCX sample clean-up cartridges, and the resulting peptides were re-suspended in a concentration of 2 μg/μL in a solution of H2O:CH3CN:CF3COOH (98:2:0.01). The samples were frozen at -80 °C until further analysis.

LC-MS Analysis
Nano-liquid chromatography mass spectrometry was performed with an Easy-nLC 1200 ultrahigh pressure LC system and a Q Exactive™ Hybrid Quadrupole-Orbitrap™ mass spectrometer (ThermoFisher Scientific, USA) interfaced via an EASY-Spray™ ion source [1]. The separations were performed on ES802A columns (250 mm long × 75 μm i.d.) packed with C18/silica particles (2 μm), with an eluent gradient of ∼2 h. The eluent flow was 250 nL/min, with solvent A being prepared from H2O:CH3 (122-127 min). The separation column was heated at 45 °C, ESI was established at 2 kV, and the MS ion inlet capillary was heated at 250 °C. The data were acquired by using a DDA approach, with MS acquisition occurring over a range of m/z = 400-1600. The MS acquisition parameters were pre-set to 70,000 resolution, AGC target 3E6, and IT maximum of 100 ms. The isolation window for the quadrupole filter was set to m/z = 2.4. Tandem MS was performed by using higher energy collision dissociation (HCD) with a normalized collision energy of 30 %, and MS2 spectra were acquired on the top 20 ions with a resolution of 17,500, AGC target 1E5, and IT maximum of 50 ms. Charge exclusion was enabled for +1 ions and ions with undetermined charge states, selecting ions with peptide-like isotope distribution, isotope exclusion on, apex trigger 1-2 s, minimum AGC trigger 2E3, and dynamic exclusion of 10 s.

MS/RAW Data Processing
The RAW mass spectrometry files were interpreted with the Proteome Discoverer (v.2.4) software package (ThermoFisher Scientific) by using the Sequest HT search engine and a UniProt Homo sapiens database [11] of 20,404 reviewed and non-redundant protein entries (2019). A target-decoy processing workflow was used for matching the experimental tandem mass spectra to the theoretical peptides generated from the UniProt database. The following parameter settings were used in the search: fully tryptic peptides with precursor masses of 400-5,000 Da and a minimum of 6 amino acids, maximum 2 tryptic missed cleavage sites, tolerances of 15 ppm for precursor ions and 0.02 Da for fragment ions, all b/y/a ions considered in the search, and dynamic modifications allowed on the N-terminus (42.0106 Da/acetyl) and methionine (15.9949 Da/oxidation).
For PSM validation, a forward/reverse concatenated database was used (maximum Delta Cn 0.05, maximum rank 1). Only rank 1 peptides were counted, and only for top scoring proteins, with the strict parsimony principle being enabled for the protein grouping node. The probability threshold for the peptide modifications was set to 75, and all peptide/protein level FDRs were set at confidence thresholds of 0.03 (relaxed)/0.01 (stringent).

Bioinformatics MS Data Analysis
A total of eight datasets were created, i.e., two cell states of serum-free and serum-treated cells (SF/ST), nuclear and cytoplasmic fractions (N/C), with cytokine (ck) stimulation or without stimulation (control). For each of the eight cell conditions, three LC-MS/MS technical replicates were generated, with results combined in one multiconsensus Proteome Discoverer v.2.4 report. Three biological replicates were created for all of the above to enable statistical evaluations of changes in protein expression (Fig. 3A). For a complete proteome profile, a combined multiconsensus report was generated from all 72 experimental RAW files (8 datasets x 3 technical replicates x 3 biological replicates) (Supplemental file 1). Lists of proteins that changed PSMs, reflective of differential protein expression between cytokine-treated vs. non-treated cells, were generated by processing and filtering the qualitative protein lists according to the following criteria: (a) The PSM count data for each protein were normalized based on the average of total spectral counts for each of the three biological replicates of cytokine-treated and non-treated (control) cell states that were compared; (b) One spectral count was added to each protein in each data set to account for missing values; (c) Only proteins that were identified by two distinct peptide sequences were accepted for quantitative comparisons; (d) A two-tailed t-test was performed for each protein to assess the statistical significance of the change in PSMs; (e) Proteins that displayed a 2-FC in spectral counts, i.e., with Log2(Cytokine-treated cells/Non-treated cells) ≥0.9 or ≤(-0.9), and that passed the significance threshold set by p-value < 0.05, were included in the final lists (Supplemental files 2-5). Controlled vocabulary terms from UniProt were used to extract Homo sapiens proteins with functionally related roles (Fig. 3B and Supplemental file 6). Enriched up-/down-regulated biological processes and pathways (FDR < 0.05) represented by the above lists of proteins with change in spectral counts (1296 proteins with increased PSMs, 775 proteins with decreased PSMs) were inferred based on GO [12], KEGG [13], REACTOME [14] and WIKI [15] bioinformatics tools enabled via the STRING [16] website (Supplemental files 7-8). The representation of the biological roles of microglia was created with tools provided by BioRender.com (Fig. 2). Bar- and bubble charts were generated with Microsoft Excel (Figs. 3 and 4).

Ethics Statement
Not applicable.

Data Availability
Proteomic analysis of human microglia cells (HMC3) stimulated with anti-inflammatory cytokines using serum-deprived culture conditions

Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
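To illustrate the spectral-count filtering workflow described in the Bioinformatics MS Data Analysis section above, the following is a minimal sketch, not the authors' pipeline. The column names, toy counts, and the exact normalization step are hypothetical placeholders; it simply applies a per-replicate scaling, a pseudocount, the two-peptide requirement, the |log2 FC| >= 0.9 cutoff, and a two-tailed t-test at p < 0.05.

import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

ctrl_cols = ["ctrl_1", "ctrl_2", "ctrl_3"]   # non-treated biological replicates (hypothetical names)
trt_cols = ["ck_1", "ck_2", "ck_3"]          # cytokine-treated biological replicates (hypothetical names)

df = pd.DataFrame({
    "protein": ["P1", "P2", "P3"],
    "peptides": [5, 2, 1],                    # number of distinct peptide sequences per protein
    "ctrl_1": [10, 40, 3], "ctrl_2": [12, 35, 2], "ctrl_3": [9, 42, 4],
    "ck_1": [25, 18, 3], "ck_2": [30, 20, 5], "ck_3": [28, 16, 2],
})

counts = df[ctrl_cols + trt_cols].astype(float)
# Simplified normalization: scale each replicate to the mean total PSM count,
# then add one pseudocount per protein to account for missing values.
norm = counts / counts.sum(axis=0) * counts.sum(axis=0).mean() + 1.0

log2fc = np.log2(norm[trt_cols].mean(axis=1) / norm[ctrl_cols].mean(axis=1))
pvals = pd.Series(ttest_ind(norm[trt_cols], norm[ctrl_cols], axis=1).pvalue, index=df.index)

hits = df[(df["peptides"] >= 2) & (log2fc.abs() >= 0.9) & (pvals < 0.05)]
print(hits["protein"].tolist())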
2023-07-22T15:18:10.451Z
0001-01-01T00:00:00.000
{ "year": 2023, "sha1": "26d61c233c170dcfa8c906b30e19066c7566f6f5", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.dib.2023.109433", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "26d61c233c170dcfa8c906b30e19066c7566f6f5", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
232338085
pes2o/s2orc
v3-fos-license
Evaluating and communicating hepatitis C cascades of care data in Tayside, Scotland: A journey towards elimination Chronic hepatitis C virus (HCV) is one of the leading causes of liver cirrhosis and hepatocellular carcinoma. The WHO 2030 Elimination Goals require each country to evaluate their response to their epidemics. This can be achieved by visualization of cascades of care, depicting how infected cases move through disease control stages. However, methods of displaying data are debated and lack practical application. This project proposes a new way of codifying and displaying HCV data using Tayside as a case study. 1464 cases of active HCV infections in Tayside from 2015 to 2019 were analysed from NHS Tayside’s HCV Database. Variables were evaluated to create a systematic coding framework that was then used to code each patient’s diagnosis, treatment and cure status each year from 2015 to 2019. Graphical representation of the data in the form of a stacked clustered bar chart demonstrates general trends and conversion rates. For example, Tayside has seen an increase in diagnosis‐to‐cure rates from 18% to 49% (2015‐2019). This method also demonstrates the portion of newly and previously diagnosed people accessing treatment, those with unsuccessful or incomplete treatments, completed treatments with unconfirmed cure, and the number of deaths and relocations. In conclusion, this project proposes a novel way of displaying cascades of care data that relays yearly snapshots of an epidemic, cumulative progression over time, nuanced information of each stage and progression towards elimination targets. This method can be meaningfully used to improve local service planning, knowledge exchange across health systems and reporting to bodies like the WHO. became the standard of care for many countries like Scotland, providing an easily administered oral option with over 95% efficacy across genotypes. 4 With the introduction of DAAs, eradication of the disease has become possible and the World Health Organization (WHO) has set Elimination Goals for 2030 tasking each health system with evaluating their own epidemic and progress towards said targets. These include a 90% diagnosis rate, 80% treatment rate of those eligible, 90% reduction in incidence and 65% reduction in mortality. 5 The WHO has highlighted priority actions for countries such as information gathering, prevention, national target setting and revising plans as necessary. 6 In order to monitor both the progression of an epidemic towards targets and utilize the metrics to make strategic healthcare provision decisions on the ground, data must be collected and communicated effectively. Insights about an infectious disease epidemic, like that of HCV, can be gained by analysing a health system's cascade of care (CoC) which depicts how infected cases move through the steps of effective disease control in a continuum of services. 7 CoC has been effectively used to identify gaps in HCV care, plan service delivery to intensify efforts in key areas, evaluate the progression of the epidemic and monitor health system effectiveness in order to improve the health and increase the treatment rates of those living with HCV. [8][9][10][11] While CoC is used around the globe, approaches to data coding, analysis and reporting vary. 
12 Hepatitis C specialist teams have debated the best ways to capture yearly snapshots of the current state of the epidemic, appreciate how the epidemic changes over time, emphasize the nuances at each stage of the cascade and compare how progress meets set targets. A standardized method would facilitate the comparison and knowledge exchange across time, settings, subpopulations and health systems. 6 In order to evaluate yearly snapshots of an epidemic, its cumulative evolution over time, nuance at each stage of the cascade and progression towards elimination targets, this paper proposes a novel, systematic way of codifying HCV cases along the proposed cascade of care: Diagnosis, Treatment and Cure. It also evaluates the hepatitis C cascade of care in Tayside, Scotland, to provide examples of insights gained from displaying data in this way.

| Coding framework
Using the variables to identify important statuses in the diagnosis-to-cure journey, a framework was created to codify each patient's Diagnosis, Treatment and Cure status at each point in time (Table 2). The coding framework was then used to codify each of the 1476 patients in our data set each year from 2015 to 2019 to produce an example stacked clustered bar chart. Details of the methodological nuances and special considerations that are important in replicating this method are detailed in Table 3.

| RESULTS
The coding framework was used on the Tayside data set to produce an example stacked clustered bar chart (Figure 2A) with the supporting raw data (Figure 2B). Similarly to previous methods used, this method of displaying a cascade of care communicates the basic disease control stages: Diagnosis, Treatment and Cure. It provides a cumulative overview of the epidemic's progression during a period of time while also allowing for yearly comparison of the disease control stages. Conversion rates, which indicate the percentage of people per year who move from diagnosis to treatment or cure, help to note overall progression towards defined targets and identify specific periods of success or struggle in the elimination efforts. In Tayside, diagnosis-to-treatment rates went from 19% (2015) to 50% (2019) and diagnosis-to-cure rates went from 18% (2015) to 49% (2019). Both diagnosis-to-treatment and diagnosis-to-cure rates saw the most significant rise from 2016 to 2017. The method also provides a variety of nuance and distinction. Fourthly, the portion of patients who have completed treatment but do not have a confirmed SVR is shown to provide information on patients that require follow-up. In Tayside, the service is still attempting to follow up 5% of patients to confirm their post-treatment SVR status. Lastly, deaths and patients that relocate are shown at each stage of the cascade. In Tayside, 4% of patients died and 2% moved each year.

FIGURE 1 Cohort selection based on inclusion/exclusion criteria

| DISCUSSION
The coding framework suggested in this paper could be applied across healthcare systems to add nuance to stages of the cascade and to standardize definitions. This could facilitate the comparison across settings and systems, providing a potential reporting method the WHO might suggest to countries tracking their epidemics. An example of how the framework can be practically applied to any system's HCV data can be found in Appendix 1. Having used the method to analyse the Tayside data, it was evident that the area has been successful at both managing and tracking the local HCV epidemic.
This may be attributed to their quick transition from interferon-based treatments to DAAs in 2015 and universal access to treatment provided for virtually all those infected. Tayside has utilized innovative testing and treatment pathways, some of which are located within community pharmacies and harm-reduction centres, to widen access to services at each stage of the cascade. [13][14][15] As in many settings, there are continuing challenges in providing services to those that are hardest to reach, such as those that frequently relocate or those that may not regularly engage with healthcare services. Beyond Tayside, the method proposed in this paper can be implemented with basic spreadsheet tools that have pivot table and graphing functions, although the method, as described here, has been created to meet Tayside's needs using their rich data collection.

TABLE 3 Methodological nuances and special considerations
Diagnosis: Reinfections, whether in the same year or in subsequent years, are coded separately as individual infections. Patients with a positive DBST and negative PCR taken on the same day are considered negative as PCR is a more specific test. The small number of patients that are diagnosed in Tayside, move away and return to Tayside at a later date with an active infection are first coded as "New Diagnosis" and upon return coded as "Diagnosis Carrier."
Treatment: Though rare, multiple treatments for the same infection, whether in the same year or in subsequent years, are coded separately as individual treatments. Patients coded as "Treatment Unsuccessful" or "Treatment Incomplete" without a confirmed SVR for one year are coded as "Previous Diagnosis" for the successive year as they had not cleared the virus and are still pending a successful treatment.

DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.
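As noted in the Discussion, the framework can be implemented with basic pivot-table tools. The following is a minimal sketch of that idea in script form; the status labels and toy records are simplified placeholders rather than the full coding framework of Table 2, and the grouping of statuses into "treated" and "cured" is an assumption made only for illustration.

import pandas as pd

records = pd.DataFrame({
    "patient": ["A", "A", "B", "B", "C", "C"],
    "year":    [2018, 2019, 2018, 2019, 2018, 2019],
    "status":  ["New Diagnosis", "Cure Confirmed",
                "Previous Diagnosis", "Treatment Completed",
                "New Diagnosis", "Previous Diagnosis"],
})

# Counts of each status per year (the segments of a stacked clustered bar chart).
summary = records.pivot_table(index="year", columns="status",
                              values="patient", aggfunc="count", fill_value=0)
print(summary)

# Yearly diagnosis-to-treatment and diagnosis-to-cure conversion rates.
diagnosed = summary.sum(axis=1)                          # everyone coded in a year is diagnosed
treated = summary.filter(regex="Treatment|Cure").sum(axis=1)
cured = summary.filter(regex="Cure").sum(axis=1)
rates = pd.DataFrame({"dx_to_tx": treated / diagnosed, "dx_to_cure": cured / diagnosed})
print(rates.round(2))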
2021-03-25T06:16:39.143Z
2021-03-24T00:00:00.000
{ "year": 2021, "sha1": "826b5fc22ac309d857f368440d0667e79e5796aa", "oa_license": "CCBYNC", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jvh.13505", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "94803d5ea364feb88902dec4c90073a8039dda4a", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
209519173
pes2o/s2orc
v3-fos-license
Trastuzumab-Targeted Biodegradable Nanoparticles for Enhanced Delivery of Dasatinib in HER2+ Metastasic Breast Cancer

Dasatinib (DAS) is a multikinase inhibitor that acts on several signaling kinases. DAS is used as a second-line treatment for chronic accelerated myeloid and Philadelphia chromosome-positive acute lymphoblastic leukemia. The therapeutic potential of DAS in other solid tumours is under evaluation. As for many other compounds, an improvement in their pharmacokinetic and delivery properties would potentially augment their efficacy. Antibody-targeted biodegradable nanoparticles can be useful in targeted cancer therapy. DAS has shown activity in human epidermal growth factor receptor 2 (HER2) positive tumors, so conjugation of this compound with the anti-HER2 antibody trastuzumab (TAB) with the use of nanocarriers could improve its efficacy. TAB-targeted DAS-loaded nanoparticles were generated by nanotechnology. The guided nanocarriers enhanced the in vitro cytotoxicity of DAS against HER2 human breast cancer cell lines. Cellular mechanistic and release studies and nanoparticle stability assessments were undertaken to provide evidence for positioning DAS-loaded TAB-targeted nanoparticles as a potential strategy for further development in HER2-overexpressing breast cancer therapy.

Introduction
The discovery of new targeted therapies is a main objective, particularly for diseases like cancer where mortality is observed in a high proportion of patients [1][2][3]. Protein kinases play a central role in the activation of oncogenic signaling pathways and some small kinase inhibitors can interfere with their activity. An example of this family of agents is the multikinase inhibitor dasatinib (DAS).

Characterization
1H nuclear magnetic resonance (NMR) spectra were recorded on a Varian Inova FT-500 spectrometer. Gel permeation chromatography (GPC) measurements were performed on a Polymer Laboratories PL-GPC-220 instrument equipped with a TSK-GEL G3000H column and an ELSD-LTII light-scattering detector. Field Emission Scanning Electron Microscopy (FE-SEM) images were recorded on a Jeol 7800 F electron microscope to study the particle size distribution and morphology of the nanoparticles. High-resolution electron microscope images were obtained on a Jeol JEM 2100 transmission electron microscope (TEM) operating at 200 kV and equipped with an Oxford Link EDS detector. As the specimens could be damaged under beam irradiation, observation was performed under low-dose conditions. The resulting images were analyzed using Digital Micrograph™ software from Gatan. The average sizes, polydispersities and Z-potentials of the formulations were measured using a Zetasizer Nano ZS (Malvern Instruments). Data were analyzed using the multimodal number distribution software included in the instrument.

Preparation of Nanoparticles (NPs)
Polylactide NPs. The NPs were prepared by the nanoprecipitation and solvent displacement method [24]. Briefly, 20 mg of PLA in 3 mL of tetrahydrofuran (THF) was added dropwise into 17 mL of polyvinyl alcohol (PVA) (0.2% aqueous solution) under vigorous stirring. The THF was evaporated under reduced pressure. After centrifugation at 14,000 rpm for 40 min, the NPs were collected.

Polyethyleneimine (PEI) coating NPs. 20 mg of PLA in 3 mL of THF was added dropwise into a 17 mL aqueous phase containing 0.5% w/w of PEI and 0.2% of PVA. The THF was evaporated under reduced pressure.
The particle suspension was centrifuged at 14,000 rpm for 40 min at 4 °C to collect the NPs. The suspension was separated into two Eppendorf tubes, one of them with 1 mL of phosphate-buffered saline (PBS) pH 7.4 and another with 1 mL PBS pH 5.8 for subsequent conjugation with TAB.

DAS-loaded NPs. The DAS-loaded NPs were prepared by the same methodology described above. Briefly, 20 mg of PLA in 3 mL of THF and 3 mg of DAS in 50 µL of DMSO were mixed to form the organic phase. The organic phase was subsequently added dropwise into 17 mL of PVA (0.2% w/w) aqueous solution under vigorous stirring. The THF was then evaporated under reduced pressure. The particle suspension was centrifuged at 14,000 rpm for 40 min at 4 °C to collect the NPs.

DAS-loaded PEI coating NPs. The DAS-loaded PEI coating NPs were prepared by the same method described above. Briefly, the organic phase was added dropwise into the aqueous phase, with 0.5% PEI in 17 mL of PVA 0.2% solution, under vigorous stirring. The THF was then evaporated under reduced pressure. The particle suspension was centrifuged at 14,000 rpm for 40 min at 4 °C to collect the NPs. The suspension was separated into two Eppendorf tubes, one of them with 1 mL of PBS pH 7.4 and another with 1 mL PBS pH 5.8 for subsequent conjugation with TAB.

TAB-conjugated NPs. The TAB was chemically conjugated to PEI coating NPs after activation [25]. Briefly, 40 mg of 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDC) and 9.7 mg of N-hydroxysuccinimide (NHS) were dissolved in 4 mL of PBS (0.1 M, pH 5.8), followed by the addition of 12 microliters of antiHER2 (21 mg mL−1 of TAB in 0.1 M PBS, pH 7.4). 1 mL of PEI coating NPs suspension in PBS pH 5.8 was added to the antibody solution and left at room temperature for 12 h. The suspension was centrifuged at 14,000 rpm for 40 min at 4 °C to remove the excess of EDC/NHS. The pellet of TAB-conjugated NPs was suspended in PBS pH 7.4. The standard Bradford assay protocol was employed for quantifying the concentration of the protein in the supernatant.

DAS-loaded TAB-conjugated NPs. The NPs were prepared by the same method described above. After TAB activation, 1 mL of DAS-loaded PEI coating NPs suspension in PBS pH 5.8 was added to the TAB solution and left at room temperature for 12 h. The suspension was centrifuged at 14,000 rpm for 40 min at 4 °C and the pellet was suspended in PBS pH 7.4. The standard Bradford assay protocol was employed for quantifying the concentration of the protein in the supernatant.

Stability of NPs in human serum. The stability of the NPs was evaluated in 10% serum. Briefly, the NPs were incubated at 37 °C at a concentration of 1 mg·mL−1. The hydrodynamic radius (RH) and polydispersity index (PdI) of the formulations were calculated at predetermined intervals of time by dynamic light-scattering (DLS) measurements.

Drug-release studies. To determine the amount of drug released from the respective nanoparticles, 10 mg of lyophilized nanoparticles were suspended in 25 mL of phosphate buffered saline (PBS pH 7.4) and sealed in a dialysis membrane (molecular weight cut off: 3500 Da). After incubation at 37 °C, 3 mL of release medium was taken out and replaced by fresh medium at certain intervals. After centrifugation, the released DAS concentration was measured in a spectrophotometer at 324 nm. The drug releases were tested in three replicates.

In Vitro Assays
Cell culture.
In Vitro Assays Cell culture. HER2+ BT474 and BT474-RH (TAB-resistant) cells and triple-negative MDA-MB-231 cells were grown in Dulbecco's modified Eagle medium (DMEM) supplemented with 10% inactivated fetal bovine serum. All cell lines used were provided by Drs. J. Losada and A. Balmain, who purchased them from the ATCC in 2015. Cell authenticity was confirmed by STR analysis at the molecular biology unit of the Salamanca University Hospital. The BT474-derived resistant cell line (BT474-RH) was obtained by exposure to trastuzumab for 6-8 months. All media were supplemented with 2 mM L-glutamine, penicillin (20 units/mL) and streptomycin (5 µg/mL). Cells were maintained at 37 °C in a saturated humidity atmosphere with 5% CO2.
Drug-to-antibody ratio study. To evaluate whether the TAB cargo influences the antiproliferative effect of the NPs, MTT assays were performed as described in the toxicity section. BT474 cells were plated in the same way and then treated with TAB-DAS-NPs (25, 50, 75, and 100 nM of DAS) carrying TAB cargos of 0.8, 1.6 and 3.2 nM.
Cell-cycle studies. BT474 and BT474-RH cells were seeded in 6-well plates (250,000 cells/well) and treated with DAS (100 nM) or TAB-DAS-NPs (100 nM of DAS) for 24 h. Non-treated cells were used as control. Cells were collected and fixed with 70% cold ethanol for 30 min at 4 °C. Then, cells were washed with PBS + 2% BSA and stained with propidium iodide/RNase staining solution (Immunostep S.L.). Results were analyzed on a FACSCanto II flow cytometer (BD Biosciences). The percentage of cells in each cell-cycle phase was determined by plotting DNA content against cell number using the FACSDiva software. We report the increment in the G0/G1 population after each treatment condition, with non-treated cells as control.
Apoptosis. BT474 and BT474-RH cells were seeded in 6-well plates (250,000 cells/well) and treated with DAS (100 nM) or TAB-DAS-(PEI)NPs (100 nM of DAS) for 72 h. Non-treated cells were used as control. Then, they were collected and stained in the dark with Annexin V-DT-634 (Immunostep S.L.) and propidium iodide (2 mg/mL) at room temperature for 1 h. Cell death was determined using a FACSCanto II flow cytometer (BD Biosciences). The cell population was divided into living cells (Annexin V and PI negative) and dead cells (Annexin V and/or PI positive). We report the increment in cell death (in arbitrary units) after treatment relative to non-treated control cells.
Toxicity in 3D structures. BT474 and BT474-RH cells (5000 cells) were resuspended in DMEM + 2% Matrigel (Sigma-Aldrich) and seeded on a 1 mm layer of Matrigel previously added to a 48-well plate. After 24 h of incubation, sphere cultures were treated with DAS (100 nM), TAB (50 nM), or TAB-DAS-(PEI)NPs (100 nM of DAS). Growth of the 3D cultures was monitored 72 h later by taking photographs with an inverted Nikon Eclipse TS100™ microscope (10×/0.25 objective). Sphere diameter was measured with ImageJ software. Sphere size is reported as the sphere diameter in arbitrary length units, with non-treated cells as control.
Statistical analysis. GraphPad Prism version 5 was used for statistical analysis. Data are expressed as mean ± s.e.m. from at least three independent experiments. An independent-samples test (non-parametric, one-tailed) or analysis of variance (ANOVA) with Newman-Keuls post-test was used to determine statistically significant differences between treatment conditions. The level of significance was set at 95%, so p values lower than 0.05 were considered statistically significant: * p ≤ 0.05; ** p ≤ 0.01 and *** p ≤ 0.001.
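The comparisons described above can be reproduced with standard scientific Python tooling. The snippet below is a minimal sketch using invented viability values; it uses SciPy's Mann-Whitney U test as one possible one-tailed non-parametric test and a one-way ANOVA. A Newman-Keuls post-test is not available in SciPy, so this is only an approximation of the GraphPad workflow described above.

```python
import numpy as np
from scipy import stats

# Hypothetical viability values (% of control) from three independent experiments.
control = np.array([100.0, 98.5, 101.2])
das = np.array([62.0, 58.4, 65.1])
tab_das_np = np.array([41.3, 38.9, 44.0])

# One-tailed non-parametric comparison (is treatment lower than control?).
u_stat, p_one_tailed = stats.mannwhitneyu(das, control, alternative="less")
print(f"DAS vs control: U = {u_stat:.1f}, one-tailed p = {p_one_tailed:.3f}")

# One-way ANOVA across the three conditions.
f_stat, p_anova = stats.f_oneway(control, das, tab_das_np)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
# A post-hoc test such as Tukey's HSD (statsmodels) could follow the ANOVA.
```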
Characterization of the NPs was carried out by dynamic light-scattering (DLS), field-emission scanning electron microscopy (FE-SEM) and TEM (Table 1 and Figure 2). DLS studies showed average particle sizes of the different formulations close to 120 nm, except for the DAS-loaded non-conjugated and conjugated NPs, which were slightly larger. The increase in average size is expected after PEI modification [26]. The TAB conjugation was confirmed by the decrease in the surface charge of the NPs (Z-potential) from +32 mV (DAS-(PEI)NPs) to +27.7 mV (TAB-DAS-(PEI)NPs). The final particle size of TAB-DAS-(PEI)NPs was 132.1 nm with a polydispersity index (PdI) of 0.189. TEM images show nanoparticles of approximately 120 nm which exhibit a core-shell morphology. Such a distribution is consistent with the PEI modification, which results in a 5 nm shell surrounding the PLA nanoparticles (see Figure 2b). After conjugation with TAB, the surface of the NPs is modified, and the interaction of the antibodies can be clearly observed as shown in Figure 2.
Table 1. Hydrodynamic diameter (nm), polydispersity index (PdI) and Z-potential of the different formulations obtained by dynamic light-scattering (DLS) measurements.
Loading efficiency (%LE) and encapsulation efficiency (%EE) of the DAS-loaded formulations are depicted in Figure 3. TAB-DAS-(PEI)NPs showed a high %EE of more than 90% with an active LE of 11.6% w/w. Higher %LE was obtained when compared with DAS-loaded magnetic micellar nanoparticles [15], and similar values to those reported for encapsulation in albumin micelles [17].
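Encapsulation efficiency (%EE) and loading efficiency (%LE) are conventionally computed from the drug initially added and the non-encapsulated drug recovered in the supernatant. The short sketch below illustrates that calculation in Python with hypothetical masses chosen only to be of the same order as the values reported above; it is not the authors' data or analysis script.

```python
# Hypothetical inputs - illustrative only.
drug_added_mg = 3.0           # DAS added to the organic phase
free_drug_mg = 0.25           # non-encapsulated DAS found in the supernatant
nanoparticle_yield_mg = 23.7  # total mass of recovered nanoparticles

encapsulated_mg = drug_added_mg - free_drug_mg

ee_percent = 100 * encapsulated_mg / drug_added_mg          # encapsulation efficiency
le_percent = 100 * encapsulated_mg / nanoparticle_yield_mg  # loading efficiency (w/w)

print(f"%EE = {ee_percent:.1f} %")   # fraction of the added drug that was entrapped
print(f"%LE = {le_percent:.1f} %")   # drug mass per mass of nanoparticles
```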
In vitro release of DAS from the DAS-loaded NPs was studied using the dialysis method at pH 7.4 to mimic the physiological pH of the circulation. The release mechanism for polymeric NPs typically follows a triphasic profile in which a first fast step corresponds to the "burst release", followed by a second step of diffusion through the pores and channels, and ending with a third, slower phase [27]. As illustrated in Figure 3, TAB-DAS-(PEI)NPs exhibited a controlled release of DAS, with a burst release of less than 15% at pH 7.4; a sustained drug-release profile was then achieved in which 60% of the DAS was released after 72 h. In the case of DAS-NPs, the release was nearly complete (92%) after 72 h under physiological conditions.
DAS-Loaded TAB-Conjugated NPs Display Potent and Selective Cytotoxicity in Breast Cancer Cells The in vitro cytotoxicity of the formulations was examined by MTT assay in two cell line models: BT474, a classic cell line that overexpresses HER2, and BT474-RH, the same cell line rendered resistant to TAB after long-term treatment with the antibody. The generation of these cell lines was described elsewhere [28]. Both models recapitulate human breast cancer in different clinical situations. BT474 and BT474-RH cells were treated at 50 and 100 nM for 72 and 120 h. Non-loaded NPs (NPs, (PEI)NPs, and TAB-NPs) did not display any significant cytotoxicity in tumoral cells, indicating an appropriate biosafety profile of the NPs (Figure 4a). An enhanced cytotoxicity of conjugated and non-conjugated DAS-loaded NPs was observed in both cell lines (Figure 4b). Figure 5 shows the effect of free DAS (IC50 ~100 nM at 72 h) and TAB-DAS-(PEI)NPs (IC50 ~50 nM at 72 h) in BT474 and BT474-RH cancer cells. Finally, the administration of TAB-DAS-(PEI)NPs was more active than administration of the single agents TAB or DAS at different time points (72 and 120 h), indicating the efficacy of the vectorized NPs.
We confirmed the effect of TAB-DAS-(PEI)NPs on cell viability in 3D spheroid cultures generated from the BT474 and BT474-RH cell lines (Figure 5). 3D spheroid cultures constitute a more physiological model than 2D cell cultures for the evaluation of novel therapeutic strategies. As observed for 2D cell cultures, the invasion capacity of Matrigel-embedded 3D cultures of BT474 and BT474-RH cells was significantly reduced after TAB-DAS-(PEI)NPs treatment (Figure 6). Next, we used the non-HER2-overexpressing cell line MDA-MB-231 to confirm that the effect was secondary to the binding of the NPs to HER2. Administration of TAB-DAS-(PEI)NPs showed MTT inhibition similar to DAS-(PEI)NPs at two different doses, 50 nM and 100 nM, after 72 h (Figure 7a). These results demonstrate that conjugation with TAB facilitates the uptake of the nanoparticles by targeting cells that overexpress HER2, while having similar effects in cells that do not express HER2.
The physical stability of TAB-DAS-(PEI)NPs was studied in PBS at different time points. The values of RH and PdI of the TAB-DAS-(PEI)NPs were measured by DLS (Figure 7b). The negligible increase in either particle size or PdI during a 7-day experiment suggests high stability against aggregation. The antiproliferative activity of TAB-DAS-(PEI)NPs on BT474 cells was measured by MTT assays at different times. The TAB-DAS-(PEI)NPs remained active after 3 months of preparation and storage as an NP suspension at 4 °C (Figure 7c). It is important to note for further clinical development that lyophilization of TAB-DAS-(PEI)NPs decreased the cytotoxic activity of the formulation (Figure 7d).
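IC50 values such as those quoted above are typically obtained by fitting a sigmoidal dose-response model to the MTT viability data. The snippet below is a minimal sketch of such a fit with SciPy; the four-parameter logistic form, the doses and the viability values are illustrative assumptions, not the reported measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(dose, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve (viability decreases with dose)."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

# Hypothetical dose-response data: DAS concentration (nM) vs. % viability at 72 h.
dose = np.array([10, 25, 50, 100, 200, 400], dtype=float)
viability = np.array([95, 85, 62, 48, 30, 22], dtype=float)

p0 = [20.0, 100.0, 80.0, 1.0]  # initial guesses: bottom, top, IC50, Hill slope
params, _ = curve_fit(four_param_logistic, dose, viability, p0=p0, maxfev=10000)

print(f"Estimated IC50 = {params[2]:.0f} nM (Hill slope {params[3]:.2f})")
```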
Figure 6. Invasion capacity of Matrigel-embedded 3D cultures of BT474 and BT474-RH cells is reduced with TAB-DAS-NPs. Cells were grown in a semi-solid Matrigel matrix. Then, 3D cultures were exposed to the indicated doses of the drugs. Pictures were taken after 72 h (a) and sphere size was quantified (b). Scale bar = 100 µm. * p < 0.05; ** p < 0.005; *** p < 0.001.
Drug-to-antibody ratio (DAR) can be an important factor influencing the effectiveness of antibody-drug conjugated nanoparticles (ACNPs) [29]. The DAR must be homogeneous throughout the NPs, with an optimal balance between cytotoxicity and pharmacokinetic profile. Figure 8 shows MTT assays of TAB-DAS-(PEI)NPs with different TAB cargos (0.8 nM to 3.2 nM). No significant differences in the cytotoxicity towards BT474 cells were observed between the different cargos at several concentrations of TAB-DAS-(PEI)NPs.
DAS-Loaded TAB-Conjugated NPs Increment Cell-Cycle Arrest and Cell Death More in TAB-Resistant Cells To explore whether their mechanism of action was different from that of free DAS, the two HER2+ cell lines, BT474 and BT474-RH, were treated with TAB-DAS-(PEI)NPs and stained with propidium iodide/RNase solution after 24 h of treatment. DAS blocked progression through the G1/G0-phase boundary. Administration of TAB-DAS-(PEI)NPs showed a slight increase in G1 compared with the free drug, which confirmed that the NPs mediate their effect in the same manner as DAS in its free formulation (Figure 9a). On the other hand, Figure 9b shows enhanced apoptosis in resistant cells treated with TAB-DAS-(PEI)NPs in comparison with free DAS and free TAB, which suggests that the resistant cells rely more on this kinase than the naïve ones, a finding in line with previous reports [12].
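The cell-cycle and apoptosis read-outs described above are expressed as increments relative to the untreated control. A minimal sketch of that bookkeeping is shown below; the flow-cytometry percentages are invented for illustration and are not the reported data.

```python
# Hypothetical flow-cytometry read-outs (% of gated cells) - not the reported data.
g0_g1_percent = {"control": 55.0, "DAS": 68.0, "TAB-DAS-(PEI)NPs": 72.0}
dead_percent = {"control": 6.0, "DAS": 18.0, "TAB-DAS-(PEI)NPs": 27.0}

def increment_vs_control(values, control_key="control"):
    """Return each condition's read-out minus the untreated control."""
    baseline = values[control_key]
    return {k: v - baseline for k, v in values.items() if k != control_key}

print("G0/G1 increment:", increment_vs_control(g0_g1_percent))
print("Cell-death increment:", increment_vs_control(dead_percent))
```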
Discussion Breast cancer remains one of the most common malignancies worldwide, and the HER2-positive breast cancer subtype constitutes 25% of this population. Despite the development and incorporation of new treatments into our daily clinical armamentarium, HER2-positive metastatic breast cancer remains an incurable condition. In this context, the discovery, design and optimization of novel and improved therapeutic strategies is a main objective. Antibody-drug conjugated nanoparticles (ACNPs) represent a relatively new approach that builds on the success and potential of antibody conjugation and nanotechnology [19,20]. In comparison with antibody-drug conjugates (ADCs), ACNPs can deliver the drug in a controlled manner, preserving its chemical structure, avoiding unpredicted metabolization, and reducing toxicity. The combination of chemotherapies in NPs offers the opportunity to overcome pharmacokinetic differences between drug agents and to ensure their delivery at the disease site in the required proportions. The diversity of drug agents that can be incorporated into ACNPs offers further development opportunities beyond those afforded by standard ADC technologies. In this context, the main objective of this work was to develop ACNPs for the treatment of breast cancer. DAS was encapsulated in trastuzumab-vectorized NPs with the objective of improving its solubility and avoiding its rapid metabolism. The DAS nanocarriers obtained for this purpose were characterized by size, PdI and surface charge. The average NP size was approximately 100 nm, in the same range as, or even smaller than, the values reported for DAS encapsulation by other authors in albumin nanoparticles and magnetic micelles [15,17]. A %LE similar to that of DAS encapsulation in albumin nanoparticles was obtained, with slightly smaller average size and Z-potential values [15]. Considerably higher LE values were obtained in comparison with the polymeric micelles reported for DAS encapsulation [14]. Once the formulation was optimized, reaching an EE value close to 90%, TAB was attached to the surface after PEI coating and covalent binding by EDC/NHS chemistry [26]. PEI coating for antibody conjugation was chosen for this first approach due to its easier formulation and low cost. In any case, the ACNPs showed a Z-potential sufficient to guarantee adequate stability for further in vitro studies. In vitro release studies of the ACNPs showed a sustained release of DAS over 72 h with a negligible burst release compared with non-targeted DAS-loaded NPs. The DAS release from the polymeric NPs was comparable to that reported for polymeric micelles and metal nanoparticles [14][15][16][17]. After PEI coating and subsequent antibody conjugation, the nanocarriers achieved a more sustained DAS release over time. ACNPs induced cytotoxicity specifically in cell lines overexpressing HER2, such as BT474, showing more activity than DAS alone.
In the TAB-resistant cell line, BT474-RH, TAB-DAS-(PEI)NPs showed more activity than the single agents alone, demonstrating the vulnerability that SRC inhibition constitutes for this cell population. Non-loaded NPs did not show activity, confirming the safety profile of the nanocarriers. Non-targeted nanovehicles were also assessed on cells lacking HER2 expression, to confirm that the specificity of the effect depended on the binding of TAB to the receptor. As a result, it was shown that the ACNPs were much more efficient than free DAS and non-targeted DAS-loaded nanovehicles, whereas ACNPs reduced the viability of non-HER2-overexpressing cell lines in the same way as non-targeted DAS nanocarriers, which demonstrates the specificity of the strategy. Cellular mechanistic studies of the DAS-loaded ACNPs confirmed the induction of apoptosis in HER2-overexpressing breast cancer cells, as well as cell-cycle arrest in the G1/G0 phase. Of note, the effect was similar in both BT474 and BT474-RH cells, although the activity was more pronounced in the latter, suggesting that the mechanism of action was similar but that the resistant clones depended more on SRC inhibition. We are aware that an in-depth evaluation of the binding and internalization process would provide relevant information; efficient internalization of ACNPs has a clear role in their efficacy. However, we consider this kind of study to be beyond the aim of this work, which mainly focused on the generation and characterization of the NPs. Finally, stability studies regarding storage and the drug-to-antibody ratio of the ACNPs were performed to assess their potential use in further in vitro and in vivo studies. The activity of the ACNPs after storage in suspension at 4 °C was maintained over three months. No significant changes were observed in RH and PdI, indicative of no significant aggregation during storage. The activity of the ACNPs was reduced after lyophilisation, whereas the TAB cargo on the NPs did not significantly influence the activity of the ACNPs.
Conclusions In this study we demonstrate that the encapsulation of DAS into TAB-targeted biodegradable polymeric NPs results in in vitro efficacy, particularly in HER2-overexpressing cells, while maintaining the same mechanism of action as DAS given alone. In addition, the generated NPs are stable over time. These results open the door to further assessment of efficacy and safety in in vivo studies that could be the basis for future clinical development.
The 1+1+2 formalism for Scalar-Tensor gravity We use the 1+1+2 covariant approach to clarify a number of aspects of spherically symmetric solutions of non-minimally coupled scalar tensor theories. Particular attention is focused on the extension of Birkhoff's theorem and the nature of quasi-local horizons in this context. I. INTRODUCTION Scalar-Tensor (ST) theories of gravity are among the most studied extensions of General Relativity (GR). Initially introduced as a completion of GR by Brans and Dicke [1], ST theories have found a wide range of uses, which include providing models of the inflation mechanism [2] and modelling the late-time acceleration of the Universe [3]. These theories arise naturally in the context of quantum field theory on curved spacetime [4] and have also played a role in the debate about the no-hair conjecture of black holes and its various realisations [5]. The name scalar-tensor theory is often used to describe theories in which both minimally and non-minimally coupled scalar fields appear. The first class of theories are the most studied, due in part to their simplicity and connection to the original models of inflation. The second class has instead gained attention with the discovery of their connection with higher dimensional theories such as Kaluza-Klein and (Super-)String theories) [6]. For both these classes, we have now a relatively clear picture of many aspects of the cosmologies of these models [7,8] (including some unexpected phenomena [9]) and with the wealth of cosmological data currently available, there are many ways to compare their predictions with more standard models of the Universe. It therefore comes as a surprise that in comparison with cosmology, the investigation of spherically symmetric solutions of both types of theories has been more limited. This is particularly true for theories which involve a non-trivial self-interacting potential. Indeed, many of the most important models of ST theories are characterised by a non-trivial self-interaction of the scalar field potential and in situations where there is a non-minimal coupling to gravity, very few results are known other than the (non-independent) four types of Brans solutions [10]. Determining detailed information about spherically symmetric solutions for these theories could open the way to a new generation of tests based on astrophysical data rather than those from cosmology. This paper is an attempt to move in this direction. We aim to obtain a deeper insight into the properties of spherically symmetric solutions of ST theories, focusing mainly on the non-minimally coupled theories (many of our results turn out to also be valid in the minimally coupled case). The class of non-minimally coupled theories we consider are those characterised by a standard kinetic term for the scalar field. However, the actions we consider can be recast in the general form given by Bergmann, Nordtveldt and others [11] by a simple reparameterisation of the scalar field. Key to our analysis is the powerful 1+1+2 covariant approach, developed by Clarkson and Barrett [12]. This formalism is a natural extension to the 1+3 approach originally developed by Ehlers and Ellis [14] and it is optimised for problems which have spherical symmetry. The 1+1+2 formalism has been applied to the study of linear perturbations of a Schwarzschild spacetime [12] and to the generation of electromagnetic radiation by gravitational waves interacting with a strong magnetic field around a vibrating Schwarzschild black hole [15]. 
In cosmology it has been used to investigate perturbations of several Locally Rotationally Symmetric spacetimes (LRS) [13]. Using this approach we clarify some aspects of spherically symmetric solutions of non-minimally coupled ST theories, in particular the applicability of Birkhoff's theorem. We also present some new exact solutions. The paper is organised in the following way. In section II we give the general equations of ST gravity. In Section III we briefly review the covariant approach and deduce the 1+1+2 framework from the 1+3 one. We then drive the 1+1+2 equations in general and relate the 1+1+2 quantities to the metric components. In section IV we give the 1+1+2 equations for a general non-minimally coupled ST theory of gravity and we present some important results which are needed if one is to use the 1+1+2 formalism to find exact solutions. In section V we discuss the existence and the meaning of Birkhoff's theorem in this framework and finally section VII is dedicated to our conclusions. Unless otherwise specified, natural units ( = c = k B = 8πG = 1) will be used throughout this paper and Latin indices run from 0 to 3. The symbol ∇ represents the usual covariant derivative and ∂ corresponds to partial differentiation. We use the −, +, +, + signature and the Riemann tensor is defined by where the Γ a bd are the Christoffel symbols (i.e. symmetric in the lower indices), defined by The Ricci tensor is obtained by contracting the first and the third indices Symmetrisation and the anti-symmetrisation over the indexes of a tensor are defined as Finally the Hilbert-Einstein action in the presence of matter is given by II. GENERAL EQUATIONS FOR SCALAR TENSOR GRAVITY The most general action for ST theories of gravity is given by (conventions as in Wald [16]): where V (ψ) is a generic potential expressing the self-interaction of the scalar field and L m represents the matter contribution. Varying the action with respect to the metric gives the gravitational field equations: and the variation with respect to the field ψ gives the curved spacetime version of the Klein-Gordon equation where the prime indicates a derivative with respect to ψ. Both these equations reduce to the standard equations for GR and a minimally-coupled scalar field when F (ψ) = 1. Equation (8) can be recast as where T ψ ab has the form Provided that ψ ,a = 0, equation (9) also follows from the conservation equations The reformulation above will be very important for our purposes. In fact, the form of (10) allows us to treat scalar tensor gravity as standard Einstein gravity in the presence of two effective fluids and permits a straightforward generalisation of the 1+1+2 formalism to these equations. III. 1+1+2 COVARIANT APPROACH In the following we give a brief review of the 1+1+2 covariant approach [12]. We will proceed first with the standard 1+3 decomposition and then perform a further split of the spatial degrees of freedom relative to a preferred spatial direction. This allows us to derive a set of variables better suited to systems in which a spatial direction is important (i.e., the radial one in the case of spherical symmetry). A. Kinematics In 1+3 approach we define a time-like congruence, with unit tangent vector u a (u a u a < 0 1 ). In this way, any tensor field can be projected along u a (extracting the temporal parts) or into the 3-space orthogonal to u a using the projection tensor h a b = g a b + u a u b . 
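For reference, the curvature and projection definitions described in words above can be written out explicitly. The LaTeX block below is a reconstruction based on the textual description and common conventions for the (-,+,+,+) signature; the authors' exact sign and index placement may differ.

```latex
% Standard definitions consistent with the textual description above
% (reconstructed; sign and index conventions are assumptions).
\begin{align}
  R^{a}{}_{bcd} &= \partial_{c}\Gamma^{a}{}_{bd} - \partial_{d}\Gamma^{a}{}_{bc}
                 + \Gamma^{a}{}_{ce}\Gamma^{e}{}_{bd} - \Gamma^{a}{}_{de}\Gamma^{e}{}_{bc},\\
  \Gamma^{a}{}_{bd} &= \tfrac{1}{2}\, g^{ae}\left(\partial_{b} g_{ed}
                 + \partial_{d} g_{eb} - \partial_{e} g_{bd}\right),\\
  R_{ab} &= R^{c}{}_{acb},\\
  T_{(ab)} &= \tfrac{1}{2}\left(T_{ab} + T_{ba}\right),\qquad
  T_{[ab]} = \tfrac{1}{2}\left(T_{ab} - T_{ba}\right),\\
  h^{a}{}_{b} &= g^{a}{}_{b} + u^{a}u_{b},\qquad u_{a}u^{a} = -1 .
\end{align}
```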
In the 1+1+2 approach, we further split this 3-space by introducing the spatial unit vector e a orthogonal to u a , so that e a u a = 0 , e a e a = 1 . Then the tensor , N a a = 2 (14) projects vectors into the 2-surfaces orthogonal to e a and u a . It is obvious that e a N ab = 0 = u a N ab . Using N ab , any 3-vector λ a = h a b λ b can be irreducibly split into a component along e a and a sheet component Λ a , orthogonal to e a , i.e., A similar decomposition can be done for 3-tensors λ ab = h c a h d b Φ cd , which can be split into scalar (along e a ), 2vector and 2-tensor parts as follows: The Levi-Civita 2-tensor is defined as where ε abc is the 3-space permutation tensor, which is the volume element of the 3-space and η abcd is the spacetime 4-volume element. ε ab plays the usual role of 2 volume element for the 2-surfaces. With these definitions it follows that any 1+3 quantity can be locally split into three types of objects: scalars, 2vectors and 2-tensors defined on the 2-surfaces orthogonal to e a . B. Derivatives and the kinematical variables Using u a and h ab we can obtain two derivative operators: one defined along the time-like congruence: and the projected derivative D: Applying the covariant derivative to u a we can obtain the key 1+3 quantities: whereu a is the acceleration, Θ is the expansion parameter, σ ab the shear and ω ab is the vorticity. In the same way as before we can now split the D operator using e a and N ab : The covariant derivative of e a can be split in the direction orthogonal to u a into it's irreducible parts to give: For an observer that chooses e a as a special direction in spacetime, φ = δ a e a represents the expansion of the sheet, λ ab = δ {a e b} is the shear of e a (i.e., the distortion of the sheet), ξ = 1 2 ε ab δ a e b is a representation of the "twisting" or rotation of the sheet and a a =ê a its acceleration. Using equations (15) and (16) one can split the 1+3 kinematical variables and Weyl tensors as follows: where E ab and H ab are the electric and magnetic part of the Weyl tensor respectively. It follows that the key variables of the 1+1+2 formalism are: Similarly, we may split the general energy momentum tensor in (10) as: where µ is the energy density and p is the pressure. The anisotropic fluid variables q a and π ab can be further split as: C. 1+1+2 equations for LRS-II Spacetimes Because of its structure, the 1+1+2 formalism is ideally suited for a covariant description of all the LRS spacetimes. These spacetimes possess a continuous isotropy group at each point [17] and exhibit locally a unique, preferred, covariantly defined spatial direction. A subclass of the LRS spacetimes, called LRS-II, contain all the LRS spacetimes that are rotation free. As a consequence, the variables Ω, ξ and H are identically zero in LRS-II spacetimes and fully characterise the kinematics. The propagation and constraint equations for these variables are obtained by the Ricci and (twice contracted) Bianchi identities and can be found in [12]. Let us now turn to the case of spherically symmetric static spacetimes which belong naturally to LRS class II. The condition of staticity implies that the dot derivatives of all the quantities vanish. Therefore the expansion vanishes, (Θ = 0), and this implies Σ = 0. The same holds for the heat flux Q. 
Hence the set of 1+1+2 equations which describe the spacetime become: Eliminating E and using the constraints (35) and (40) the system above can be reduced tô One could, of course, decide to eliminate other variables. In particular one might try to retain (39), due to its simplicity. However, as we will see in section IV, this choice has to be taken with great care, especially when attempting to find exact solutions. Once these equations have been solved it is useful to connect the 1+1+2 quantities to the metric coefficients and hence reconstruct the metric. Consider now the general spherically symmetric static metric: Using the definition of covariant derivative one obtains: and φ = δ a e a = 1 Note that we have two equations for three metric components, so at first sight it might seem that given a solution of the 1+1+2 potentials A and φ, there is no way to determine the metric. One needs to remember, however, that the form of the coefficient B depends on the choice of the coordinated ρ, so that the factor √ B can be reabsorbed into the definition of ρ and effectively the metric (46) has only two unknown functions. Therefore a p coordinate directly associated with the "hat" derivative, would havê X = X ,p which implies B(p) = 1. The formulae above reveal an interesting connection between the 1+1+2 formalism and the Takeno variables [18]. In fact one can see that many of the theorems proved by Takeno have a correspondence in the 1+1+2 formalism. IV. SPHERICALLY SYMMETRIC STATIC SPACE-TIMES IN SCALAR TENSOR GRAVITY The simplest way to write the 1+1+2 equations for the case of ST gravity is to use the recasting of the field equations that we gave in Section II. In particular T ψ ab can be decomposed as in (31) with In this way it is possible to write (34)(35)(36)(37)(38) as where we have assumed F = 0. The above equations characterise the static and spherically symmetric solutions of a general ST theory of gravity. Note that in spite of the fact that (55) contains second derivatives of the scalar field ψ, it does not correspond exactly to the Klein-Gordon equation. In fact, equation (54) can be shown to correspond to a combination of the Klein Gordon equation and the trace of the general field equations. Let us now show how (52-56) can be used to obtain exact solutions. The first thing to do is to choose a suitable radial coordinate. A clever choice is to proceed in a way that equation (39) has a trivial solution like in the case of the coordinate r for whichX = − 1 2 rφ∂ r X in [12]. The Gauss curvature is therefore just K = r −2 . However, one must be careful in this respect to check that (55) is fulfilled, because the choice above decouples K from φ. This can be clearly seen by considering the theory with the solution of (52-54), which corresponds to the following solution for the metric Although the above solution satisfies (52-54) and (39), it does not satisfy (55) and is not a solution of the field equations (8). This happens because the coordinate change removes the connection between K and φ so that (39) does not guarantee that (55) is satisfied. On the other hand, the theory does satisfy the system (52-55) for this convenient choice of radial coordinate and it is easy to find the exact solu- which, in terms of the metric coefficients is given by: The above solution satisfies all the Einstein equations upon direct substitution. Since this solution does not reduce to Minkowski spacetime in any limit of the parameters, it is clearly not asymptotically flat. 
The associated Newtonian potential can be calculated in the usual way and contains a constant term of the same order of the gravitational constant 2 . V. THE EXISTENCE OF A SCHWARZSCHILD SOLUTION AND BIRKHOFF'S THEOREM An important question one can address using the system (52-56) is whether or not the theory (6) admits in general a Schwarzschild solution and if the Birkhoff theorem is at all satisfied. In what follows we discuss this problem in detail. The Schwarzschild solution is obtained when φ and A satisfyφ 2 On using a conformal transformationg ab = Ω 2 g ab with Ω 2 = F (ψ), the theory (61) is mapped into General Relativity with a minimally coupled scalar field with the potential An exact solution for a similar theory has been found by Chan t al. in [19] and this means that the two solutions are related. Incidentally, this solution is also related to the ones found in [20,21] and that have been found in other contexts. In addition, since the Ricci scalar is identically zero, we find that the standard Klein-Gordon equation holdŝ Substituting the above equations in (52-54) and assuming F = 0, we obtain: It is easy to see that this system has a (double) solution which is clearly inconsistent. This means that the class of scalar tensor theories of gravity discussed in section (6) have no Schwarzschild solution if Ψ is not constant. This result clearly has consequences for Birkhoff's theorem. There are a number of formulations of this theorem in the context of General Relativity [22][23][24][25][26][27][28][29]. We will adopt the one of Schutz [30] which can be stated as follows: The Schwarzschild solution is the only static, spherically symmetric, asymptotically flat solution of General Relativity in vacuum. There are other more mathematically precise definitions of the Birkhoff's theorem, but for our purposes the one given above will be sufficient. Since there is no Schwarzschild solution associated to the theory (6), it is clear that the classical formulation of Birkhoff's theorem given above does not apply. However, one could define a generalised version, for example: Scalar tensor gravity must possess a unique static, spherically symmetric and asymptotically flat solution in vacuum. Let us adopt this as an Extension of Birkhoff's Theorem (EBT) and let us see what impact it has on the system (52-54). • Staticity and Spherical symmetry Staticity and spherical symmetry is guaranteed by the construction of the 1+1+2 formalism so that all the solutions of the (52-54) have this property by definition. • Vacuum condition In ST gravity there is not a clear definition of the concept of a vacuum: one could argue that the scalar field ψ is a form of matter or a scalar part of the gravitational field [31]. In the first case one cannot have a proper vacuum solution, however in the classical treatment there is no reason to consider φ to be a matter field. In [22], Birkhoff's theorem was generalised to the case of a static matter source. If we use this result, as far as the scalar field is static, we can always consider this hypothesis to be satisfied. • Uniqueness of the solution Proving the uniqueness of the solution of the system (52-54) is however more tricky: one has to prove that in the explicit form the L.H.S. of this system is Lifschitz continuous so that the Picard-Lindelöf theorem is satisfied. This implies that the functions F and V need to be Lifschitz continuous and A and φ need to be continuos in the variable associated with the hat derivative. 
In addition, the condition needs to be satisfied. This last condition is evident only if the (52-54) is expressed in standard form. Note that this condition also implies that F should not have any zeros i.e. that the gravitational interaction cannot change sign. • Asymptotic flatness The most difficult hypothesis to prove is that of asymptotic flatness. The general proof of this properties for a given metric requires very refined theoretical tools like the Penrose's conformal compactification [16]. The covariant approaches offers an interesting alternative, although a somewhat less general, approach to this problem. In fact decomposing the Riemann tensor in terms the 1+3 variables one obtains [14] R ab cd = R ab P cd + R ab which in terms of the 1+1+2 variables and the static and spherically symmetric case reduces to If a metric is asymptotically flat, there will be a limit in which u a , e a and N ab are constant tensors and the Riemann tensor is identically zero. This implies, by definition, that in this limit A has to be zero and φ has to be zero and that the above relations become equations for µ, p, Π and E. Using (36) it is easy to see that which means Therefore, the behaviour of E is determined by µ, p and Π: if these last quantities tend individually to zero then Riemann tensor will also tend to zero. Using (49-51), one obtains that in the case of (6) this is realised if which is compatible with what is found in [32]. It is interesting to note that in this limit (41) and (42) reduce to the equations that give rise to the Schwarzschild solution. A. The relation with f (R)-gravity and conformal transformations. The form of the Birkhoff theorem given above (EBT) is clearly unsatisfactory. It implies that no general conclusion about this issue can be made for ST gravity and consequently forces us to check whether this theorem holds on a case by case basis. To alleviate this situation, one could think of using what has have learned in the case of f (R)-gravity (see [33]). In this paper it was found that the validity the original Birkhoff's theorem is guaranteed if Since we know that f (R)-gravity can be mapped into a Brans-Dicke-like theory with a non-trivial potential we can ask if the results of [33] lead to any insight on the validity of the EBT for ST gravity. Unfortunately the answer is negative. In fact, since the Schwarzschild solution is characterised by R = 0, (86-87) implies immediately that the scalar field is constant. In other words, the conditions found in [33] effectively correspond to GR via (86-87). B. Conformal transformations. Another interesting way to attack this problem is to look at it from the point of view of conformal transformations. It is well known that under a conformal transformation ST theories of gravity of the type (6) are mapped into GR minimally coupled to a scalar field [34]. Can we then use conformal transformations to discover EBT complying theories? Let us consider the conformal transformatioñ with Ω 2 = F (ψ). It is well known that under this transformation, equation (6) in vacuum can be recast as and As mentioned earlier, it has been recently shown that if the scalar field is static, the (Jebsen-)Birkhoff theorem holds for these theories [22]. It would be interesting to determine that if we have a solution of (89) satisfying the EBT, it is possible to obtain a solution of (6) satisfying the EBT. In other words, we need to check that the EBT holds under a conformal transformations. 
Let us see now examine how the conditions of the EBT transform under (88). • Staticity and Spherical Symmetry It is obvious that a time independent transformation will map a static metric into a static metric. In addition, it it is easy to see that under the conformal transformation above, the 1+1+2 vector quantities are mapped to zero. For example: If the quantities on the RHS are subject to spherical symmetry, it is clear thatà a = 0. The same reasoning applies to all the other quantities. • Vacuum condition This aspect of the conformal transformation can be confusing. With respect to the definition of a vacuum given above, one effectively passes from vacuum theory to a theory in which matter is present. However, as we have seen, this additional form of matter is static and we can use the results of [22] to conclude that the change of nature of the scalar field does not affect the transformation properties of the EBT. • Uniqueness of the solution. In order to prove that uniqueness is preserved, one has to ensure that the conditions of the Picard-Lindelöf theorem are still satisfied in the transformed system. The Lifschitz continuity of the functions F and V is guaranteed by the fact that they are not modified by the conformal transformation. However, in the field re-definition part of the transformation, the scalar field ψ is changed by (90), so one has to prove that the continuity is preserved under re-definition of the field. However, this operation involves an integral of a continuous function (F is assumed to be always different from zero) and therefore it is continuous. For A and φ, under conformal transformation one hasà so that if Ω is continuous they preserve their continuity. Remarkably, when we substitute (94) into (52-54), (73) is preserved and no other conditions need to be added. Consequently, for a regular conformal transformation, the uniqueness of the solution is not modified. • Asymptotic flatness It is easy to see that the relations (75-78) are invariant under conformal transformations. Therefore, the conditions for asymptotic flatness found from these equations remain the same. However, the thermodynamic quantities are rescaled via the conformal factor: therefore we require that if a tilded thermodynamic quantity goes to zero this behaviour is guaranteed also for the un-tilded quantities. It is clear that this can happen only if the conformal factor asymptotically approaches a constant. In terms of the scalar field, these conditions amount to Ψ → Ψ 0 = const. and W (Ψ) = F (Ψ(ψ)) −1 V (Ψ(ψ)) → 0. The first condition is satisfied only if (90) converges to a constant when F and its first derivative do so and this gives us a constraint on the ST theories that satisfy the EBT. We can summarise what we have found above as follows: Given a conformal transformation in which the conformal factor is static, continuous and asymptotically approaches a constant, one can use a solution of GR with a minimally coupled scalar field to obtain a ST theory with an accompanying solution satisfying the EBT. Let us verify this result explicitly 3 . Consider the minimally coupled theory 4 The spherically symmetric solutions for these theory are well known [35][36][37][38]. A solution which is also asymptotically flat is given by ds 2 = −Ã(r)dτ 2 +B(r)dρ 2 +C(r)(dθ 2 +sin 2 θdφ 2 )], (97) with the scalar field and 0 < γ < 1. Using the results above we can generate a set of theories with accompanying solutions satisfying the EBT. For example, choosing or with α > 0. 
We obtain: and respectively. In this way, the coefficients of the accompanying solution satisfying the EBT are with the scalar field solution A second possibility is with α > 1. We obtain: In this way, the coefficients of the accompanying solution satisfying the EBT are with the scalar field solution Finally a third possibility would be with α > 1. We obtain: In this way, the coefficients of the accompanying solution satisfying the EBT are with the scalar field solution It is clear from the above examples that this process can be repeated with any exact solution in the Einstein frame. VI. CONCLUSIONS. In this paper we have used the 1+1+2 formalism to analyse the spherically symmetric metrics in the context of non-minimally coupled Scalar-Tensor (ST) gravity. As in the case of the 1+3 covariant approach, our method can be easily applied if one treats the non-Einsteinian parts of the gravitational interaction as an effective fluid. The key 1+1+2 equations form a closed system of three differential equation plus two constraints, which can be simplified considerably by carefully choosing the radial coordinate. Note however, that this choice can result in a decoupling of the key equations, leading to solutions of the 1+1+2 equations which are not solutions of the full Einstein equations. The main result of this paper relates to the existence of the Schwarzschild solution in ST gravity and on how this impacts on the original formulation of Birkhoff's theorem. Using the 1+1+2 equations it is easy to show that no ST theory admits a Schwarzschild solution unless the scalar field is trivial. It follows that one cannot define a Birkhoff theorem in the usual way. Instead we proposed an extension to this theorem (EBT) in which the role of the Schwarzschild solution is taken by the general static and spherically symmetric solution for these theories. Using the conformal relation between GR and ST gravity, we demonstrated that the EBT is preserved under a conformal transformation. In this way, the knowledge of a unique static and spherically symmetric and asymptotically flat solution in GR minimally coupled to a scalar field leads to the derivation of a number of theories for which the EBT is satisfied. The investigation of the detailed properties of these solutions and how they relate to Astrophysics will be the subject of future work.
On Understanding and Mitigating the Dimensional Collapse of Graph Contrastive Learning: a Non-Maximum Removal Approach Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations. GCL can generate graph-level embeddings by maximizing the Mutual Information (MI) between different augmented views of the same graph (positive pairs). However, the GCL is limited by dimensional collapse, i.e., embedding vectors only occupy a low-dimensional subspace. In this paper, we show that the smoothing effect of the graph pooling and the implicit regularization of the graph convolution are two causes of the dimensional collapse in GCL. To mitigate the above issue, we propose a non-maximum removal graph contrastive learning approach (nmrGCL), which removes"prominent'' dimensions (i.e., contribute most in similarity measurement) for positive pair in the pre-text task. Comprehensive experiments on various benchmark datasets are conducted to demonstrate the effectiveness of nmrGCL, and the results show that our model outperforms the state-of-the-art methods. Source code will be made publicly available. Introduction Graph Representation Learning (GRL) has become increasingly popular for the ubiquitous graphstructured data across domains, including traffic [45], social network [9], and knowledge graph [31]. Graph Neural Networks (GNNs) [22,39] are utilized as backbones of GRL to learn low-dimensional embeddings of nodes or graphs while maintaining structure and attribute information. Most GNN models are trained in the (semi-)supervised learning setting requiring abundant manually-annotated labels. In case of insufficient data labels, recent Contrastive Learning (CL) based on Information Maximization (InfoMax) principle [24] has shown promising performance for self-supervised learning with success across fields including computer vision [6,16] and natural language processing [41,11]. These CL methods maximize the Mutual Information (MI) between different augmented views of the same instance while minimizing the MI between those of the different instances. Inspired by the above CL models, Deep Graph InfoMax (DGI) [38] applies the InfoMax principle to graph representation learning, which relies on maximizing the mutual information between one graph's patch-level and global-level representations. Following SimCLR [6], a series of graph contrastive learning methods [15,44] enforce the embedding of positive pair (i.e., augmented views of the same graph) to be close and the embedding of negative pair (i.e., augmented views of different graphs) to be distant in the Euclidean space. GCC [30] referring to MoCo [16] contrasts graph-level embedding with momentum encoder and maintain the queue of data samples. However, graph contrastive learning suffers from the dimensional collapse problem, i.e., the space spanned by embedding vectors is only a subspace of the entire space as shown in Fig. 1(b). The concentration of information on part of dimensions weakens the distinguishability of embeddings in downstream classification tasks. We analyze two reasons for the dimensional collapse in GCL: (1) The smoothing effect of graph pooling layer makes initially untrained embeddings of positive pairs highly similar in pattern. The alignment between positive pair embeddings lead part of dimensions to collapse. 
(2) The implicit regularization of graph convolution layers where the product of learnable weight matrices tend to be low-rank subsequently cause the rank deficiency of embedding space. In this paper, we propose a novel Non-Maximum Removal Graph Contrastive Learning (nmrGCL) approach for self-supervised graph representation learning. The key idea of nmrGCL is to learn complementary embeddings of augmented graphs, inspired by non-maximum suppression (NSM) which is widely used in visual object detection [48,27]. Specifically, for positive pairs, the embedding of the first augmented view identifies the prominent dimensions and then removes these dimensions in the embedding of the second in pre-train process. We conduct experiments on bioinformatics and social networks datasets to show the effectiveness. The contributions of this paper are as follows: • We formally point out and theoretically analyze the so-called dimensional collapse in graph contrastive learning, which has not been discussed in graph learning literature to our best knowledge. It limits the expressiveness of embeddings in downstream tasks e.g. node/graph classification. We reveal the two reasons for the dimensional collapse in GCL: i) smoothing effect of permutationinvariant graph pooling and ii) tendency to be low-rank of multi-layer graph convolution. • We propose a novel non-maximum removal graph contrastive learning approach (nmrGCL) for self-supervised graph classification tasks and extension to node classification task. Our model effectively encourages the encoder to learn complementary representation in pretext tasks using non-maximum removal operation. • Experiments on multiple datasets show that nmrGCL outperforms state-of-the-art SSL methods on graph classification tasks and achieves competitive performance on node classification tasks. Preliminary Problem definition and notations. Let G = (V, E) denote a graph, where V = {v 1 , v 2 , · · · , v N }, E ∈ V × V denote the node set and the edge set, respectively. The adjacency matrix containing the connectivity of nodes is denoted as A ∈ {0, 1} N ×N , where the entry The feature matrix is denoted as X ∈ R N ×F , where the i-th column x i ∈ R F is the F -dimensional feature vector of node v i . For self-supervised graph-level representation learning, given a set of graphs G = {G 1 , G 2 , · · · } without class information, our objective is to learn a GNN encoder g θ (X, A) ∈ R F which encodes each graph G into a F -dimensional vector z G ∈ R F . Selfsupervised node-level representation is defined similarly. These low-dimensional embeddings can be used in downstream tasks, such as node and graph classification. lv Graph Contrastive Learning. Transferred from contrastive learning in the vision field, graph contrastive learning aims at learning embeddings for nodes or graphs through maximizing the consistency between augmented views of the input graph via contrastive loss. Here briefly review the pipeline of graph contrastive learning as shown in Fig. 2: i) Graph Augmentation. Given a graph G, we define the a-th augmented view asG (a) = t a (G), where a ∈ {1, 2}, t a is selected from a group of predefined graph augmentation T . Motivated by image augmentations, various graph augmentation are proposed and categorized into two types: structure-based and feature-based. Commonly used graph augmentation methods include: 1) Node-Drop and NodeShuffling: randomly discards or shuffles certain portion of nodes with their edges and features. 
2) EdgeAdd and EdgeDrop: randomly adds or drops certain portion of edges in graphs. 3) FeatureMasking: randomly masks a portion of dimensions in node features with zero. 4) FeatureDropout: randomly masks features of some nodes. 5) Subgraph: generates a subgraph with Random Walk. The details for the used augmentation approaches are given in Appendix C. ii) Graph Encoder. A shared GNN [22,39] encoder f (·) is used to extract the low-dimensional graph or node representations for each augmented graphG (a) . Given an augmented graphG (a) with an adjacency matrix A and a high-dimensional node features matrix X, where x n = X[n, :] is the feature vector of node v n , the l-th layer updates each node's representation by message passing: where h (l) n is the representation of the node v n in the l-th layer of GNN with h (0) is the set of neighbors of node v n , AGGREGATE (l) (·) can be the sum or average operation, and COMBINE (l) (·) can be the concatenation or average operation. Then the graph-level embedding of G (a) can be obtained through the READOUT function of GNN, which is similar to the pooling in CNN: where K is the number of layers of the GNN model. In this paper, a two-layer MLP is applied on top of GNN encoder to obtain z i . iii) Embedding Contrast. Attract positive pairs and repulse negative pairs simultaneously with InfoNCE loss by identifying the positive pair from a mini-batch. For a mini-batch where τ is the temperature parameter, and the overall loss is L = a∈{1,2},i∈{1,··· ,N } L (a) i . Understanding Dimensional Collapse in Graph Contrastive Learning Self-supervised learning aims at learning embeddings for input instance by maximizing the similarity between embeddings of augmented views. These methods could fall into a collapse problem, where all input data are encoded as the same embedding. Contrastive methods address this problem by minimizing the distance between negative samples. Although graph contrastive learning which is transferred from vision field avoids complete collapse, GCL still suffers from dimensional collapse. Definition 1 (Phenomenon of Dimensional Collapse in Contrastive Learning) Given a group of graphs {G i } with their embeddings {z i ∈ R F } by certain methods, dimensional collapse is a phenomenon that the space spanned by the embeddings {z i } is a proper subspace of R F . We analyze that both permutation-invariant pooling layer and convolution layer as commonly adopted in existing massage-passing GNN encoders can cause dimensional collapse in GCL. Dimensional Collapse Caused by Pooling Frequently used graph pooling layers aggregate node embeddings to obtain the graph-level embeddings with permutation invariant function including mean or summation operation for graph classification. We first analyze that the variance provided by augmentation will be reduced by the pooling layer, which cause dimensional collapse. We consider a single layer Vanilla GCN [22] with average pooling. Suppose that each graph generates two augmented views by randomly dropping p portion of nodes with their features. Then the embedding vector for each view G (a) , a ∈ {1, 2} is: where e is a vector of |N | number of ones, A (a) = D 1/2 A (a) D 1/2 is the normalized adjacency matrix, D is the diagonal degree matrix, W is the learnable parameter matrix of GCN, and σ(·) is the non-linear activation function. In vision field, the encoder can filter the disturbance of augmentation and extract common high-level semantic information of image. 
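To make Eq. 4 concrete before turning to the graph-specific behaviour, here is a minimal numpy sketch (our own illustration, not the authors' code) of a single-layer graph convolution with mean pooling applied to two node-dropped views of a cycle graph; the random weight matrix, the Bernoulli feature distribution, and the drop rate are illustrative assumptions.

```python
import numpy as np

def normalized_adjacency(A):
    # \hat{A} = D^{-1/2} A D^{-1/2} (self-loops omitted for brevity)
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def node_drop(A, X, p, rng):
    # keep a (1 - p) fraction of nodes together with their edges and features
    keep = np.flatnonzero(rng.random(A.shape[0]) >= p)
    return A[np.ix_(keep, keep)], X[keep]

def embed(A, X, W):
    # Eq. 4 style: mean-pool( relu( \hat{A} X W ) )
    return np.maximum(normalized_adjacency(A) @ X @ W, 0.0).mean(axis=0)

rng = np.random.default_rng(0)
N, F = 20, 8
A = np.zeros((N, N))
for i in range(N):                                  # a cycle graph, as in Theorem 1
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
X = rng.integers(0, 2, size=(N, F)).astype(float)   # Bernoulli node features
W = rng.normal(size=(F, F))                          # random, untrained weights

z1 = embed(*node_drop(A, X, 0.3, rng), W)
z2 = embed(*node_drop(A, X, 0.3, rng), W)
print("cosine similarity of the positive pair:",
      round(float(z1 @ z2 / (np.linalg.norm(z1) * np.linalg.norm(z2))), 3))
```

Even with this untrained, random encoder, the two pooled embeddings typically come out nearly parallel, which is precisely the graph-side effect analyzed next.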
But in the graph field, since the graph convolution and average pooling can smooth both node and graph embeddings, initial embeddings of positive pairs extracted by arbitrary encoders are close to each other. We begin the analysis by considering a cycle graph and neglecting the activation function for simplicity. Also assume that the element of one-hot encoding node features follows independent identical Bernoulli distribution. Lemma 1 Given a graph G with the set of nodes |V | = N , two augmented views are generated by randomly discarding p-proportion nodes. The expected number of nodes shared by the two augmented graph is Given a cycle graph G = (V, E) and node drop portion p, embeddings of two augmented views indexed by 1 and 2 are obtained by Eq. 4 with a random encoder. Then we have the lower bound for expectation of cosine similarity: where n is the ratio of common and unique nodes in two augmented views with expectation E[n] = (1 − p)/p discussed in Lemma 1. The proof is given in Appendix B.3. It shows that, even with a random encoder, the difference between embeddings of positive pairs are small. Similar results can be obtained when using other augmentation methods. We discuss Theorem 1 with a toy example. Consider a cycle graph G with 5 nodes connected sequentially. The feature vector of each node is {x 1 , x 2 , x 3 , x 4 , x 5 }. Two augmented views are generated by dropping the 4-th and 5-th node respectively. The graph convolution layer aggregates the feature of neighbors and they becomes: The graph-level embeddings of augmented views obtained with average pooling layer are: The difference between embeddings is mainly attributed to the different retained nodes, which makes the two embeddings highly similar. This situation also occurs when using other augmentation methods and considering activation function. We empirically demonstrate this property through a specific experiment. We randomly select a graph from COLLAB dataset and generate two augmented views by dropping 30% nodes and masking 30% dimensions of node features, respectively. Then generate two 64-dimensional embeddings with a randomly initialized 3-layer GCN with ReLU activation. Shown as the heat map in Fig. 3, the patterns of the two embeddings strongly aligns. Additionally, we observe that parts of elements are much larger than others in both embeddings and these larger elements share the same positions. In this paper, we introduce a threshold δ according which, we define the position of elements larger than δ as "prominent" dimensions. Definition 2 Given a graph G with embedding z ∈ R F and a threshold δ, the "prominant" dimensions are defined as: We then analyze how "prominent" dimensions cause the dimensional collapse from the perspective of the gradient of InfoNCE with derivation in Appendix B.1. The gradient of InfoNCE w.r.t the embeddings of positive pairs are: i and z (2) i grow linearly with the value of z (2) i and z (1) i in each dimension, which still holds for L (2) i due to the symmetry. Also because two embeddings share a number of common prominent dimensions, minimization of InfoNCE loss mainly lies in increasing prominent dimensions which will be more prominent cumulatively. Intuitively, contrastive learning aiming at maximizing the similarity between positive pairs leads to a shortcut that only a few dimensions to be relatively much larger. These prominent dimensions control the representation of graphs and suppress the expressiveness of other dimensions. 
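The observation above is easy to probe numerically. The following sketch (random embeddings standing in for the COLLAB example, and numpy in place of the authors' setup) reads off the "prominent" dimensions of Definition 2 from two positive-pair embeddings and shows how heavily the two index sets overlap.

```python
import numpy as np

def min_max_scale(z):
    return (z - z.min()) / (z.max() - z.min() + 1e-12)

def prominent_dims(z, delta=0.7):
    # Definition 2: dimensions whose (scaled) value exceeds the threshold delta
    return set(np.flatnonzero(min_max_scale(z) > delta))

rng = np.random.default_rng(1)
common = rng.normal(size=64)                # pattern shared by the positive pair
z1 = common + 0.05 * rng.normal(size=64)    # embedding of the first view
z2 = common + 0.05 * rng.normal(size=64)    # embedding of the second view

p1, p2 = prominent_dims(z1), prominent_dims(z2)
print("prominent dims, view 1:", sorted(p1))
print("prominent dims, view 2:", sorted(p2))
print("shared prominent dims :", sorted(p1 & p2))
```

On typical runs the two sets of prominent dimensions coincide almost entirely, matching the heat-map observation and the gradient argument above.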
Consequently, the downstream classification performance mainly depends on prominent dimensions and neglects the leverage of other dimensions. Dimensional Collapse Caused by Graph Convolution Layer We then analyze the other cause of dimensional collapse in terms of an implicit regularization of the graph convolution layer, where the learnable weight matrices have tendency to be low-rank. The implicit regularization of neural network has been studied in [4,13,20]. We consider a K-layers Vanilla GCN as the encoder for graph contrastive learning. For simplicity, we neglect the pooling layer and the activation function. Node embeddings are concatenated as the graph-level embeddings. The learnable weight matrix of graph convolution layers are W 1 , W 2 , · · · , W K . Additional, we choose feature-based augmentation methods such as Feature Masking or Feature Dropout. The feature matrix of two augmented views are X (1) and X (2) and the normalized adjacency matrix A of the augmented views remains unchanged. Thus, the embedding vectors for augmented views are: where K is the depth of the GNN encoder. Note the weight matrices can be written as product form, W := W 1 W 2 · · · W K . We study the characteristic of gradient chain (gradient descent with infinitesimally small learning rate) of the matrices product. We denote the gradient flow of W as: where t is the continuous time index since the infinitesimally small learning rate. We first perform the singular value decomposition (SVD) on matrices product: where σ m is the m-th singular value of W, u m and v m is the m-th column of U and V, respectively. We adopt the expression for the gradient flow of the matrices product from Theorem 1 in [3] as: where [·] α , α ∈ R + denotes power operator over positive semi-definite matrices. Substituting Eq. 11 into Eq. 10 we can analyze the gradient flow of singular values under InfoNCE loss. Theorem 2 The singular values of the product matrix W (t) evolve by: where The proof is in Appendix B.5. Theorem 2 states that the gradients flow of the m-th singular valueσ m for matrices product of graph convolution layer are proportional to the value of that singular value σ m . The larger singular values grow considerably faster than smaller ones, thus some of singular value of weight matrices turn out to be relatively small and close to zero. Moreover, singular value decomposition factorize a matrix into a combination of a group of orthonormal bases. The singular values represent scale of corresponding bases while the number of non-zero singular values equals to the rank of the matrix. Thus, the weight matrices product tend to be approximately low-rank. The embedding space is applying a linear transformation (weight matrices product) on the feature space. The rank deficiency of matrices product finally leads the embedding space to be low-rank, i.e., dimensional collapse of embedding space. Negative Pair : Linear threshold gate : Hadamard product Figure 4: Overview of nmrGCL which here involves two augmented views of each input graph. A shared GNN encoder g θ (·) encodes each augmented view as low-dimensional embeddings, where the darker colors represents the larger values. For positive pairs, the first embedding identifies prominent dimensions whose values are larger than a predefined threshold, and masks these dimensions of the second embedding with zeros. The removal operation is not performed on negative pairs. Proposed Approach As discussed in Sec. 
3, current graph contrastive learning paradigm suffers the dimensional collapse, where parts of embedding dimension lose efficacy in representing the input graph. As illustrated in Fig. 1(b), when the embeddings are compressed into a lower-dimensional subspace, samples with different labels are hard to separate along the axis. Henceforth, the obtained embeddings may suffer underfitting in downstream tasks e.g. graph classification. More concretely speaking, we note that [10,34,17] quantify the inter-class separability with Wasserstein distance between different classes. In fact, by increasing the distance between samples on collapsed dimensions, Wasserstein distance between classes can be increased, indicating stronger inter-class separability. Based on these evidences and analysis, we propose the non-maximum removal graph contrastive learning (nmrGCL) which directly set prominant dimensions to zero for one of positive pair. As shown in Fig. 4, given the input graph G, we first generate two augmented graph viewsG 1 andG 2 . Then each augmented view is encoded into a low-dimensional embedding with one shared encoder. After that, for positive pairs (i.e., two augmented views of the same graph), the embedding of the first augmented view conducts the removal operation on the embedding of the second one, to prevent the shortcut that only focuses on part of dimensions to maximize the similarity. Since the negative pairs (i.e., augmented views of the different graph) do not share the same prominent dimensions, the removal operation is not applied on the negative pairs. Finally, the parameters of the encoder are learned with contrastive objectives. Erasing Operation. The key innovation of our proposed nmrGCL approach is the removal operation which enforces the encoder to learn complementary embeddings. nmrGCL aims at mining information of inconspicuous dimensions rather than being restricted to scarce common prominent dimensions through an adversarial manner. After the GNN encodes two augmented views {G (1) ,G (2) } into embeddings {z (1) , z (2) }, the embedding of the second augmented view z (2) is erased with the guidance of z (1) . Formally, the augmented views of the input graph are transformed by the GNN encoder g θ (·) into a pair of embeddings r (a) ∈ R K×P , where a ∈ {1, 2}, K is the number of layers of GNN, and P is the number of output dimensions in each layer. A projection head h(·) composed of a 2-layer MLP and ReLU non-linearity is applied on all embeddings for optimization objective, with min-max scaling which is denoted as z (1) . Then a binary mask M ∈ {0, 1} F is created to conduct the removal operation on the embedding of the second augmented view z (2) by: where the threshold δ is a hyper-parameter. The erased embeddingẑ (2) as the complementary embedding can be obtained withẑ (2) = z (2) M, where is the Hadamard product. Model Training We train the nmrGCL end-to-end by maximizing the agreement between positive pairs {z j,+ = h(g θ (t 2 (G j ))) M , and z (2) j,− = h(g θ (t 2 (G j ))), where t a (·) is the augmentation function, g θ (·) is the GNN encoder, h(·) is the projection header, M is the mask in Eq. 13, and {+, −} stands for positive or negative pairs. The loss for the mini-batch with size B is given as follows, where sim(·, ·) denotes a similarity measure e.g. cosine used in the paper: The training process is described in Algorithm 1 of Appendix D. Related Work Graph contrastive learning. 
As one of the main approaches of self-supervised representation learning, contrastive learning based on mutual information maximization(MI) principle has raised a surge of attraction in computer vision [6,16] and natural language processing [42,23]. Inspired by visual contrastive learning, a series of graph contrastive learning methods are devised. Deep Graph InfoMax (DGI) [38] first applies the InfoMax principle to graph representation learning. DGI relies on maximizing the mutual information between patch-level and global-level representation of one graph. GMI [29] jointly maximizes feature MI and edge MI individually, without augmentation. MVGRL [15] generates two augmented graph view via graph diffusion and subgraph sampling. Based on SimCLR [6], GraphCL [44] enforces the embedding of positive pair (i.e., augmented views of the same graph) to be close and the embedding of negative pair (i.e., augmented views of different graphs) to be distant in the Euclidean space. CuCo [8] further utilizes the curriculum learning to select the negative samples. AD-GCL [36] optimizes adversarial graph augmentation to prevent learning redundant information. JOAO [43] adaptively selects the augmentation for specific dataset. Collapse in Self-Supervised Learning. Self-supervised contrastive learning methods may suffer from collapse problem as shown in Fig. 1(a), i.e., obtained embeddings degenerate to an constant vector. MoCo [16] and SimCLR [6] address the collapse problem by repulsing negative pairs in optimization objective. BYOL [12] propose momentum encoder, predictor and stop gradient operator to avoid the collapsed solution. SimSiam [7] simplifies the BYOL by removing the momentum encoder and shows that the remaining stop-gradient mechanism is the key component to prevent collapse in self-supervised learning. [46] theoretically analyze how SimSiam avoid collapse solution without negative samples. [19,21] point out the dimensional collapse and further show that strong augmentation and the weight matrices alignment cause the dimensional collapse in vision field. Baselines. We compare with three groups of baselines for graph classification. The first group is supervised GNNs including GCN [22] and GIN [39]. The second group includes the graph kernel methods: Weisfeiler-Lehman Sub-tree kernel (WL) [33] and Deep Grph Kernels (DGK) [40]. The last group includes unsupervised graph representation learning methods: Sub2Vec [2], Graph2Vec [26], InfoGraph [35], GraphCL [44], CuCo [8], AD-GCL [36], where the last four methods are state-of-the-art contrast-based graph representation learning methods. For node classification, we compare nmrGCL with supervised methods including GCN [22] and GIN [39] and contrast-based self-supervised methods: DGI [38], MVGRL [15], GCA [51] and CCA-SSG [47]. Implementation Details. For graph classification, we use the graph isomorphism network (GIN) [39] as the encoder for its expressiveness in distinguishing the structure to obtain the graph-level representation. Specifically, we adopt a three-layer GIN with 32 hidden units in each layer and a sum pooling readout function. Then the embeddings generated by the encoder are fed into the downstream SVM classifier. The threshold δ in the removal operation is set to 0.7. We utilize the 10-fold CV to train the SVM and record mean accuracy with standard variation of five-time trials. Other hyper-parameters remain consistency with the GraphCL [44]. 
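As a concrete reading of the removal operation (Eq. 13) and the contrastive objective (Eq. 14), with the threshold δ = 0.7 used above, the following is a small numpy sketch; the batch of precomputed projections, the cosine similarity, and the in-batch negatives are our own simplifying assumptions rather than the released implementation.

```python
import numpy as np

def min_max_scale(z):
    lo = z.min(axis=1, keepdims=True)
    hi = z.max(axis=1, keepdims=True)
    return (z - lo) / (hi - lo + 1e-12)

def removal_mask(z1, delta=0.7):
    # Eq. 13: zero out the "prominent" dimensions identified by the first view
    return (min_max_scale(z1) <= delta).astype(z1.dtype)

def nmr_nt_xent(z1, z2, M, tau=0.5):
    # Positives use the erased second view (z2 * M); negatives use the plain z2,
    # since the removal operation is not applied to negative pairs.
    def normalize(z):
        return z / (np.linalg.norm(z, axis=1, keepdims=True) + 1e-12)
    n1, n2, n2e = normalize(z1), normalize(z2), normalize(z2 * M)
    s = n1 @ n2.T / tau                        # B x B similarity matrix
    np.fill_diagonal(s, (n1 * n2e).sum(axis=1) / tau)
    s = s - s.max(axis=1, keepdims=True)       # numerical stability
    log_prob = s - np.log(np.exp(s).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(2)
B, F = 8, 32
z1 = rng.normal(size=(B, F))                   # projections of the first views
z2 = z1 + 0.1 * rng.normal(size=(B, F))        # projections of the second views

loss_plain = nmr_nt_xent(z1, z2, np.ones_like(z2))        # GraphCL-style loss
loss_nmr   = nmr_nt_xent(z1, z2, removal_mask(z1, 0.7))   # nmrGCL loss
print(f"without removal: {loss_plain:.3f}  with removal: {loss_nmr:.3f}")
```

The mask is recomputed from the first view of each pair, so the erased coordinates differ from sample to sample.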
For node classification tasks, we follow settings of DGI [38] which uses the GCN as the encoder and logistic regression downstream classifier. More details of the experimental setup can be found in Appendix E. Experimental Results Main results. The results of self-supervised graph classification are reported in Tab. 1. We can see contrast-based methods generally exceed both the graph kernel methods and traditional unsupervised methods, indicating the advantages of contrastive learning. nmrGCL outperforms other unsupervised representation learning baselines with significant improvement across eight of nine datasets, especially on sparse-graph, demonstrating the superiority of our approach. For example, nmrGCL achieves 80.61% accuracy on dense-graph dataset DD, surpassing GraphCL by 1.99% accuracy and CuCo by 1.41% accuracy individually. Meanwhile, the nmrGCL achieves 73.83% accuracy on sparse-graph dataset IMDB-B, exceeding the GraphCL by 2.69% and AD-GCL by 2.34% accuracy, respectively. The results are attributed to the key component in our approach: the non-maximum removal operation, which allows information to be represented in all dimensions of embeddings rather than concentrating on a small number of the prominent dimensions. The similarity between two embeddings is no longer dominated only by prominent dimensions in both embeddings as in previous works. Finally, with a more uniform distribution of information, all dimensions of the embeddings contribute when performing downstream classification tasks, bringing about a notable improvement. However, results of contrast-based methods show a large variance due to the randomness, which indicates that improving the stability of graph contrastive learning is a valuable direction. Tab. 2 reports the results of node classification. Since the GNN encoder in the graph classification task has one pooling layer which is not in the encoder for the node classification task, only the weight matrices of graph convolution layer cause the dimensional collapse in node classification tasks. Thus our approach is more suitable for graph-level classification. Results show that our method achieves competitive performance compared to the SOTA self-supervised node classification methods. Ablation study. Ablation studies on graph classification are conducted to verify the superiority of our non-maximum removal operation for learning complementary embeddings. We design following 5 variants of the proposed nmrGCL: Table 3: Graph classification accuracy (by mean and std) for transfer learning by ROC-AUC. The model is pre-trained on ZINC 2M dataset and transferred to other four datasets via fine-tuning. The settings follow [18]. nmrGCL-non-min: for positive pairs, the dimensions of the second embedding to be erased are the smallest dimensions of the first embedding instead of the largest ones. (5) nmrGCL-learn: the removal mask is learned by randomly initialized learnable parameters with Sigmoid function. Fig. 5 compares nmrGCL and its variants, from which we make the following observations. First, the classification results decrease if the non-maximum removal operation is deleted, verifying the efficacy of leaning the complementary embedding. nmrGCL-rand improves the GraphCL but does not outperform the nmrGCL. nmrGCL-bi has the similar performance with nmrGCL. The nonmin removal operation can hardly improve and even hurts the model. nmrGCL-learn has different performance on different datasets. 
On datasets such as DD and IMDB-B which have a large edge-node ratio (715/284, 96/19), learnable mask strategy outperforms the nmrGCL. On NCI1, REDDIT-M datasets with small edge-node ratio (32/29, 594/508), the learnable mask strategy performs poorly. Parameter Sensitivity and Transfer Learning Study Sensitivity study w.r.t. the removal threshold δ. We vary the value of the crucial hyper-parameter threshold δ in the removal operation in Eq. 13 from 0.0 to 1.0 on four datasets, and the results are shown in Fig. 6. Unexpectedly even if the second embedding is completely removed when δ = 0, the randomly initialized untrained encoder still reaches a competitive performance, because the SVM can classify graphs directly based on node features. In general, we find that when the threshold δ is less than 0.7, the classification accuracy grows with the increase of the threshold. The optimal value of δ is 0.6-0.8 for most datasets, with the exception of dataset NCI1, where the optimal value is 0.4. Sensitivity study w.r.t. intensity of augmentations. Since the sparsity of augmented views affects the common prominent dimensions of both embeddings, we vary the ratio of nodes, edges or features discarded in graph augmentation including NodeDrop, EdgeDrop and FeatureMasking on four datasets. From the results in Figure 7, we observe that classification performance degenerates as the intensity of augmentation grows overly high. The optimal modification probability for most datasets is 0.1 to 0.3. These results are in record with the observation that graph-data are sparse and hard to be recovered after discarding information. Transfer learning study. We conduct experiments on four large-scale datasets to evaluate the transferability in predicting the molecular property. The encoder is pre-trained on the ZINC dataset without label and fine-tuned on other datasets, where all settings follow [18]. We select baselines including no-pre-trained GIN, GraphCL [44] and strategies used in [18] including EdgePred, AttrMasking and ContextPred. The result is shown in Tab. 3 with mean and standard deviation of ROC-AUC score for five trials. nmrGCL achieves the best performance on three of four datasets and outperforms GraphCL on all datasets. Detailed setup of transfer learning is in Appendix E.3. Conclusion and Broader Impact We have pointed out and theoretically analyzed a problem called dimensional collapse in graph contrastive learning (GCL), where information of embeddings concentrates on parts of dimensions. We identify that graph pooling and convolution layers of GNNs specifically cause the dimensional collapse in GCL. To alleviate the dimensional collapse, we propose the graph complementary contrastive learning named nmrGCL. For each positive pair, nmrGCL identifies "prominent" dimensions in embedding of the first augmented view and erases these dimensions in the second, which is considered as the complement of the first. The complementary embedding helps the encoder learn neglected information and enhance the distinguishability of the embedding. Experiments show that nmrGCL significantly outperforms the state-of-the-art SSL graph classification methods. We did not identify any technical limitation, nor potential negative impact of our methods to the society. A More Related Works Graph neural networks. Graph Neural Networks (GNNs) have attracted growing attention for analyzing graph-structured data in recent years. Generally, GNNs are categorized into spatialdomain and spectral-domain approaches. 
Based on the spectral graph theory, [5] first defines the graph convolution in the spectral domain through the eigen-decomposition of the graph Laplacian, defectively causing high computational cost. Graph Convolution Network [22] utilizes the 1-st approximation of the Chebyshev expansion to simplify the calculation. Spatial-based approaches follow a message passing scheme [1], where each node collects the information from its neighbors iteratively. GraphSAGE [14] aggregates the information from randomly sampled neighborhoods to scale to large graphs. GAT [37] introduces the attention mechanism to assign scores for each node pair. GIN [39] generalizes the Weisfeiler-Lehman test and reaches the most expressive power among GNNs. Comparison to DirectCLR [21]. The theoretical analysis in DirectCLR does not apply to the graph domain due to the different mechanisms in GNN and CNN. DirectCLR attributes the dimensional collapse in vision field to two reasons: too strong augmentation on images and the interplay between adjacent fully connected layers. In comparison, our paper concentrates on the graph field and analyze that two specific component in GNN cause the dimensional collapse in the GCL, i.e., the graph pooling layers smooth the variance between embeddings of positive pair and the product of multi-layer graph convolutions has the tendency to be low-rank. In terms of the algorithms, our nmrGCL removes the prominent dimensions in one of positive pairs in pre-text tasks while the DirectCLR simply removes the projection head. Removal of projection head cannot improve the downstream classification accuracy in the graph domain since it cannot address the alignment between embeddings of positive pair. Our methods may be extended to visual contrastive learning since the implicit regularization exists in deep neural network. B Proofs in Sec. 3 B.1 Derivation of Gradient of InfoNCE w.r.t Embeddings The InfoNCE loss for the first augmented view of i-th graph is given by: The gradient of InfoNCE w.r.t positive pairs embeddings can be derived: The gradient of InfoNCE for the other view can be simply derived due to the symmetry. B.2 Proof of Lemma 1 Lemma 1 Given a graph G, two augmented views are generated by randomly discarding p-proportion nodes. The expected number of nodes shared by the two augmented graph is (1 − p) 2 · N Proof. Let V be the set of nodes in G. and let V (1) and V (2) be sets of nodes in two augmented graphs. V (1) and V (2) are independent permutation of V , where |V (1) | = |V (2) | = (1 − p)|V |. The probability that any node n is in both augmented graph is (1 − p) 2 . By linearity of expectation, since the total number of nodes is N , the expectation of nodes in both augmented graphs is 3 Proof of Theorem 1 Theorem 1 Given a cycle graph G = (V, E) and node drop portion p, embeddings of two augmented views indexed by 1 and 2 are obtained by Eq. 4. Then we have the lower bound for expectation of cosine similarity: where n is the ratio of common and unique nodes in two augmented views with expectation E[n] = (1 − p)/p discussed in Lemma 1. Proof. A cycle graph G = (V, E) is a simple graph such that |V | = |E| = N (N > 3). Thus, each node is connected exactly to two nodes. The feature represented by a one-hot vector of each node is x i ∈ R F . Two augmented graphs G (1) = (V (1) , E (1) ), G (2) = (V (2) , E (2) ) are generated by randomly dropping p portion of nodes, |V (1) | = |V (2) | = p · N . 
Denote the ratio of common and unique nodes in two augmented views with n: The expectation of n can be derived with Lemma 1: Denote the set of commonly retained nodes in both augmented graphs as V c , the set of uniquely retained nodes in each augmented nodes as V (1) u , V (2) u for short. Without loss of generality, the nodes in these sets are selected sequentially and consecutively as follows: Given the single layer graph convolution defined in Eq. 4, the aggregated node features vectors of two augmented graph become: Applying the average pooling (readout) layer on node embeddings, the graph-level embeddings can be obtained with: The expression for cycle graph embeddings can be generalized to regular graphs. Since the number of nodes N can be very large in real datasets, thus we can have the approximation: Note that the embedding can be divided into two parts: the aggregation of common nodes features and the aggregation of unique nodes features. Assume that each element in one-hot encoding feature of each node follows a Bernoulli distribution, ∀i, j, x i [j] ∼ Bern(q). Then we have: Then the cosine similarity between two embeddings becomes: Additionally, the cosine similarity reaches the minimum when (S c − S 1 ) ⊥ S 1 and (S c − S 2 ) ⊥ S 2 . sim(z (1) , z (2) ) ≥ S c 2 + S 1 2 + S 2 2 + S 1 , S 2 S c 2 + S c · S 1 + S c · S 2 + S 1 · S 2 ≥ Sc 2 S1 2 + 2 Sc 2 S1 2 + Sc S1 + Sc S2 + 1 Thus, the expectation of similarity is bounded by: where n is the ratio of common nodes and unique nodes in two augmented views with expectation E where σ m is the m-th singular value of W, u m and v m is the m-th column of U and V, respectively. Proof. Given a matrix W with singular value decomposition W = UΣV . Differentiate the SVD of W w.r.t time:Ẇ Since U(t) and V(t) have orthonormal columns, multiply U (t) from the left and V(t) from the right and have: Since Σ(t) is a diagonal matrix where m-th diagonal entry equals the m-th singular value σ m , we focus on the diagonal singular values: Since u m and v m are unit-norm bases, we have u m (t),u m (t) = 1 2 d dt u m (t) 2 2 = 0 and v m (t), v r (t) = 0 similarly. Thus the equation becomes: where in Eq. 6. Proof. We first adopt the following equation from Theorem 1 in [4]: where [·] α , α ∈ R + is the power operator on positive semi-definite matrices. Substitute the singular value decomposition of matrix in Eq. 9 into the above equation: Again, since V and V consist of orthonormal columns, i.e., ∀i, j, i = j, u i u j = v i v j = 0. Thus, we multiply v m from left side and u m from right side on both hands of equation: The gradient of loss on W also can be obtained with the chain rule: . The gradients of InfoNCE loss w.r.t embedding vectors are derived in Eq. 6 and here denoted as g z (a) i . Thus, finnaly we have the following expression for derivative of singular values: where i ), and g z (k) i is the gradient over z (k) i in Eq. 6. C Details of Used Graph Augmentation Approaches Given a graph G = (V, E, X) where V is the node set, E is the edge set, and X is feature matrix. The augmented graph viewG can be represented get with following methods: EdgeAdd and EdgeDrop purtubs the adjacency matrix by randomly add or discard a portion of edges in graphs. Formally, the adjacency matrix of augmented graph is: where M d , M a ∈ {0, 1} N ×N are dropping and adding matrix. Specifically, M d and M a first copy A and (1 − A) respectively and then randomly mask a portion of ones with zeros, where the ratio of dropped or added edges is a hyper-parameter p. 
NodeDrop perturbs the structure of the given graph by randomly discarding a portion of nodes together with their features and incident edges. Since most GNNs are used in a transductive manner, we do not consider NodeAdd for node perturbations in this paper. Similar to EdgeDrop, each node in the graph has a probability p of being dropped; the incident edges in the adjacency matrix and the features of the dropped nodes are then masked with zeros. NodeShuffling perturbs the structure of the given graph by randomly shuffling a portion of nodes with their features and connected edges: the feature vectors of a p-portion of nodes are moved to other positions while the adjacency matrix remains the same. FeatureMasking randomly masks a portion of feature dimensions in all nodes. We first generate a mask M_F ∈ {0, 1}^F where each entry follows a Bernoulli distribution with parameter (1 − p); the node feature matrix of the augmented graph is then obtained by applying the mask to every node's feature vector, i.e., each row x_n is replaced by x_n ⊙ M_F, where ⊙ is the element-wise product. FeatureDropout masks the feature vectors of a p-portion of nodes with zeros. SubgraphSampling generates a subgraph with random walks as follows: start a random walk from a node; the walk travels to a neighbor with probability proportional to the edge weights, and returns to the start node with a hyper-parameter probability p. The augmented graph is the subgraph induced by the nodes visited during the walk, and each node in the subgraph is reordered by its sequence of first appearance.

D Training Algorithm
Algorithm 1 Training procedure of non-maximum removal for graph contrastive learning
Input: Training set G = {G_j}_{j=1}^{|G|}, GNN encoder g_θ(·), augmentation distribution T, threshold δ, mask matrix M, batch size B
Output: The pre-trained encoder g_θ(·)
1: Randomly initialize the parameters θ of the GNN encoder and set all elements of M to 0.
2: for each mini-batch B sampled from G do
3:   for k = 1, 2, · · · , B do
4:     Select two augmentation methods t_1, t_2 from T
11: Update the parameters of g_θ(·) and h(·) with the Adam optimizer by minimizing L.

Unsupervised Node Classification Datasets. We use 4 benchmark datasets, including Pubmed, Coauthor-CS, Amazon-Photo, and Amazon-Computers [32], with statistics shown in Tab. 4. Pubmed is a widely used node classification dataset containing one citation network, where nodes are papers and edges are citation relationships. Coauthor-CS is a co-authorship graph where nodes are authors and edges represent the existence of co-authorship in any paper. Node labels are the most active fields that each author works on, and node features are the keywords of each author's papers. Amazon-Photo and Amazon-Computers are two co-purchase networks where nodes are products and the connectivity of nodes depends on the frequency of co-purchase. Each node has a feature vector derived from product reviews and is labeled with a category.

Graph Classification Datasets in Transfer Learning. We use the ZINC dataset for pre-training and 4 datasets (BBBP, ToxCast, SIDER, and ClinTox) for downstream tasks. These datasets contain biological interactions and chemical molecules from [18].

E.3 Experiment Configuration
Unsupervised graph classification. The model is evaluated following [44]. We use a 3-layer GIN with 32 hidden units per layer and a sum readout function. We use the LIBSVM implementation with default settings in sklearn [28] as the downstream classifier. Unsupervised node classification. Following [38] we employ a 2-layer GCN encoder with 512 hidden units and an l2-regularized logistic regression classifier in sklearn [28] for downstream tasks.
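The evaluation protocol just described is a standard linear probe on frozen embeddings; as a rough sketch (random embeddings and labels stand in for the encoder outputs), it amounts to the following.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
Z = rng.normal(size=(200, 32))       # frozen graph/node embeddings from the encoder
y = rng.integers(0, 2, size=200)     # downstream labels

# Graph classification: SVM with default settings, 10-fold cross-validation.
svm_acc = cross_val_score(SVC(), Z, y, cv=10).mean()

# Node classification: l2-regularized logistic regression on top of the embeddings.
lr_acc = cross_val_score(LogisticRegression(penalty="l2", max_iter=1000), Z, y, cv=10).mean()

print(f"SVM accuracy: {svm_acc:.3f}  logistic regression accuracy: {lr_acc:.3f}")
```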
Graph classification in transfer learning. We pre-train the encoder on the ZINC dataset and fine-tune on 4 other datasets. We use a 5-layer GIN encoder with 300 hidden units for consistency with [18]. A linear classifier is added on top of the graph-level embeddings to predict downstream graph labels. In the fine-tuning process, the parameters of the pre-trained encoder and the linear classifier are optimized together end-to-end. We use the data, processed in pytorch-geometric form, provided by [18]. All datasets for the unsupervised graph and node classification tasks can be downloaded and pre-processed with pytorch_geometric. For all three pretext tasks, we train the model with the Adam optimizer with learning rate 0.001, 500 epochs, and batch size 256, following previous works. We also use an early-stopping mechanism. The train/validation/test split for all downstream tasks is 80%:10%:10%.

F Comparison of Augmentation Methods

Since our method is asymmetric, we conduct experiments using different augmentations in the setting of the unsupervised graph classification tasks. All settings and configurations remain consistent with the previous unsupervised graph classification tasks. We exhaustively compare every combination of two graph augmentation methods, including Edge Add (EA), Edge Remove (ER), Feature Dropout (FD), Feature Masking (FM), Node Drop (ND), Node Shuffling (NS), and Random Walk Subgraph (RWS), on 8 datasets. The augmentation hyper-parameter p is set to 0.7 for all combinations. The means over 5 trials are shown in Fig. 8, from which we make the following observations. First, the performance varies significantly when using different augmentation approaches. For example, on the COLLAB dataset, the combination of FD and RWS achieves only 68.27% accuracy, while the combination of NS and ER achieves 75.07% accuracy. Thus, we argue that using compound augmentation approaches, e.g., discarding a portion of nodes and masking a portion of features at the same time for one view, could be a direction for further research on graph contrastive learning. Self-supervised classification performance may be stabilized and improved by adaptively or learnably compounding the augmentation approaches. Second, we observe that on some datasets, such as COLLAB, IMDB-B, and NCI1, using two identical augmentation approaches for the two augmented views obtains relatively good, and sometimes the best, classification results. This observation provides a reference for designing augmentation schemes in the graph domain.
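A small sketch of the enumeration behind this comparison (the training and evaluation of each pair is stubbed out; treating the pairs as ordered is an assumption made here because the method is asymmetric):

```python
from itertools import product

AUGS = ["EA", "ER", "FD", "FM", "ND", "NS", "RWS"]

def evaluate_pair(aug_view1, aug_view2, p=0.7):
    # Placeholder: generate the two views with intensity p, pre-train nmrGCL,
    # and evaluate the downstream SVM accuracy.  Stubbed out in this sketch.
    raise NotImplementedError

pairs = list(product(AUGS, repeat=2))   # ordered pairs; includes identical pairs
print(len(pairs), "augmentation pairs, e.g.", pairs[:3])
```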
2022-03-25T01:15:39.175Z
2022-03-24T00:00:00.000
{ "year": 2022, "sha1": "da925c21ca185846855e9f1d4af10c35ff59141f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "da925c21ca185846855e9f1d4af10c35ff59141f", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
17351133
pes2o/s2orc
v3-fos-license
XPath Node Selection over Grammar-Compressed Trees XML document markup is highly repetitive and therefore well compressible using grammar-based compression. Downward, navigational XPath can be executed over grammar-compressed trees in PTIME: the query is translated into an automaton which is executed in one pass over the grammar. This result is well-known and has been mentioned before. Here we present precise bounds on the time complexity of this problem, in terms of big-O notation. For a given grammar and XPath query, we consider three different tasks: (1) to count the number of nodes selected by the query, (2) to materialize the pre-order numbers of the selected nodes, and (3) to serialize the subtrees at the selected nodes. Introduction An XML document represents the serialization of an ordered node-labeled unranked tree. These trees are typically highly repetitive with respect to their internal node labels. This was observed by Buneman, Koch, and Grohe when they showed that the minimal DAGs of such trees (where text and attribute values are removed) have only 10% of the number of edges of the trees [2]. The DAG removes repeating subtrees and represents each distinct subtree only once. A nice feature of such a "factorization" of repeated substructures, is that many queries can be evaluated directly on the compressed factored representation, without prior decompression [2,6]. The sharing of repeated subtrees can be generalized to the sharing of repeated (connected) subgraphs of the tree, for instance using the sharing graphs of Lamping [9], or the straight-line (linear) context-free tree (SLT) grammars of Busatto, Lohrey, and Maneth [3]. The recent "TreeRePair" compressor [11] shrinks the (edge) size of typical XML document trees by a factor of four, with respect to the minimal unranked DAG (cf. Table 4 in [11]). It was shown by Lohrey and Maneth [10] that tree automata and navigational XPath can be evaluated in PTIME over SLT grammars, without prior decompression. This is used to build a system for selectivity estimation for XPath by Fisher and Maneth [5]. Roughly speaking, the idea is to translate the XPath query into a certain tree automaton, and to execute this automaton over the SLT grammar. In this paper we make these constructions more precise and give complexity bounds in terms of big-O notation. We use the "selecting tree automata" of Maneth and Nguyen [13] (see also [1]), in their deterministic variant. Similar variants of selecting tree automata have been considered in [15,16,17]. We explain how XPath queries containing the child, descendant, and following-sibling axes can be translated into our selecting tree automata. It is achieved via a well-known translation of such XPath queries into DFAs, due to Green, Gupta, Miklau, Onizuka, and Suciu [7]. We then study three different tasks: (1) to count the number of nodes that a deterministic top-down selecting tree automaton selects on a tree represented by a given SLT grammar, (2) to materialize the pre-order numbers of the selected nodes, and (3) to serialize, in XML syntax, the depth-first left-to-right traversal of the subtrees rooted at the selected nodes. The first problem can be solved in O(|Q||G|) where Q is the state set of the automaton, and G the SLT grammar. The second and third problem can be solved in time O(|Q||G| + r) and time O(|Q||G| + s), respectively, where r is the number of selected nodes and s is the length of the serialization of the selected subtrees. 
Note that the length s can be quadratic in the size of the tree represented by G (e.g., if every node is selected). Thus, s is of length (2 |G| ) 2 if G compresses exponentially. We show how to obtain a compressed representation of this serialization by a straight-line string grammar G ′ of size O(|Q||G|r). Most of the constructions of this paper are implemented in the "TinyT" system. TinyT and a detailed experimental evaluation is given by Maneth and Sebastian [14]. is acyclic and connected. The grammar G produces exactly one tree, denoted by val(G). It can be obtained by repeatedly replacing nonterminals A ∈ N by their definition P(A), starting with the initial tree P(S). Replacements are done in the obvious way: a subtree A(t 1 , . . . ,t k ) is replaced by the tree P(A) in which y i is replaced by t i for 1 ≤ i ≤ k. We define the rank of the grammar as the maximum of the ranks of all its nonterminals. We extend the mapping val to nonterminals A and define val(A) as the tree obtained from A(y 1 , . . . , y k ) by applying the rules of G (and treating the y i as terminal symbols). The tree val(A) is a binary tree with internal nodes in U + and leaves labeled or y i . Each y i with 1 ≤ i ≤ k occurs once, and y 1 , . . . , y k occur in pre-order of val(A). The size of an SLT grammar G is defined as the sum of sizes of the right-hand side trees of all rules. The size of a tree is defined as its number of edges. Example. Consider the SLT grammar G 1 with three nonterminals S, B, and T , of ranks zero, one, and zero, respectively. It consists of the following productions: It should be clear that the tree val(G 1 ) produced by this grammar is the binary tree shown on the right of Figure 1. XPath to Automata We consider XPath queries without filters. In Section 5 we explain how filters can be supported. Such queries are of the form Q = /a 1 :: t 1 /a 2 :: t 2 / · · · /a n :: t n where a i ∈ {child, descendant, following-sibling} and t i ∈ { * } ∪ U + . Thus, we support two types of node tests (i) a local (element) name and (ii) the wildcard "*", and support three axes: child, descendant, and following-sibling. For a query Q and XML tree t we denote by Q(t) the set of nodes that Q selects on t. We do not define this set formally here. It was shown by Green, Gupta, Miklau, Onizuka, and Suciu [7] that any XPath query Q containing only the child and descendant axes can be translated into a deterministic finite state automaton DFA(Q). Note that their queries and automata also allow to compare text and attribute values against constants. The DFA constructed for a given query, is evaluated over the paths of the unranked XML input tree. When a final state is reached at a node, then this node is selected by the query. Their translation is a straightforward extension of the "KMP-automata" for string matching, explained for instance in the chapter on string matching in [4]. If there are no wildcards in the query, then Green et al show that the size of the obtained DFA is linear in the size of the query. In the presence of wildcards, the DFA size is exponential in the maximal number of *'s between any two descendant steps (see Theorem 4.1 of [7]). To understand their translation, consider the following example query: where "//" denotes the descendant axis (more precisely, it denotes the query string "/descendant ::"), and "/" denotes the child axis. The corresponding automaton DFA(Q 1 ) is shown in Figure 2. 
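Since DFA(Q1) itself is given in Figure 2, here is a hand-written sketch for the simpler query //a//b (two descendant steps, so no failure transitions are needed); the transition table below is our own illustration of how such a DFA can be represented and run over a root-to-node label path, not the automaton of the figure.

```python
# States: 0 = no 'a' seen yet, 1 = an 'a' seen on the path,
#         2 = accepting ('a' seen strictly above and the current node is a 'b').
TRANSITIONS = {
    0: {"a": 1},
    1: {"a": 1, "b": 2},
    2: {"a": 1, "b": 2},
}
DEFAULTS = {0: 0, 1: 1, 2: 1}   # target state for any other label
ACCEPTING = {2}

def selected_on_path(labels):
    """Return the positions on a root-to-node label path selected by //a//b."""
    state, hits = 0, []
    for i, label in enumerate(labels):
        state = TRANSITIONS[state].get(label, DEFAULTS[state])
        if state in ACCEPTING:
            hits.append(i)
    return hits

print(selected_on_path(["r", "a", "x", "b", "b", "c"]))   # -> [3, 4]
```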
For a sequence of children steps, the idea is similar to KMP [8]: when reading a new symbol that fails, we compute the longest current postfix (including the failed symbol; this is the difference to KMP) that is a prefix of the query string and add a transition to the corresponding state. Care has to be taken for wildcards, because then (in general) we need to remember the symbol read; in the example (at state 1) it suffices to know whether it is an a, or not. Selecting Tree Automata Selecting tree automata are like ordinary top-down tree automata operating over binary trees. They use special "selecting transitions" to indicate that the current node should be selected. In this paper we use deterministic selecting automata. Similar such nondeterministic automata have been considered by Maneth and Nguyen [13]. Since the XML trees may contain arbitrary labels in U + , we require that each state of the automaton has one default rule. The default rule is applied if no other rule is applicable. Definition 1 A deterministic selecting top-down tree (DST) automaton is a triple where Q is a finite set of states, q 0 ∈ Q is the initial state, and R is a finite set of rules. Each rule is of one of these forms: where q, q 1 , q 2 ∈ Q and w ∈ {%} ∪ U + . The symbol % is a special symbol not in U. Let q ∈ Q. We require that (1) there is exactly one rule in R with left-hand side (q, %), called the default rule of q, and (2) for any w ∈ U + there is at most one rule in R with left-hand side (q, w). A rule of the first form is called non-selecting rule and of the second selecting rule. The semantics of a DST automaton should be clear. It starts reading a tree t in its initial state q 0 at the root node of t. In state q at a node u of t labeled w ∈ U + it moves to the left child into state q 1 and to the right child into state q 2 , if there is a rule (q, w)β (q 1 , q 2 ) with β ∈ {→, ⇒}. If β =⇒, i.e., the rule is selecting, then u is a result node. If A has no such rule, then the default rule is applied (in the same way). The unique run of A on the tree t determines the set A (t) of result nodes. Assume we are given an XPath query Q with child and descendant axes only and consider its translated automaton DFA(Q). It is straightforward to translate the DFA into a DST automaton. If the DFA moves from q to q ′ upon reading the symbol a, then the DST automaton has the transition (q, a) → (q ′ , q); this is because the right child corresponds to the next sibling of the unranked XML tree, and at that node we should still remain in state q and not proceed to q ′ . The DST automaton that corresponds to the DFA of Figure 2 is: Consider now a general XPath query in our fragment, i.e., one that contains child, descendant, and the following-sibling axes. Consider each maximal sequence of following-sibling steps. We can transform it to a DFA by simply treating them as descendant steps and running the translation of Green et al. The obtained DFA is transformed into a DST automaton by simply carrying out the recursion on the second child only, i.e., if the DFA moves form q to q ′ on input symbol a, then the DST automaton has the transition (q, a) → (dead, q ′ ), where "dead" is a sink state. We merge the resulting automata in the obvious way to obtain one final DST automaton for the query. E.g. for XPath query /a/following-sibling :: b/c we obtain the following DST automaton: Theorem 1 For an XPath query Q we can construct a DST automaton A such that A (t) = Q(t) for every tree t. 
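As a minimal sketch of the run semantics of Definition 1, the following hand-written toy automaton (for the query /a/b over the first-child/next-sibling encoding; not one of the automata constructed above) collects the pre-order numbers of the selected nodes.

```python
# A binary (first-child / next-sibling encoded) tree node: (label, left, right).
# left = first child in the unranked tree, right = next sibling; None = missing.
tree = ("a",
        ("b", None, ("c", None, ("b", None, None))),   # children of the root: b, c, b
        None)

# DST automaton for /a/b: q0 at the root, qc scans the root's children, qd is a sink.
RULES = {            # (state, label) -> (select?, left-state, right-state)
    ("q0", "a"): (False, "qc", "qd"),
    ("qc", "b"): (True,  "qd", "qc"),
}
DEFAULTS = {"q0": (False, "qd", "qd"),
            "qc": (False, "qd", "qc"),   # wrong label: skip, keep scanning siblings
            "qd": (False, "qd", "qd")}

def run(node, state, pre=1, out=None):
    """Run the automaton, returning the pre-order numbers of selected nodes."""
    if out is None:
        out = []
    if node is None:
        return pre, out
    label, left, right = node
    select, ql, qr = RULES.get((state, label), DEFAULTS[state])
    if select:
        out.append(pre)
    pre, out = run(left, ql, pre + 1, out)
    pre, out = run(right, qr, pre, out)
    return pre, out

print(run(tree, "q0")[1])    # -> [2, 4]: the two b-children of the a-root
```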
The size of A can be bounded according to Theorem 4.1 of [7]. In particular, if there are no wildcards, then the size |A | of A is in O(|Q|). Automata over SLT Grammars This section describes how to perform counting, materialization, and serialization for the set of nodes A (t) selected by the DST automaton A on the tree t = val(G) given by the SLT grammar G. Note that the case of counting was already described by Fisher and Maneth [5]; they consider queries with filters and containing more axes than in our fragment (e.g., supporting the following axes), and therefore obtain higher complexity bounds (cf. Section 5). Counting We build a "count evaluator" which executes in one pass over the grammar, counting the number of result nodes of the given XPath query. The idea is to memoize the "state-behavior" of each nonterminal of the SLT grammar, plus the number of nodes it selects. Proof. Let G = (N, S, P) and let H G be its hierarchical order Theorem 2 Given an SLT grammar G and a DST automaton We compute a mapping ϕ in one pass through the rules of G, in reverse order of H G , i.e., starting with those nonterminals A for which P(A) does not contain nonterminals. For each nonterminal A of rank k and state q ∈ Q we define ϕ(A, q) = (q 1 , . . . , q k , n) where q i ∈ Q and n is a non-negative integer. The q i are chosen in such a way that if we run A on P(A) then we reach the y i -leaf in state q i , and n is the number of selected nodes of this run. We start in state q at the root node of P(A), and set our result counter for this run to zero. If we meet a nonterminal B during this run, say, in state q ′ , then its ϕ value is already defined; thus, ϕ(B, q ′ ) = (q ′ 1 , . . . , q ′ m , n ′ ). We continue the run at state q ′ i at the i-th child of this nonterminal in P(A). We also increase our result counter for q and A by n ′ . If we meet a selected terminal node, then we increase the result counter by one. The final result count is stored as the number n in the last component of the tuple in ϕ (A, q). Finally, when we are at the start nonterminal S, we compute its entry ϕ(S, q 0 ) = (n). This number n is the desired value |A (G)|. Since we process |Q|-times each node of a right-hand side of the rules of G, we obtain the stated time complexity. Materializing Here we want to produce an ordered list of pre-order numbers of those nodes that are selected by a given DST automaton over an SLT grammar G. Clearly, this cannot be done in time O(|Q||G|) because the list can be of length |val(G)|. First we produce a new SLT grammar G ′ that represents the tree obtained from val(G) by marking each node that is selected by the automaton A . For each occurrence of a nonterminal B in the righthand sides of the rules of G, there is at most one new nonterminal of the form (q, B, q 1 , . . . , q k ), where q, q 1 , . . . , q k are states of A . The construction is similar to the proof of Theorem 2: instead of computing ϕ(A, q) = (q 1 , . . . , q k , n), we construct a rule of the new grammar G ′ of the form (q, A, q 1 , . . . , q k ) → t, where t is obtained from P(A) by replacing every nonterminal B met in state q ′ by the nonterminal (q ′ , B, q ′ 1 , . . . , q ′ m ) where ϕ(B, q ′ ) = (q ′ 1 , . . . , q ′ k , n) for some n. When during such a run a selecting rule of A is applied to a terminal symbol a, then we relabel it byâ. 
Finally, to be consistent with our definition of SLT grammars (which does not allow non-reachable (useless) nonterminals because the hierarchical order is required to be connected), we remove all non-reachable nonterminals in one run through G ′ . Lemma 1 Let G be a k-SLT grammar and A a DST automaton. An SLT grammar G ′ can be constructed in time O(|Q||G|) so that val(G ′ ) is the relabeling of val(G) according to A . Note that in Theorem 5 of [10] it is shown that membership of the tree val(G) with respect to a deterministic top-down tree automaton (dtta) can be checked in polynomial time. The idea there is to construct a context-free grammar for the "label-paths" of val(G); for a tree with root node a and left child leaf b, a 1 b is a label path. It then uses the property that the label-path language of a dtta is effectively regular. While we do not translate queries using count and ancestor, the automaton for this particular query is easy to construct: it uses three states q 1 , q 2 , q 3 to count the number of nodes modulo three. For simplicity the SLT grammar G: DST automaton A : Figure 3: A relabeling SLT grammar G ′ with start production q 1 , A 0 , for a given SLT grammar G with respect to a DST automaton A for query Q 2 . example is on a monadic tree, not an XML tree; therefore the rules of A are of the form q, % → q ′ , i.e., the right-hand side contains only one state instead of two. The figure also shows the SLT grammar G ′ , representing the relabeling according to Lemma 1. One can verify that G ′ produces the correct relabeled tree, by computing val(G ′ ) : (â(a(a(â(a(a(â(a(a(â(a(a(â(a(e)...) Theorem 3 Let G be an SLT grammar and A be a DST automaton. Let r = |A (val(G))|. We can compute an ordered list of pre-order numbers of the nodes in A (val(G)) in time O(|Q||G| + r). Proof. Let G = (N, S, P). By Lemma 1 we obtain in time O(|Q||G|) an SLT grammar G ′ whose tree val(G ′ ) is the relabeling of val(G) with respect to A . The list of pre-order numbers is constructed during two passes through the grammar G ′ . First we compute bottom-up for each nonterminal A (of rank k) the off-sets of all relabeled nodes that appear in P(A). An offset is a pair of integers (c, o) where 0 ≤ c ≤ k is a chunk number, and o is the position of a node within a chunk. A chunk is the part of the pre-order traversal of P(A) that is before, between, or after parameters. I.e. when A is of rank k, then there are k + 1 chunks: the chunk of the traversal from the root of P(A) to the first parameter y 1 which has chunk number 0, the chunks of the traversal between two parameters y i and y i+1 (with number i), and the chunk after the last parameter y k with number k. We construct a mapping ϕ that maps a nonterminal A, a state q, and a chunk number c to a pair (n, L) where n is the total number of nodes in the chunk and L is the list of off-sets, in order. We now do a complete pre-order traversal through the grammar G ′ , while maintaining the current-preorder number u in a counter. When we meet a nonterminal A in chunk c with a non-empty list L of off-sets, we add u to each offset and append the resulting list to the output list. Serialization Here we want to output the XML serialization of the result subtrees rooted at the result nodes of a query (given by a DST automaton). Again, we want the output in pre-order. Theorem 4 Let G be an SLT grammar and A a DST automaton. Let s be the sum of sizes of all subtrees rooted at the nodes in A (val(G)). 
We can output all result subtrees of A (val(G)) in time O(|Q||G| + s). Proof. The proof is similar to the proof of Theorem 3 in that it runs in two passes over grammar G ′ whose tree val(G ′ ) is the relabeled one according to Lemma 1. During the bottom-up run through the grammar, we construct a mapping ϕ that maps a nonterminal A, a state q, and a chunk number c to a sequence S of opening and closing brackets of the pre-order traversal corresponding to A, q, and c. Then during the complete pre-order traversal though G ′ we construct a sequence S ′ of opening and closing brackets containing only result subtrees of A (val(G)) and pointers to marked elements for nested result nodes. At a nonterminal A, in a state q, and a chunk c we first start appending to S ′ if ϕ(A, q, c) contains a marked node. Then when meeting nonterminals A, in state q, and chunk c inside marked nodes subtrees we always append ϕ(A, q, c) to S ′ , and we store pointers to marked nodes. Finally, based on the obtained sequence S ′ , the selected subtrees are serialized by following the |A (val(G))| pointers to their roots in S ′ . We can do better, if we are allowed to output a compressed representation of the concatenation of all result subtrees. In fact, the result stated in Theorem 4, follows from Theorem 5. We can construct a straight-line string grammar (SLP) in time O(|G|) that generates the pre-order traversal of the tree val(G), see Figure 4 for an example. But, what about an SLP that outputs the concatenation of all pre-order traversals of the marked subtrees? What is the size of such a grammar? If every node is marked, and the original tree has N nodes, then the length of the represented string is in O(N 2 ). Theorem 5 Given an SLT grammar G and a subset R of the nodes of val(G), an SLP P for the concatenation of all subtrees at nodes in R (in pre-order) can be constructed in time O(|G||R|). Proof. We assume that the nodes in R are given as pre-order numbers. Let us first observe that for a given SLT grammar H, an SLP grammar of the pre-order traversal of val(H), using opening and closing labeled brackets (for instance in XML syntax) can be constructed in time and space O(|H|), following the proof of Theorem 3 of [3] (they state O(|G|k) because they count the number of nonterminals of the SLP). In one preprocessing pass through G we compute the length of every chunk of every nonterminal. Let now u be a pre-order number in R. Using the information of the chunk lengths, we can determine, starting at the right-hand side of the start nonterminal, which nonterminal generates the node u. We keep the respective subtree of the right-hand side, and continue building a larger sentential tree, until we obtain a sentential form that has the desired terminal node of u at its root. The obtained sentential tree t is of size O(|G|). We introduce a new nonterminal S u with rule S u → t. This process is repeated for each node in R. Finally we construct a new start rule which in its right-hand side has the concatenation of all S u 's with u ∈ R. The size of the resulting grammar is O(|G||R|). Finally, we produce the SLP for the traversal strings, as mentioned above. Let us consider milder tree compression via DAGs [2], by 0-SLT grammars that do not use parameters y j . In this case we can improve the result of Theorem 5 as follows. 
Theorem 6 Given a 0-SLT grammar G and a subset R of the nodes of val(G), an SLP G′ for the concatenation of all subtrees at nodes in R (in pre-order) can be constructed such that G′ is of size O(|G| + |R|).

Proof. We first bring the grammar G into "node normal form". This means that the right-hand side of each rule contains exactly one terminal symbol. Note that this may increase the number of nonterminals, but does not change the size of the grammar. Now, each subtree of val(G) is represented by a unique nonterminal. The grammar G′ is obtained from G by considering G as a string grammar in the obvious way, and then changing the start production such that its right-hand side is the concatenation (in pre-order) of the nonterminals corresponding to nodes in R.

It is easy to extend Theorem 6 to slightly more general compression grammars: the hybrid DAGs of Lohrey, Maneth, and Noeth [12]. A hybrid DAG of an unranked tree is obtained by first building the minimal unranked DAG, then constructing its first-child next-sibling encoding (seen as a grammar), and then building the minimal DAG of this grammar. The hybrid DAG of an unranked tree is guaranteed to be smaller than (or equal to) the minimal unranked DAG and the minimal binary DAG (= DAG of first-child next-sibling encoded binary trees). Theorem 6 is extended by bringing the unranked DAG into node normal form.

XPath Filters

An XPath filter (in our fragment) checks for the existence of a path, starting at the current node. It is written in the form [./p] where p is an XPath query as before. For instance, the query first selects those b-nodes that have somewhere below the path c/d/e, and which also have an a-child that has a b-child. Starting from such b-nodes, the query selects the f-children, and then the g-children thereof. It is well known that such filters can be evaluated using deterministic bottom-up tree automata. For each filter path p in the query we build one bottom-up automaton (this construction is very similar to our earlier construction of DST automata), in time linear in the size of p. We then build the product automaton A of all the filter automata. The size of this automaton is the product of the sizes of all filter paths in the query. If we run this automaton over a given input tree, then it will tell us for each node of the tree which filter paths are true at that node. Thus, for a given SLT grammar G, if we build the intersection grammar with our bottom-up filter automaton A, then the new nonterminals (and terminals) are of the form where m is the rank of A and p, p1, . . . , pm are n-tuples of filter states. Such a tuple p tells us the states of each filter automaton and hence the truth value of all the filters. Given an XPath query with filters, we first build the combined filter automaton A. We then build, for a given SLT grammar G, the bottom-up intersection grammar GA. We remove the filters from the query and build the DST automaton B as before. However, now we annotate the rules of this automaton by information about filters: if at a step of the query that corresponds to state q of B the filters f1, . . . , fm appear in the query, then the q-rule is annotated by these filters; when we evaluate top-down we check whether the filters are true, using the annotated information of the intersection grammar GA. It is shown in Theorem 1 of [10] that for a bottom-up automaton and a k-SLT grammar, the intersection grammar can be produced in time O(|Q|^(k+1)|G|).
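Before lifting this to grammars, it may help to see the bottom-up annotation on an ordinary, uncompressed tree. The following Python sketch is only an illustration of the idea behind the filter automata (it is not the construction of [10] and it does not operate on SLT grammars); the node and path representations are assumptions made for the example. For every node it records which child-axis filter paths, such as ./c/d/e, hold at that node.

from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    children: list = field(default_factory=list)
    filters_true: set = field(default_factory=set)   # indices of filters that hold at this node

def annotate(root, paths):
    """One bottom-up pass; paths is a list of label sequences, e.g. [["c","d","e"], ["a","b"]]."""
    def visit(node):
        child_states = [visit(c) for c in node.children]
        realized = []                                 # realized[j]: positions i such that node realizes paths[j][i:]
        for j, p in enumerate(paths):
            pos = set()
            for i, lab in enumerate(p):
                if node.label != lab:
                    continue
                if i == len(p) - 1 or any(i + 1 in cs[j] for cs in child_states):
                    pos.add(i)
            realized.append(pos)
            if any(0 in cs[j] for cs in child_states):  # some child realizes the full path
                node.filters_true.add(j)
        return realized
    visit(root)

# tiny example: the root b-node has the path c/d/e below it and an a-child with a b-child
t = Node("b", [Node("a", [Node("b")]), Node("c", [Node("d", [Node("e")])])])
annotate(t, [["c", "d", "e"], ["a", "b"]])
print(t.filters_true)   # {0, 1}

On a grammar-compressed tree the same kind of information is attached to nonterminals instead of nodes, which is what the intersection grammar GA records; the product construction over all filters is what makes the automaton, and hence the intersection grammar, grow.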
Theorem 7 Let G be an SLT grammar and A a DST automaton with filter automata F1, . . . , Fn; the sets of states are Q, Q1, . . . , Qn, respectively. Let r = |A(val(G))| and k be the rank of G. We can construct a grammar G′ which represents val(G) with all result nodes marked, in time O(|Q|(|Q1| · · · |Qn|)^(k+1)|G|).

The complexity stated in Theorem 7 is rather pessimistic and we believe that it can be improved. We are applying a result about deterministic bottom-up automata from [10]. We do want to execute our filter automata bottom-up, but they are in fact deterministic top-down automata. In future research we would like to improve the worst-case complexity stated in the theorem above by taking this into account. Consider filters over the child axis only, e.g., [./a/b/c]. Instead of using a bottom-up automaton for the filter and constructing an intersection grammar according to [10] in time O(|Q|^(k+1)|G|), we use a top-down automaton for the "relative" query ./a/b/c; it can be constructed similarly to our DST automata. Via Lemma 1 we obtain a marking grammar G′ in time O(|Q||G|). We now want to transform this grammar so that instead of the c-nodes, their grandparent a-nodes are marked. How expensive is this transformation? It seems that in the worst case each occurrence of a nonterminal in G′ must be changed into a distinct copy (and recursively for the new right-hand sides). This would run in time O(|G′|^2). Can it be improved? How can we handle other axes such as descendant? In which cases is this solution more efficient than the one of Theorem 7?
Towards the Autonomy: Control Systems for the Ship in Confined and Open Waters

The concept of the Marine Autonomous Surface Ship (MASS) requires new solutions in many areas: from law, through economics, social sciences, environmental issues to the technology and even ethics. It also plays a central role in the work of numerous research teams dealing with ship motion control systems. This article presents the results of the experiments with application of the selected control methods in automatic steering of the movement of an autonomous ship in two regimes: during the maneuvers at low speed (in harbor confined waters) and during the lake trials in open water conditions. In the first case, a multidimensional state controller synthesized with Linear Matrix Inequalities (LMI) algorithms was used, while, in the second case, Model Predictive Control (MPC) was adopted. The object for which the experiments were carried out was a 1:24 scale model of a Liquefied Natural Gas (LNG) carrier. The paper also presents the design of the measurement and control system and the user interface. The experiments were conducted in natural conditions on a lake. The results of the experiments indicate the fundamental role of the measurement system in the process of controlling an autonomous ship.

Introduction

Contemporary scientific research related to the marine control field is carried out, inter alia, as part of the "Autonomous Ships" project at the Norwegian University of Science and Technology, the "Autoship" project in Horizon 2020, "The Mayflower Autonomous Ship" concept by Promoting Marine Research and Exploration (PROMARE), and the Autonomous Vessel with an Air Look (AVAL) project conducted in Poland. The number of research activities carried out and the interest of scientists point to large application possibilities of the created solutions. All these projects are interdisciplinary ones involving cooperation of hydrodynamicists, electricians, navigators, and control professionals. According to the researchers' experience in the marine control field, Marine Autonomous Surface Ship (MASS) control systems can be divided into four main areas.
• The first area is the autonomous calculation of the optimal trajectory, also known as ships' autonomous navigation [1][2][3][4][5]. Generating automatic trajectories for navigational maps that include harbor infrastructure, like piers, was partially described in Reference [6][7][8]. In Reference [9], a novel three-step approach for WSL (Water-Shore Line) detection is proposed to solve this problem through the information of an image sequence. Firstly, the initial line segment pool is built by the line segment detector (LSD) algorithm.
• The second area is verification of the proposed control algorithms, which should take into consideration safety at sea rules, like in Reference [10]. In Reference [11], analysis of the autonomous ship is explored, and system-theoretic process analysis (STPA) and the functional resonance analysis method (FRAM) are identified as the most representative new methods that can be used for hazard analysis of autonomous ships.
• In the third area, autonomous ship control is connected with power management [12,13]. In Reference [14], the authors show the importance of autonomous power management, its impact on fuel consumption, and the need to use intelligent, self-learning algorithms.
• The fourth area is a concept of autonomous ship control for both cruising and maneuvering speeds.
For example, one can refer to the project called Advanced Autonomous Waterborne Applications Initiative (AAWA) created by Rolls-Royce and Kongsberg, described in Reference [15,16]. A project of an autonomous transport system applicable for coastal waters and areas beyond the inlands is also described in Reference [17,18]. However, the above four areas should be integrated already from the design stage, all the way through calculations, navigation and safety, to be a real assistance for the ship's operators. This kind of control was described in 2018 by the International Maritime Organization (IMO) and called MASS (Maritime Autonomous Surface Ships) [19]. This IMO standardization has defined the following four levels of operation for MASS. The first level is a manned ship with automated processes and decision support. The second, a remotely controlled ship with seafarers on-board. The third is a remotely controlled ship without seafarers on-board. The fourth level is a fully autonomous ship [20]. As described in Reference [21], augmentation of present IMO-mandated vessel environmental sensor systems with future capability is essential to achieving situational awareness for MASS and ensuring proper supervision and traceability of decision-making. Questions are now being asked whether smart ships should be fully autonomous, remote controlled, or manned with a skeleton crew, who would ultimately be responsible for the ship in question, and how smart ships would affect sea traffic. Some of these problems have been discussed in Reference [22]. MASS issues, as well as the problems they pose to science, are presented with great precision and direct indication of many sources in Reference [23]. As described in Reference [24,25], the autonomous navigation decision-making system is the core of MASS technology, and its effectiveness directly determines the safety and reliability of navigation, playing a role similar to a human 'mind'. During a voyage, the thinking and decision-making process is very complex.

This article will present the results of the research carried out on Silm Lake in Iława, Poland, for the "Dorchester Lady" training ship model. Additionally, visualizations of the MASS processes are presented. Usually, ship motion control research is tested on software simulation models. The main reason for such an approach is the cost of full-scale ship usage. The model of the Liquefied Natural Gas (LNG) tanker used in this project to test the control system is built in large scale (1:24), which gives a very good approximation of conditions for a real sea-going ship. The model is fitted partly with real marine navigation equipment, too. This factor gives uniqueness to this research.

In MASS, there is a need for cooperation of the reference safe trajectory generation and control subsystems. The first of them is usually an anti-collision system [2], the result of which is a set of waypoints (WPTs) defining a safe ship trajectory. These WPTs are computed according to the International Regulations for Preventing Collisions at Sea (COLREG). The control subsystem is designed to provide the ability to move along the designated route. As mentioned in Reference [23], there are 4 types of control in MASS: speed control, course control, stabilization control, path-following and trajectory tracking. Speed control, separately, does not really find application in MASS because MASS operation is based on following a route.
It may be a part of a trajectory tracking system, combined with course control, which will be discussed in a detailed way in Section 3.1 of this publication. Stabilization control in MASS is applied for service vessels, where dynamic positioning is the main task of the control system, e.g., PID control with feedforward action [26], robust adaptive control [27], or state-space control [28]. Path-following and trajectory tracking are the most commonly used strategies in MASS. Path-following, in contrast to trajectory tracking, does not require the ship to be at a certain WPT at a certain time. So, it is the most common concept in MASS automatic control. One can apply relatively simple methods such as a PID control scheme with a switching approach for different operating conditions [29], more computationally complicated ones such as optimal robust control combined with roll stabilization [30], and also use artificial intelligence, applying a neural path-following controller to the ship [31]. Trajectory tracking is also popular in the MASS control concept, e.g., using sliding mode control [32], robust adaptive control [33], or artificial intelligence [34]. The overall concepts of the research trends in path-following and trajectory tracking are convergent. Beyond them, predictive techniques, also proposed in Section 3.2, have their place [35,36] as well.

Training Ship

Conducting research on an autonomous ship requires: a safe trajectory generation system, an appropriate controller, and vessel equipment adapted for automatic control. Today, due to the lack of relevant regulations, it is not possible to use real commercial vessels for scientific research. Therefore, a scaled-down floating training ship, one of the small fleet owned by the Foundation for Safety of Navigation and Environment Protection, was used. This ship, the Liquefied Natural Gas (LNG) Carrier "Dorchester Lady", presented in Figure 1, was adapted for autonomous shipping. She is described in detail in Reference [37][38][39]. The training ship has been built in scale 1:24 according to geometric, kinematic and dynamic similarity laws. Only the Reynolds number cannot be kept constant, due to the fact that ship and model move in the same environment; so, the complete kinematic and dynamic similarity to the full-scale ship is not obtained. This leads to relevant seagoing ship dynamics mapping for training and research purposes. Training ship particulars are presented in Table 1. The model of the LNG carrier is equipped with two DC motor driven azipods with counter-flow propellers, a tunnel thruster and an azimuth thruster, both located on the bow. The model operates in manned mode using signals from the gyrocompass and Global Positioning System (GPS) receiver. External disturbances, like wind force and direction, are measured by the anemometer. Training ship positions are determined with centimeter accuracy due to the GPS system working in the Real Time Kinematic (RTK) mode. Measurable external disturbances, like wind force and direction, may be used as inputs in the optional feedforward controller.

Automation of the Ship Motion Control Processes

Automation of the ship motion control process requires synthesis of the control system for the desired vessel. It is based on the control law creation and its application to a real vessel. In general, three types of control modes are distinguished: trajectory tracking, path following, and reference speed tracking.
Fully functional MASS requires the cooperation of three subsystems (Figure 2):
- Supervisory navigation system, where the safe trajectory is generated based on the waypoints sequence, voyage management data, and information about other ships moving in the vicinity, taking into account the International Regulations for Preventing Collisions at Sea (COLREG).
- Control system, where, based on the course and speed reference signals, the desired actuators' commands are computed. In this subsystem, the controller cooperates with the state observer and the thrust allocation system for low-speed multidimensional control.
- Controlled plant, i.e., the ship equipped with controllable actuators and measurement devices.

Firstly, an algorithm based on Linear Matrix Inequalities (LMI) was tested for low speeds, but it did not work in open water. Hence, research was started to develop another control algorithm for high speed, and, in this case, Model Predictive Control (MPC) was the right choice. The "failed" results are not presented here due to the limitation of the text length. The advantage of the controller based on linear matrix inequalities is the size of the gain matrix K = [3 × 6]; additionally, the multidimensional control for low speeds worked perfectly during the verification on the lake, where, as the figure shows for the individual speeds u, v, r, there was no cross-coupling, which is a great advantage of this research. MPC is a control strategy using an internal model of the ship in order to predict her future motion. The ship is characterized by high inertia, which makes the control more complicated and reduces its quality. MPC usage allows inclusion of the ship dynamics into the future control signals calculation process. So, due to these advantages, this control scheme is applicable for such plants as ships, and our research experience shows that it may be successfully applied to the MASS.

Multidimensional Control of Autonomous Ship Maneuvering in Port

The ship control algorithm for movement in a restricted area along a selected trajectory was created using linear matrix inequalities (LMI). As a restricted area, we mean confined waters, like a harbor area or a lock entrance. The control object, the "Dorchester Lady" ship model, is a nonlinear object, especially at low velocities; therefore, during ship dynamics modeling for controller synthesis, linearization of the model around its working point was used. The identification process of the model took into consideration:
- a stationary Kalman filter system [40] (this system is used for u, v, r velocities estimation), because the "Dorchester Lady" ship model was not equipped with instruments for measuring linear velocities and thus the need exists for a Kalman filter system,
- a thrust allocation system used for converting the three components of the generalized force vector to the vector T with seven components of the propulsion devices' control signals.

The controlled object has three input signals: τx, τy, τp, and three output signals: û, v̂, r̂, where: [τx] is the reference force (thrust) on the ship's longitudinal axis, [τy] is the reference force (thrust) on the ship's lateral axis, and [τp] is the reference rotational moment. The method of power distribution between the individual propellers is determined by the number and type of devices installed and their arrangement in or under the hull. Therefore, there is no single commonly used algorithm, and each such arrangement is basically designed individually. In the case of the training vessel, this system is based on Moore-Penrose pseudo-inverse matrix calculations.
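The pseudo-inverse allocation just mentioned can be illustrated with a few lines of numerical code. The sketch below is a simplified illustration only: the 3 × 4 configuration matrix and its lever arms are made up for the example and do not correspond to the actual "Dorchester Lady" actuator layout (which involves seven control signals and azimuthing drives).

import numpy as np

# rows: tau_x, tau_y, tau_p; columns: individual actuator forces (illustrative geometry)
B = np.array([
    [1.0, 1.0, 0.0, 0.0],    # contributions to the longitudinal force tau_x
    [0.0, 0.0, 1.0, 1.0],    # contributions to the lateral force tau_y
    [-0.1, 0.1, 0.9, 1.1],   # contributions to the yaw moment tau_p (assumed lever arms)
])

def allocate(tau):
    """Map the generalized force vector [tau_x, tau_y, tau_p] to actuator forces
    using the Moore-Penrose pseudo-inverse (minimum-norm solution)."""
    return np.linalg.pinv(B) @ np.asarray(tau, dtype=float)

forces = allocate([10.0, 2.0, -1.0])
print(forces)        # one force per actuator
print(B @ forces)    # reproduces the commanded generalized forces

The pseudo-inverse gives the smallest (in the least-squares sense) combination of actuator forces that produces the commanded τx, τy, τp; practical allocators additionally handle actuator saturation and azimuth angles, which is why each installation is designed individually, as noted above.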
The method used in the allocation was analogous to those described for the thrust allocation of the "Blue Lady" model ship in Reference [38,41]. Figure 3 shows the signals transmitted from the controller to the allocation systems. The diagram in Figure 4 shows that the autonomous ship steering at low speed consists of two main stages. The first stage concerns the synthesis of the low-speed regulator based on the LMI. The second stage, marked in green on the same diagram, concerns the synthesis of the trajectory regulator. This means that the controller input signals are the differences between the reference and filtered velocity signals, and the controller output signals are three force signals, τx, τy, and τp, which are sent to the thrust allocation system. Matrices A, B, and C of the controlled object, the "Dorchester Lady" ship model, have the below form:

The basic canonical form of linear matrix inequalities is (based on Reference [42]): where the matrices F0, . . . , Fm ∈ R^(n×n) are real and symmetric (a symmetric matrix satisfies Fi = Fi^T for i = 0, . . . , m), and the term "≻ 0" means that the matrix F(x) is positive definite. LMI conditions create a convex set of constraints that has to be formulated for the state-space controller synthesis process. The synthesis of the regulator is based on three conditions shown in Figure 5. This means calculating the gain matrix K for the state-space controller described by the formula:

To calculate matrix K, one must know the values of matrices X and Y, which are calculated using optimization software based on the defined LMI conditions. For this we assume that matrix X is symmetric and positive definite and that its inverse, the real matrix Y^(-1), exists. The "Yalmip" and "SeDuMi" libraries for MATLAB software were used for controller synthesis [41,43]. After the calculations, the controller matrix K has the below form:

The important fact is that the control matrix is a full matrix (and not diagonal), which means that all three velocities are controlled at the same time and are interconnected.

Autonomous Ship Open Water Trajectory Tracking

An autonomous ship moving at operational speed is a problem classified as open water ship motion control. One of the methods for determining the trajectory return path, a straight line along which the ship is returning to the reference trajectory, is based on the ship's return course computation. The line-of-sight (LOS) algorithm may then be used [44]. It is determined based on three consecutive waypoints: the passed, the closest, and the next one, combined with the present ship's position. An example of the ship's positioning relative to the reference trajectory is shown in Figure 6. The intersection of the return course (ψlos) with the reference trajectory (xlos, ylos) is determined by the cross-track error (yerr) and the line length (dlos). The trajectory return course is defined by the equation in [44], where (x, y) is the current ship's position. The trajectory tracking controller at operational speed, for the "Dorchester Lady" training ship, was created with the use of Model Predictive Control (MPC) technology. The internal plant model was identified with the use of the MATLAB System Identification Toolbox. Due to the faster calculation time, it was decided to use a linearized state-space model describing the relationship between the azipod angle of rotation (δ) and the ship's rotational velocity (r).
The predictive incremental state-space model [38] has the form described below: where: -x k+1 -predicted next state; - x k -predicted current state; -u k -current control signal (azipod angle of rotation δ); -y k -current output signal (rotational velocity r). Matrices A, B, C, and K of the controlled object, the "Dorchester Lady" ship model, have the form: The internal identified model links azipod angle of rotation with the rotational velocity in order to allow for future control signal (azipod angle of rotation) predictions. Sub-optimal control signals are computed during constrained quadratic programming optimization procedure based on the cost function: where: γ u ,γ y -output signal change and error weight coefficients; -r, y, δu-reference, output, and control signal change values; -(k + p|k)-signal value at k + p time moment predicted in k time moment; -N, N u -prediction and control horizon lengths. Indeed, the proposed solutions relate specifically to the Dorchester Lady model; we did not present more general identifications. Our knowledge was based on the non-linear training ships model proposed by Reference [37]. The LMI controller applied for the presented MASS was described in a detailed way in Reference [41,43]. The MPC controller was based on the linaerized incremental model presented in Reference [38]. Essential Components Arrangement of the Autonomous Training Ship Training ships used in Iława Ship Handling Research and Training Center are fully functional models of seagoing vessels, used for marine officers training and for research. The "Dorchester Lady" is equipped, i.e., with Anschütz Standard 20 Gyro Compass, GPS Reciver Leica System 1200, and Gill WindObserver ultrasonic anemometer. Signals from the aforementioned navigational equipment are transmitted using National Marine Electronics Association (NMEA) 0183 standard, working on the basis of serial links. Three RS-232 and RS-422 channels are used to connect devices with automatic control system, which is presented in Figure 7. Communication between control system and ship actuators is also realized using RS-232 standard. Full-mission automatic control system is operating in one of three modes: trajectory tracking-in which it cooperates with safe trajectory generation subsystem. After defining a safe and achievable trajectory, reference waypoints are transformed into reference control signals-reference course and main engine set-points. maneuvering mode in a restricted area-where ship movement is defined by the set of waypoints and desired ship's heading. This is the way the ship moves, e.g., when approaching the quay. Surge, sway, and yaw are controlled. Operation in this mode requires not only azipods usage; bow and azimuth thrusters are also activated to perform necessary motions. -Last Minute Maneuver (LMM) for collision avoidance-where safe trajectory generation subsystem defines thrusters' setpoints allowing for collision avoidance or minimizing its effects (switch input signals marked by red in Figure 7). When tracking trajectories at operating speed only main propulsion and a steering gear of the ship are active. LNG carrier "Dorchester Lady" is propelled and steered by the azipods. Maneuvering in a restricted area requires the use of all installed thrusters, which efficiency is high at the low speeds. All thrusters are electrically powered, so it is hardly possible to use them all together working with maximum power. 
There is a need to use thrust allocation system, which distributes energy between individual actuators based on reference longitudinal and transversal thrusts and rotational moment. The control system implemented on board of the training ship is designed with the use of MATLAB/Simulink software. Industrial computer IPC934-230-FL equipped with 8-port Quatech Serial Device Server is used as a Simulink Real-Time Target (SLRT)-programmable controller. Application of the hardware solution described above allows for ship's realtime control, fast prototyping using host computer equipped with MATLAB software and real-time data acquisition for the visualization purposes. HMI for Research and Documentation Purpose Standard navigational data visualization form is to show them on the standard Electronic Chart Display (ECDIS) screen. But, in the presented case, there is a need to control not only reference trajectory, current ship's position, and heading for automatic control or autonomous operation of the ship. The supervisor during the system tests needs, moreover, knowledge about actuators settings, control errors, control signals' histories, external disturbances, and operating mode, as well as to have ability to switch to the manual control in case of emergency. Autonomous Training Ship (ATS) data visualization is implemented in a dedicated application built in MATLAB AppDesigner tool. A graphical user interface has been created, for which the main screen shown in Figure 8a has the following functionalities: -Autonomous ship selection option: after choosing a ship to control, in panel "Actuators" thrusters configuration corresponding to those actually installed on the particular training ship is presented. Their setpoints are updated and visualized every second. -"Start", "Stop", and "Save" buttons: they are available depending on the state of the SLRT controller software and allow the user to start and stop application and save data from its memory. One of the important reasons why the entire user interface has been object-programmed using MATLAB language is the possibility of reading data directly from the controller in real-time and plotting them on the graphs. Exemplary screens demonstrate longitudinal, transversal, and rotational ship's speeds, heading, and cross track error (Figure 8b,c). User can manually define timescale for each graph separately. There is also tab called "Manual Control" in the application (Figure 8d). Controls placed there allow for ship's manual control via SLRT. After switching from "Automatic Control" to "Manual", all controls located on the right side of screen are enabled. There is possibility to change all thrusters' setpoints. Their current values are presented on the left side of screen. In the "Velocities" panel, the ship motion vector is shown. Adjustment of the control signal values may be done via SLRT structure, where all tunable parameters were compiled. If there is any discrepancy between value set in the Target controller and one stored in the SLRT structure, the parameter in the controller is then adjusted. The lack of an efficient data readout mechanism from SLRT Target brought timer usage on. In each time step, all readable parameters from SLRT structure are compared with these displayed in the user interface in previous time step. If there are any differences, displayed values are adjusted. Results In the MASS ship, motion may be divided into two main parts, namely port maneuvers and trajectory tracking. 
In order to present the way of whole system operation, we have prepared two sample sets of maneuvers. The first of them presents quay departure in confined waters, and the second one presents trajectory tracking in open waters results. The main idea of the first maneuver is to show that there is a possibility to realize safe quay departure in a fully autonomous ship control system. Reference trajectory and course are given by the superior reference trajectory generation subsystem, which takes into account quay, port infrastructure, and fairway signs positions. The role of ship motion system is then restricted to the reference tracking. The LNG carrier is highly non-linear, multidimensional control object. During low-speed, port maneuvers azipods and bow thruster are used, so there is a need of multidimensional control system application. Ship motion in the harbor should be controlled precisely in order not to collide with the other ships and infrastructure in the restricted waters. This approach requires position and course into reference velocities (u, v and r) recalculation. Control quality is assessed then based on them. In the system, it was required that the overshoot for each speed should not exceed 20%. The second described case concerns trajectory tracking under normal operating conditions. We decided to present results of the reference tracking, which is generated by the superior safe trajectory generation system as a set of WPTs. The course is not given in this case by the superior system; it is counted as a bearing between two consecutive WPTs. This system takes into account water depth and changes reference in order not to collide with the other ships. Measure of the quality of regulation is defined as steady-state cross-track error, which cannot exceed ship's breadth. The trials of the control systems were carried out in the Ship Handling Research and Training Center in Iława. The multidimensional low-velocities system with the LMI controller was tested in the port area, while the full speed MPC control system in the open waters of the lake. Figure 9 shows the training ship trajectory and recorded histories of the key parameters of the port departure maneuver. The reference trajectory marked on the upper part of the figure by dashed line consisted of 4 waypoints. Their data are presented in Table 2. The longitude and latitude numbers are set in World Geodetic System (WGS) 84 format. One can observe satisfactory controller performance in the longitudinal and rotational channels, while, in the transversal channel, quality of the control is poor, especially in the final part of the maneuver. The control system working with very small levels of the set-point values for thrusters is extremely sensitive for wind gusts. This is important, particularly, for ships with high lateral area, like LNG carriers. Please note that the wind velocity measured by the anemometer should be scaled up to the size of ship model by square root of the scale multiplication factor ( √ 24). Therefore, feedforward controller compensating wind influence seems to be necessary. Figure 10 shows MPC-LOS trajectory tracking results and recorded histories of the key parameters of the maneuver. Reference trajectory, marked on the upper part of the figure by red line, consists of four waypoints. They are presented in global Earth coordinate system. Their data are presented in Table 3. The longitude and latitude numbers are set in WGS 84 format. 
This way of presenting the results was used in order to emphasize the good trajectory tracking performance. The cross-track error (XTE) is then shown as a difference between the ship's and the reference trajectory in meters. The admissible cross-track error, lower than the ship's breadth, is marked with the red line in the y_e graph. It is shown that the recorded cross-track error goes beyond the acceptable range only on turns, where XTE is not a parameter that can be used to assess the quality of regulation. The upper part of Figure 10 shows that turns are made without overshoot and the trajectory is tracked without oscillations. The apparent wind speed indicates the level of wind disturbance and is reflected in the oscillations of the azipods' angle of rotation. Use of a feedforward controller in future research is likely to minimize this effect. These oscillations are seen because the MPC controller is sensitive to the model and plant mismatch and the controller itself reacts to a fast-changing wind disturbance that has already occurred. In the lowest part of the figure, the LOS rotational velocity reference (dashed line) and its tracking (solid line) are shown. A delay in the setpoint tracking close to the control horizon is observed. The tests showed that one of the features of the LMI controller is the ultimately small size of the control matrix [3 × 6], and the results of the regulated speeds relative to the setpoints are within 20% overshoot. As the training vessel is to be a research vessel, it was decided that for high open-water speeds a controller with MPC prediction would perform much better. The measured steady-state cross-track error is within the predefined limit, not exceeding ± the ship's breadth. The most important issue was to verify the results in terms of MASS, i.e., autonomy, safe harbor maneuvers, the possibility of tracking a reference trajectory provided by a separate system, control process visualisation, and the cooperation of all the subsystems, rather than to develop and test automatic ship control algorithms in a detailed way. This goal of the research has been achieved.

Lower part: recorded signals marked with the following symbols: y_e, v_w, δ_z, r, r_ref — cross-track error ("-" sign indicates that the ship has the reference trajectory on the port side), apparent wind speed, azipod angle of rotation ("-" sign indicates that the azipods are rotated to the port side), rotational velocity ("-" sign indicates counterclockwise rotation), and reference rotational velocity, respectively; the reference rotational velocity is marked with a dashed line.
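The two guidance quantities used in these results, the cross-track error y_e and the LOS return course, can be computed with a few lines of code. The sketch below is a common textbook formulation and is not necessarily the exact equation of Reference [44]; the ship position and waypoints are assumed to be already expressed in a local metric frame (for example, after projecting the WGS 84 waypoints), with x pointing north and y east.

import math

def cross_track_error(wp_prev, wp_next, pos):
    """Signed distance from the ship position to the segment wp_prev -> wp_next
    (the sign convention depends on the chosen frame)."""
    px, py = wp_next[0] - wp_prev[0], wp_next[1] - wp_prev[1]
    sx, sy = pos[0] - wp_prev[0], pos[1] - wp_prev[1]
    return (px * sy - py * sx) / math.hypot(px, py)

def los_course(wp_prev, wp_next, pos, d_los):
    """Return course aiming at a point on the reference path located a lookahead
    distance d_los ahead of the projection of the ship onto the path."""
    px, py = wp_next[0] - wp_prev[0], wp_next[1] - wp_prev[1]
    path_len = math.hypot(px, py)
    s = ((pos[0] - wp_prev[0]) * px + (pos[1] - wp_prev[1]) * py) / path_len
    x_los = wp_prev[0] + (s + d_los) * px / path_len
    y_los = wp_prev[1] + (s + d_los) * py / path_len
    return math.atan2(y_los - pos[1], x_los - pos[0])   # psi_los in radians, measured from north

In the trials above, the LOS block provides the reference (shown as the dashed line in the lowest part of Figure 10) that the MPC controller then tracks.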
Currently, the legislation concerns the adoption of 4 degrees of autonomy (from the level where the ship's systems support the decisions of the master present on the ship to level four, defining an unmanned, self-manipulating and self-executing ship). On IMO websites, the Commission informs about the regulation planned for 2020 on rights and rules appropriate to the new situation for research work and training conditions for the MASS type. An important aspiration of the new regulation may be the need to undertake training activities for crews, operators, and vessel traffic control groups.

The conducted research, presented in this paper, allowed for the creation of a fully functional motion control system for MASS based on the training ship. This indicates the legitimacy of further research in the autonomous ship field. Development of algorithms for automatic control of the ship's motion in various operating conditions is an essential part of the process of creating a seagoing autonomous merchant ship. MASS, apart from favorable legal conditions, requires work on algorithms for determining a safe trajectory and their combination with a full-mission controller. The challenge is, naturally, to build a reliable motion control system for an autonomous full-size ship for her entire voyage from port to port. Promising results in this field have been achieved, as shown in the draft in Reference [45]. In addition, one more aspect should be highlighted besides the algorithms and calculation methods of the control signals used. It seems to be even more important in the autonomous ship motion control technology. It is the measurement system. The experiments with the ATS described in this work have shown that the gyrocompass, precise GPS in RTK mode, anemometer, and reliable measurements of the propulsors are sufficient to control a scale model ship. In the case of the full-size one, a deliberate policy ought to be introduced to establish a trustworthy measurement system for ship motion control.

Conclusions

The research presented in this work has shown that it is possible to control the motions of the autonomous ship in different stages of her voyage using the LMI and MPC control paradigms. The experiments were conducted in the real environment using the scale model training ships. One of the more important aspects of the control system is to design a trustworthy measurement structure to keep the autonomous ship aware of the environmental factors. The most important issue was to verify the results in terms of MASS, i.e., autonomous rather than automatic control of the vessel. Therefore, it is innovative to carry out a number of maneuvers on the lake to test autonomous vessel control considering the two methods, LMI and MPC. This is the first, extremely necessary step to test the algorithms based on the anti-collision solutions needed for autonomy according to MASS4. We plan to test other, more demanding control techniques and to assess which of the control algorithms will work best in autonomous ship control.
Corrigendum to "Global Stability of Malaria Transmission Dynamics Model with Logistic Growth"

In the article titled "Global Stability of Malaria Transmission Dynamics Model with Logistic Growth" [1], there was a sign error in the global stability analysis of the endemic equilibrium point, where a preceding sign should be minus (-) instead of plus (+) in equation (34) and inequality (35). The corrected equation and inequality are shown as follows:

\frac{dV}{dt} = n_h(1 - n_h)\left(2 - \frac{n_h^*}{n_h} - \frac{s_h^*}{s_h}\right) + \alpha s_h^*\left(2 - \frac{n_h}{n_h^*} - \frac{s_h}{s_h^*}\right) + \beta s_h^* i_V^*\left(3 - \frac{i_h}{i_h^*} - \frac{n_h i_V^*}{n_h^* i_V} - \frac{s_h i_h^* i_V}{s_h^* i_h i_V^*}\right) + \beta s_h^* i_V^*\left(1 - \frac{i_V^*}{i_V}\right)\left(\frac{i_V}{i_V^*} - \frac{n_h}{n_h^*}\right) + \rho i_h^*\left(\frac{n_h^*}{n_h} - 1\right)\left(\frac{n_h}{n_h^*} + \frac{i_h}{i_h^*}\right) + n_V^* i_V^*\left(\frac{n_V i_h}{n_V^* i_h^*} - \frac{i_V}{i_V^*}\right)\left(1 - \frac{i_V^*}{i_V}\right) - \varepsilon(n_V - n_V^*)^2 - i_h^* i_V^*\left(1 - \frac{i_h}{i_h^*}\right)\left(1 - \frac{i_V}{i_V^*}\right). \quad (34)

Introduction

Malaria is a mosquito-borne disease caused by the Plasmodium parasite, which is transmitted through the bites of an infected mosquito. In 2017, the World Health Organization report revealed estimations that 216 million malaria cases and 445 thousand deaths due to malaria were registered worldwide in 2016. However, most malaria cases and deaths were shared by the WHO Africa region, which accounted for 90% of cases and 91% of deaths. The most predominant malaria parasite in the WHO Africa region is Plasmodium falciparum, accounting for 99% of malaria cases in 2016 [1].

Malaria is an entirely preventable and treatable disease if the recommended interventions are properly applied. Individuals should take aggressive measures to reduce the malaria burden. Personal protection measures are the first line of defense against mosquito-borne diseases. Mosquito repellents are one method used for personal protection; these are substances applied to exposed skin to prevent human-mosquito contact. Insecticide Treated Bed Nets (ITNs) are used to protect individuals against malaria and to reduce the morbidity of childhood malaria (below five years of age) by 50% and global child mortality by 20-30% [2,3]. When used on a large scale, ITNs are supposed to represent efficient tools for malaria vector control, but there is the limitation of resistance to the insecticides with which the nets are saturated. The resistance of the most important African malaria vector Anopheles gambiae to pyrethroids is already widespread in several West African countries [4,5].

Nowadays, mathematical models have become important and popular tools to understand the transmission dynamics of the disease and give insight into reducing the impact of the malaria burden on society. This is because mathematical modeling can answer the following questions raised by the public health authorities and policy makers to make the correct decisions: (1) How severe will the epidemics be? (2) How long will it last? (3) How effective will an intervention be? (4) What are the effective measures to control and eliminate an endemic disease? The earliest malaria model study originated from Ross in 1911 [6], with later modification made by Macdonald [7]. Some further extensions of Ross-Macdonald models for malaria were described in [8][9][10][11][12][13]. Tumwiine et al.
[13] define the reproduction number, 0 , and show the existence and stability of the disease-free equilibrium and an endemic equilibrium.Recently, many works on host-vector interaction models have been done in [14][15][16][17][18][19][20][21][22][23][24].In [18,20,22,25], global stability of equilibria has been investigated using suitable Lyapunov functions; and their results show that the disease-free and endemic equilibrium points become globally asymptotically stable if 0 ≤ 1 and 0 > 1, respectively.Application of the optimal control theory becomes an important tool for investigating the efficiency of joint control intervention strategies to minimize the impact of malaria disease and cost-effectiveness of implementing them [19,21,23,24].Their studies suggest that the optimal control strategies can effectively reduce the malaria disease. Motivated by the above studies, we extend the model presented in [14] by taking into account a logistic model with population dependent birth rates for both human and vector populations that describes self-limiting growth of both the human host and mosquito vector populations.We consider logistic malaria model as no population can grow exponentially at all time, in general.A number of populations initially grow exponentially, but, due to competition and limited resources availability, their population size decline, after some time, to a stable size , called the maximum carrying capacity.The competition for limited resources (including food, territory, light, water, and oxygen) decreases the fertility or survival of individuals.Furthermore, this paper presents application of the model to study abrupt and periodic variations of malaria and sensitivity analysis applied to understand the important parameters in transmission and prevalence of the malaria disease.The purpose of this work is to investigate the global stability of both disease-free and endemic equilibrium points. Malaria Model We consider that the total human host population, ℎ , at a time * , is divided into three disjoint compartments: susceptible ℎ , infectious ℎ , and recovered ℎ .The total vector population, V , at a time * , is divided into two mutually exclusive subpopulations of individuals who are susceptible, V , and infectious, V .The susceptible human and vector populations are recruited at the rates ℎ and V , respectively, where The susceptible human and vector populations decrease due to natural death at a rate ℎ for humans and V for vectors, and those that move to the infected classes at a rate ℎ V and V ℎ , respectively.The infected human population grows as a result of new infection at a rate ℎ ℎ V and decline due to natural mortality, disease induced death, and recovery at a rate ℎ , ℎ , and ℎ , respectively.For details, see the schematic diagram of the model in Figure 1.The state variables and parameters for the model are described in "State Variables, Parameters, Descriptions, and Their Dimensions of Malaria Model" section. 
The model has made the following assumptions: both the total sizes of human and vector populations not being constant; all variables and parameters involving the model assumed to be nonnegative; all newborns susceptible to infection; mosquitoes not dying because of infection; no recovery compartment for infected mosquitoes; and the recovered human population developing permanent immunity.From the schematics diagram of transmission of malaria between human and mosquito (see Figure 1), we have the governed differential equations which describe the dynamics of malaria, with initial conditions At all times, ℎ = ℎ + ℎ + ℎ and V = V + V .Moreover, their differential equations are satisfying respectively.We may notice that the vector population equation is completely decoupled from the human equations which is physically reasonable.If we eliminate ℎ and V and add the total population equations, then we finally have 2.1.Basic Properties.Since the model system (1) involves human and mosquito populations, all its associated variables and parameters are nonnegative. Theorem 1. Solutions of the model system (1) with positive initial data will remain nonnegative for all time ≥ 0. Invariant Region.The malaria model (1) will be analyzed in biologically feasible region.Thus, the feasible solutions set for the model written by is positively invariant and then the model is biologically meaningful and mathematically well posed in the domain Ω. The proof is omitted for simplicity. Model Analysis To analyze the malaria model in system (4), we use the normalized quantities instead of the actual populations.Since ℎ and V may vary, these scales are not suitable for use in the scaling.However, the typical choice for logistic models is to use the sustainable populations ℎ and V for the scales. In the present case, we shall also consider varying ℎ and V .It is, therefore, convenient to write ℎ = 0 ℎ ℎ () and , where 0 ℎ and 0 V are typical sizes and where ℎ and V take care of the time variations.At the moment, we just assume that ℎ () = V () = 1.We shall scale the time * with the quantity 1/ ℎ by setting = ℎ * .The scaling is then The scaled equations then become subject to suitable initial conditions, The basic reproduction number, 0 , is the single most important parameter in epidemiological modeling.It measures the average number of the secondary infections caused by a single infective in an entirely susceptible population during its whole infectious period [26].To derive the basic reproduction number 0 of model ( 9), we use the next generation matrix approach described in [27][28][29].The infected compartments of system ( 9) are ℎ and V .Following [29], the new infection matrix and the transition matrix are given, respectively, by Hence, the basic reproduction number, 0 , is the dominant eigenvalue of the next generation matrix −1 and becomes From (12), it is noted that the reproduction number depends on the product of the number of humans that one mosquito infects through its infectious lifetime, 0Vℎ , and the number of mosquitoes that one human infects through its infectious lifetime, 0ℎV . Theorem 2. The disease-free equilibrium point, 0 , is locally asymptotically stable if all eigenvalues of the characteristic equation of the variational matrix lie below zero. Proof.At the equilibrium point, 0 , the variational matrix is given by The characteristic equation may be written as det[( 0 ) − ] = 0.It implies Clearly, we have Thus, this shows that the solution, 0 , is unstable since 1 and 4 lie above zero. 
Proof.The variational matrix at the equilibrium point, 1 , becomes Thus, the characteristic equation of the variational matrix is given by where The characteristic polynomial in (17) has roots 1 = − 1, 2 = − 1, 3 = −, with negative real parts since 0 < , < 1.By Routh-Hurwitz criterion [28], the other roots 4 and 5 have negative real parts if both 0 and 1 lie above zero. Theorem 4. The disease-free equilibrium point, 1 , is globally asymptotically stable in Ω if 0 ≤ 1; otherwise it is unstable. Proof.Consider the Lyapunov function where The time derivative of the function along the solutions of (9) becomes Thus V ≤ 0 if 0 ≤ 1 and the equality V = 0 holds if and only if ℎ = V = 0. Therefore, the largest compact invariant set in {( ℎ , V ) ∈ Ω : V = 0} is the singleton { 1 }, where 1 is the disease-free equilibrium.LaSalle's Invariant Principle [30] implies that 1 is globally asymptotically stable in Ω. Stability of Endemic Equilibrium. To find the endemic equilibrium 2 , we shall keep the assumptions about V * and V * from the singular perturbation equations (see the fourth and fifth equations of system ( 9)) and then focus on the first three equations of system (9).We consider the equilibrium solutions, using ℎ * as the basic quantity.From the third equation of ( 9), we have an expression for ℎ * : Addition of the second and third equations of system (9) gives Now, let us consider the first equation of ( 9) which connects ℎ * and ℎ * .That is, Equation ( 24) has only real and positive solutions for , then (24) has two solutions: for each value of ℎ * . One might wonder what happens when the inequality is violated and ℎ * > ((1−)/2) 2 .The solution of (24) will then be majorized by the equation which has only one unstable equilibrium point at = ((1 − )/2) and otherwise tends to 0. Moreover, it is clear that ℎ * needs to be larger or equal to ℎ * .Let us, therefore, consider + ℎ * and − ℎ * in the light of this restriction.The various situations are best described in terms of the graph shown in Figure 2. In the graph, the upper solution (upper red dot) is acceptable since + ℎ * > ℎ * , whereas the lower solution is outside the region and hence unacceptable.Further inspection of the graph shows the following.(3) For (1 − )/2 < < 1 − , one or both of the solutions are acceptable. The main conclusion is that there are acceptable solutions with respect to size for all ℎ * where 0 ≤ ℎ * ≤ (1 − ) 2 /4. Assume that ℎ * < (1 − ) 2 /4, where only + ℎ * is acceptable with respect to size.Then substituting expressions ( 22) and ( 25) into the equation in (23) and simplifying lead to the following quadratic equation: where From (27), it can easily be seen that > 0. Further, if 0 > 1, then < 0. Thus, the number of possible positive real roots of ( 27) can depend on the signs of .This can be analyzed using the Descartes rule of signs on the quadratic polynomial (27).The different possibilities for the roots of ( 27) are tabulated in Table 1. Thus, the malaria model has a unique endemic equilibrium if 0 > 1 and whenever cases 2 and 3 are satisfied.Hence, the endemic equilibrium then becomes and ℎ * is the unique positive root of ( 27). Proof.We shall propose the Lyapunov function where The Lyapunov function is continuous for all ℎ , ℎ , ℎ , V , V > 0. 
The time derivative of the function along the solutions of system (9) becomes From the equilibrium point of the malaria model ( 9), we have the following relations: By adding and subtracting ℎ * V * , ℎ * V * ( ℎ V * / ℎ * V ) and using ( 31) and ( 33) in (32), after intensive simplification, we have ) . ( Since the arithmetic mean is greater than or equal to the geometric mean, then we have Also, if Furthermore, if Hence, it follows from (34), ( 35), (36), and (38) that / ≤ 0 in Ω.Thus, the equality / = 0 holds only when , where 2 is the endemic equilibrium.From the LaSalle's invariant principle [30], the unique equilibrium 2 of system ( 9) is globally asymptomatically stable for 0 > 1. Sensitivity Analysis. To understand the relative importance of parameters which are responsible for transmission and prevalence of malaria disease, described in the model ( 9), we perform a sensitivity analysis.Sensitivity indices help us to measure the relative change in a state variable while a parameter changes.The normalized sensitivity index of a variable to a parameter is the ratio of the relative change in the variable to the relative change in the parameter [27].We calculate the sensitivity indices of 0 to assess which parameter has a great impact on 0 and hence the greatest effect in determining whether the disease dies out or persists with population. Let be the generic parameter of model (9).We, now only, derive the normalized sensitivity index of 0 to each of the parameters involved in 0 , defined by the ratio of the relative change in 0 to the relative change in the parameter ; that is, This index shows how sensitive 0 is to a change in the parameter .We notice that This indicates that Π 0 does not depend on any parameter value.Similarly, for the other parameters, we have We evaluate the above sensitivity indices, in Table 3, using the parameter values in Table 2.The basic reproduction number, 0 , is most sensitive to the contact rates of human to vector and vector to human, with 0 = 0.5000 and 0 ] = 0.5000 as it can be seen in Table 3.This shows that any increase (decrease) by 10% in or ] will increase (decrease) by 5% in 0 .The other parameters with highest sensitivity indices are , with 0 = −0.6266,and , with 0 = −0.3812.Increasing (decreasing) by 10% will decrease (increase) in 0 by 6.266% and increasing (decreasing) by 10% will decrease (increase) in 0 by 3.812% or vice versa.The rest of parameters, and , have less significant effect in 0 . In conclusion, the vector death rate, the human induced death rate, and the contact rates are important parameters in the model which have a significant impact on prevalence and transmission of the malaria disease; these parameters are able to control so that an intensive effort/work has to be done to eradicate the malaria disease from the population.Furthermore, one can understand from the sensitivity indices that vector control is the most effective control strategy. Application of the Model In this section, we present more simulations illustrating the abrupt and periodic variations of the model.We fix a reasonable parameter values of the model for numerical simulations.Steps down the vector population by 50%. 
We allow the mosquito sustainable level, V () = 0 V V (), noted in Section 3, to vary with respect to time.However, we first keep ℎ fixed in order to investigate the impact of fast variation in V on the human populations.Periodic variations in V are shown in these plots for different periods.Plots of the abrupt changes in the sustainable populations, ℎ = 0 ℎ ℎ () (see in Section 3) and V , are located in Figures 3-7, whereas plots of periodic variations are shown in Figures 8 and 9. Abrupt changes in the human and mosquito populations may, for example, be due to intensive spraying of the mosquitoes some massive emigration (refugee camps) or immigration for the humans.In Figures 3 and 4, steps down and up in vector sustainable population about 50% are plotted.These plots show that the transition occurs very fast and the system adjusts quickly to the new equilibrium.In Figures 5-7 increases about 50% and 100% and decrease about 50% in ℎ are shown.Step change in humans needs caution since it may lead to unphysical solutions.In Figure 5, after a transient, the solution converges to the new equilibrium.An increase in ℎ in Figure 6 may lead to an increase in the fraction of the human susceptible population for small time intervals, but not much dramatic change is shown.However, the slow change in human population after transient is shown in Figure 7.In the plots of periodic variation, one observes that the fast variation is quickly adapted by the vector population.This is because of the fast time scale for the mosquito population.Most of the figures show that the humans follow a slow variations relative to the vectors.Therefore, it is possible to say that fast variations in V do not imply large variation in human population.In general, for periods shorter than 1/ ℎ , the human population do not in practice show the variations in the vector populations, but for long periods they do, but apparently weaker. Conclusions In this work, we developed and analyzed a logistic malaria model to study the global stability of both disease-free and endemic equilibrium points.Mathematically, we formulated a five-dimensional system of deterministic ordinary differential equations and defined the domain where the model is epidemiologically feasible and mathematically well-posed.The model used the next generation matrix approach to obtain an explicit formula for a reproduction number, 0 , which is the expected number of secondary cases produced by a single infectious individual during its entire period of infectiousness in a fully susceptible populations. Qualitative analysis of the model determines stability analysis of the equilibrium points.Accordingly, we obtained two diseases-free equilibrium points 0 and 1 .The equilibrium point, 0 , is unstable and unphysical, while the equilibrium point, 1 , becomes both locally and globally stable whenever 0 < 1 and 0 ≤ 1, respectively.We also have shown that the endemic equilibrium point, 2 , is globally asymptotically stable if 0 > 1.Furthermore, sensitivity analysis of the model shows that the human induced death rate, the contact rates (human to mosquito or vice versa), and mosquito death rate have a significant effect on transmission and prevalence of the malaria disease.Moreover, numerical simulations are carried out in the application of the model to investigate how variations in the sustainable level of the vectors affect the human population.One can see from these simulations that fast variations in V do not lead to large variations in the human population. 
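The kind of simulation used in the "Application of the Model" section can be set up with a standard ODE solver. The sketch below is only a generic illustration: the equations and parameter values are assumptions chosen to mimic the structure of a scaled logistic host-vector model and are not the paper's system (9); only the periodic variation of the vector sustainable level, 1 + 0.3 sin(t/6), mirrors the one used for Figure 9.

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, beta_h=0.03, beta_v=0.09, r_v=0.13, mu_v=0.13, delta_h=0.004, rho_h=0.01):
    """Assumed scaled host-vector dynamics (illustrative only, not system (9))."""
    nh, ih, nv, iv = y                                  # scaled total/infected humans and vectors
    K_v = 1.0 + 0.3 * np.sin(t / 6.0)                   # periodically varying vector sustainable level
    dnh = nh * (1.0 - nh) - delta_h * ih                # logistic hosts minus disease-induced deaths
    dih = beta_h * (nh - ih) * iv - (rho_h + delta_h) * ih
    dnv = r_v * nv * (1.0 - nv / K_v)                   # logistic vectors with time-varying capacity
    div = beta_v * (nv - iv) * ih - mu_v * iv
    return [dnh, dih, dnv, div]

sol = solve_ivp(rhs, (0.0, 100.0), [0.9, 0.01, 0.8, 0.01], max_step=0.1)
print(sol.y[:, -1])   # final scaled state; plotting sol.t against sol.y reproduces figures of this type

Such a sketch also makes the qualitative observation above easy to check numerically: because the vector dynamics are much faster, rapid changes in the vector sustainable level are absorbed by the vector population and produce only weak variations in the human compartments.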
Figure 1: The compartmental model for malaria transmission.
Figure 2: Graph illustrating the behaviour of the solutions of the equation between the two equilibrium values.
Figure 4: Numerical simulation for the fractions of the human and vector populations versus time, with constant parameter values the same as in Figure 3 and a 50% step up in the vector population.
Figure 5: Numerical simulation for the fractions of the human and vector populations versus time, with constant parameter values the same as in Figure 3 and a 50% step up in the human population.
Figure 6: Numerical simulation for the fractions of the human and vector populations versus time, with constant parameter values the same as in Figure 3 and a 100% increase in the human population.
Figure 7: Numerical simulation for the fractions of the human and vector populations versus time, with constant parameter values the same as in Figure 3, one parameter set to 0.001, and a 50% step down in the human population in the form of a constant drop over a time interval of length 100.
Figure 9: Periodic variations in the vector sustainable level, 1 + 0.3 sin(t/6), with the human sustainable level equal to 1; the other parameters and initial values remain fixed as in Figure 8, apart from one parameter set to 0.5, over the time interval [0, 100].

Nomenclature:
S_h: number of susceptible humans at time t
I_h: number of infected humans at time t
R_h: number of recovered humans at time t
S_v: number of susceptible mosquitoes at time t
I_v: number of infected mosquitoes at time t
N_h: total human population at time t
N_v: total mosquito population at time t
Sustainable level of the human population at time t
Sustainable level of the mosquito population at time t
Per capita birth rate of the human population (dimension: time⁻¹)
Per capita natural death rate for humans (dimension: time⁻¹)
Per capita disease-induced death rate for humans (dimension: time⁻¹)
Human contact rate (dimension: mosquitoes⁻¹ × time⁻¹)
Per capita recovery rate for humans (dimension: time⁻¹)
Per capita birth rate of mosquitoes (dimension: time⁻¹)
Per capita natural death rate of mosquitoes (dimension: time⁻¹)
Mosquito contact rate (dimension: humans⁻¹ × time⁻¹)

Table 2: Some parameter values of the malaria model; all parameters are nondimensional.
Table 3: Sensitivity indices of R_0 to the parameters of the malaria model.
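As a companion to the sensitivity analysis above, the sketch below illustrates the normalized-sensitivity-index technique behind Table 3. The closed form of R_0 for system (9) is not shown in this excerpt, so the formula and parameter values here are hypothetical stand-ins for a host–vector model, not the paper's own; only the computation Υ_p^{R_0} = (∂R_0/∂p)(p/R_0), estimated by central differences, is the point.

```python
# Sketch of the normalized sensitivity index Y_p = (dR0/dp) * (p / R0), evaluated
# numerically for a *hypothetical* R0 of a host-vector model:
# R0 = sqrt(beta_h * beta_v / (mu_v * (gamma_h + mu_h + delta_h))).
# The stand-in formula and parameter values below are illustrative only.
import math

params = {"beta_h": 0.30, "beta_v": 0.25, "gamma_h": 0.10, "mu_h": 0.02,
          "delta_h": 0.05, "mu_v": 0.50}

def R0(p):
    return math.sqrt(p["beta_h"] * p["beta_v"]
                     / (p["mu_v"] * (p["gamma_h"] + p["mu_h"] + p["delta_h"])))

def sensitivity_index(name, p, h=1e-6):
    """Central-difference estimate of (dR0/dp) * (p / R0) for parameter `name`."""
    up, down = dict(p), dict(p)
    up[name] += h
    down[name] -= h
    dR0_dp = (R0(up) - R0(down)) / (2 * h)
    return dR0_dp * p[name] / R0(p)

for name in params:
    print(f"Y_{name} = {sensitivity_index(name, params):+.4f}")
# For this stand-in R0, the indices for beta_h and beta_v come out as +0.5 and the
# index for mu_v as -0.5, mirroring the +0.5 pattern for the contact rates in Table 3.
```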
MicroRNA-9 exerts antitumor effects on hepatocellular carcinoma progression by targeting HMGA2

Accumulating evidence has demonstrated that the aberrant expression of microRNAs (miRs or miRNAs) may contribute to the initiation and progression of various types of human cancer and may also constitute biomarkers for cancer diagnosis and therapy. However, the specific function of miR-9 in hepatocellular carcinoma (HCC) remains unclear, and the mechanisms that underlie HCC are incompletely understood. Here, we report that miR-9 expression was significantly decreased in clinical tumor tissue samples, as well as in a cohort of HCC cell lines. In addition, it was demonstrated that overexpression of miR-9 suppressed the proliferative and migratory capacity of HCC cells and impaired cell cycle progression. Furthermore, high mobility group AT-hook 2 (HMGA2) was verified as a downstream target gene of miR-9 using a luciferase reporter assay. Quantitative RT-PCR and western blotting implicated HMGA2 in the miR-9-mediated reduction of HCC cell growth. In vivo, transfection with miR-9 mimics down-regulated the expression of HMGA2, thus leading to a dramatic reduction in tumor growth in a mouse xenograft model. These results suggest that miR-9 may exert critical antitumor effects on HCC by directly targeting HMGA2, and the miR-9/HMGA2 signaling pathway may be of use for the diagnosis and prognosis of patients with HCC.

Hepatocellular carcinoma (HCC) is the sixth most prevalent cancer worldwide and is considered one of the main causes of cancer-related deaths, particularly in China [1,2]. Chronic viral hepatitis B and C infections, alcoholic or nonalcoholic steatohepatitis, aflatoxin intoxication and interactions among these factors have been verified to be involved in the initiation and progression of HCC [3,4]. Notably, only 30%-40% of patients can undergo curative resection, and the prognosis of patients with HCC remains poor because of high rates of cancer metastasis and disease recurrence.
Although recent advances in HCC therapy and in functional genomics have led to the development of targeted therapies [5,6], the pathogenesis and potential molecular targets for prognosis prediction in HCC have yet to be fully characterized. High mobility group AT-hook 2 (HMGA2) expression was increased in many human cancers, such as colorectal, breast, ovarian and gastric cancers [7][8][9][10]. Because HMGA2 immunopositivity is associated with advanced stage disease and tumor aggressiveness, HMGA2 has emerged as a tumor biomarker [11]. Some studies have shown that HMGA2 regulated cell cycle progression, differentiation and cellular senescence, while enhancing or suppressing the expression of several genes [12]. A wide range of studies have demonstrated that microRNAs (miRs or miRNAs) exert an inhibitory effect on tumor growth and may improve the sensitivity of chemotherapy drugs [13,14]. miRNAs are small noncoding RNAs that function as important regulators of Abbreviations GAPDH, glyceraldehyde-3 phosphate dehydrogenase; HCC, hepatocellular carcinoma; HMGA2, high mobility group AT-hook 2; miRNA, microRNA; NC, negative control; qRT-PCR, quantitative RT-PCR; SD, standard deviation. human gene expression [15,16], with >1500 human miRNAs documented in the miRBase database to date that are known to affect a number of diverse cellular processes in embryonic development and disease states [17][18][19]. In addition, a growing number of studies have indicated that miRNAs are strongly associated with multiple cellular functions, including proliferation, migration, apoptosis, cell cycle progression and angiogenesis, which contribute to tumorigenesis [20,21]. miR-NAs, such as miR-200, miR-21 and miR-34b/c, have recently been demonstrated to act as potent regulators during the development and progression of HCC [22][23][24]. miR-9 is known to play an important role in tumorigenesis and cancer progression. In brain tumors, miR-9 is selectively expressed in neural tissues under normal conditions and mediates their development. The expression of miR-9 has been found to be elevated in the neural tissues of brain tumors when compared with tumors of other histological subtypes, and exhibits a tissue-specific expression pattern [25,26]. In addition, Talin-1, which serves a significant role in regulating the transmutation of carcinomas, has been demonstrated to be down-regulated by miR-9 in ovarian epithelial cancer [27]. However, the role of miR-9 in HCC remains largely unknown, and its potential downstream targets have not yet been fully defined. The aim of the present study was to determine the expression of miR-9 in clinical HCC tissue samples and in various HCC cell lines, to investigate the function of miR-9 in HCC in vitro and in vivo, and to identify the underlying downstream targets implicated in HCC. Human tissue samples A total of 168 HCC tissue samples and paired normal adjacent liver tissues were separately obtained from patients undergoing surgery from February 2017 to January 2018. All patients provided permission for the collection of their tissue samples. The experiments were approved by the Ethics Committee of the Sichuan Academy of Medical Sciences and Sichuan Provincial People's Hospital (Sichuan, China), and written informed consent was obtained from each enrolled patient. This study also conformed to the standards set by the Declaration of Helsinki. The tissue specimens were stored at À80°C for subsequent analysis. 
Cell culture HCC cell lines (Huh-7, MHCC97H, LM3 and Hep3B) and normal human hepatocytes (THLE-3) were purchased from the Shanghai Institute of Biochemistry and Cell Biology, Chinese Academy of Sciences (Shanghai, China). The cells were maintained in Dulbecco's modified Eagle's medium supplemented with 10% FBS (Gibco; Thermo Fisher Scientific, Inc., Waltham, MA, USA) in a humidified waterjacket incubator at 37°C with 5% CO 2 . Dual-luciferase assays In wild-type (WT) pMIR-HMGA2-wt (Genechem, Shanghai, China), with human genomic DNA as a template, PCR was used to amplify the 3 0 UTR fragments containing the binding sites of miR-9. Then the 3 0 UTR was cloned into pmirGLO report carrier, which was transformed into the Escherichia coli DH5 alpha; random selection of positive PCR was used to identify the recombinant plasmid. In mutant pMIR-HMGA2-mut vectors (Mut; Genechem), the mutant 3 0 UTR fragment was cloned into the pmirGLO report carrier. Liver cancer cells were plated in 24-well plates and cotransfected with either WT pMIR-HMGA2-wt or mutant pMIR-HMGA2-mut vectors, together with miR-9 mimics (Genechem) or negative control mimics (miR-NC; Genechem), and the pRL-TK vector containing the Renilla luciferase gene (Promega Corporation, Madison, WI, USA) using Lipofectamine 2000 (catalog no. 11668019; Thermo Fisher Scientific, Inc.). At 48 h after transfection, reporter activity was analyzed using the Dual-Luciferase Reporter assay system (Promega Corporation) according to the manufacturer's instructions. Western blot analysis Transfected liver cancer cells were collected, and total proteins of HCC tissues, the adjacent normal tissues and cultured cells were separated according to our protocols. In brief, radioimmunoprecipitation assay buffer was used to crack the collected cells; after centrifugation (13 400 g, 30 min, 4°C), the supernatant was collected into a 1.5-mL EP tube that was stored at À80°C in a refrigerator. Protein concentration was detected according to bicinchoninic acid kit instructions. Western blot analysis was performed as previously described [29]. In brief, immunodetection of HMGA2 was achieved using the rabbit polyclonal anti-HMGA2 serum (dilution, 1 : 1000; Sigma, Merck, KGaA, Darmstadt, Germany). GAPDH (catalog no. G5262; Sigma, Los Angeles, CA, USA) was used as an internal control. Horseradish peroxidase antibody (1 : 5000, ab6728; Abcam, Cambridge, MA, USA) was incubated and added with enhanced chemiluminescence chromogenic liquid for color imaging. Cell proliferation, invasion and migration assays Transfected liver cancer cells were seeded in 96-well plates and assayed for proliferation at 48 h using a Cell Counting Kit-8 kit (Dojindo Molecular Technologies, Inc., Kumamoto, Japan) according to the manufacturer's protocol. Cell proliferation was analyzed by measuring the absorbance at 450 nm using a microplate reader. Liver cancer cells (5 9 10 5 ) were seeded into six-well plates and cultured under standard conditions. When the cell density reached confluence, a 'wound' was generated by scraping the cell monolayer with a 200-lL pipette tip. Cell migration was determined by measuring the movement of cells into the scraped area. The process of wound closure was monitored and photographed at 12 h after wound generation using a microscope. For the invasion experiment, the low growth factor matrix was mixed with serum-free medium (1 : 4) and 60 lL mixed matrix was spread on the Transwell chamber. 
Then, the small chamber was placed in the 24well plate, which was placed into the constant temperature incubator for 4-6 h. The liver cancer cells were digested after transfection for 24 h, a total of 8 9 10 4 cells were placed in the Transwell chamber, in which the lower chamber was the normal culture medium containing 10% FBS, and after that, the Transwell chamber was placed in the constant temperature incubator for 48 h. The small chamber was taken out, the cells were fixed with 4% paraformaldehyde for 10 min, and crystal violet was used to stain cells for 10 min. The positive cells were counted and photos were taken. For the migration experiment, the Transwell transfer film was flattened out on the 24-well plate containing the Dulbecco's modified Eagle's medium with 10% serum. Resuspended cells were added and cultured in the well for 24 h. The cells passing through the Transwell membrane were then immobilized in 4% paraformaldehyde and photographed. Flow cytometry Transfected liver cancer cells were suspended, collected and then washed twice with saline before they were fixed with 70% ethyl alcohol. Subsequently, 5 mL Annexin V/FITC and 10 mL propidium iodide [from the Annexin V/PI apoptosis kit; Hangzhou MultiSciences (Lianke) Biotech, Co., Ltd., Hangzhou, China] were added to each sample for staining at 37°C in the dark. The liver cancer cell cycle was detected using the BD FACSCalibur (BD Biosciences, Franklin Lakes, NJ, USA), and data were analyzed using FLOWJO software (Tree Star, Inc. Ashland, OR, USA). Mouse xenograft model All procedures were performed in accordance with national (D.L.n.26, March 4, 2014) and international laws and policies (directive 2010/63/EU), and were approved by the Animal Experimental Ethics Committee, Sichuan Academy of Medical Sciences and Sichuan Provincial People's Hospital. HCC-bearing nude mice (male; aged 4-5 weeks; weight, 15-17 g) were raised and maintained using the same methods described previously [30,31]. To generate the mouse model of HCC, we first transfected liver cancer cells with 150 nM miR-9 mimics or miR-NC using Nucleofector II (Amaxa Biosystems; Lonza Group, Ltd., Basel, Switzerland). After transfection, the cells were allowed to recover in fresh medium by incubating at 37°C for 24 h. Subsequently, the cells were collected and washed with ice-cold PBS three times before they were resuspended in PBS at a density of 5 9 10 6 cellsÁmL À1 . Nude mice were divided into two groups (n = 8) and subcutaneously injected with 5 9 10 5 cells (100 lL) into the left flank. Tumor growth was measured every 3 days, and tumor volume (V) was monitored by measuring the length (L) and width (W) with calipers, and calculated using the following formula: V = (L 9 W 2 ) 9 0.5. Euthanasia was carried out by cervical dislocation after rendering mice unconscious with CO 2 . The tumors were excised and weighed on day 22, before being preserved in 4% paraformaldehyde at 4°C for immunohistochemical analysis. All of the procedures were performed in accordance with national (D.L.n.26, March 4, 2014) and international laws and policies (directive 2010/ 63/EU); they also were approved by the Animal Experimental Ethics Committee, Sichuan Academy of Medical Sciences and Sichuan Provincial People's Hospital. Statistical analysis All quantitative data for statistical analyses were derived from at least three independent experiments. Data are presented as the mean AE standard deviation (SD). 
Student's t-test was used to compare the mean values between the two groups; ANOVA test (Bonferroni as the post hoc test) was used to compare the mean among three or more groups. A P-value <0.05 was considered to indicate a statistically significant difference. Results Expression of miR-9 and HMGA2 in HCC cell lines and tissue samples To investigate the role of miR-9 in HCC, we employed quantitative RT-PCR (qRT-PCR) to detect miR-9 expression in HCC cell lines (Huh-7, MHCC97H, LM3 and Hep3B) and the normal hepatic cell line, THLE-3. As shown in Fig. 1A, miR-9 exhibited reduced expression in the four HCC cell lines when compared with normal hepatocytes. In addition, miR-9 expression was significantly down-regulated in HCC biopsy tissues when compared with adjacent normal tissues (Fig. 1B). These findings indicate that miR-9 may be involved in the development and progression of HCC. In contrast with miR-9 expression, HMGA2, a potential target of miR-9, was highly expressed in HCC cell lines ( Fig. 2A) and in clinical tumor samples (Fig. 2B) when compared with normal controls. In line with these observations, the quantitative analysis of HMGA2 by immunohistochemical staining demonstrated that HMGA2 was highly expressed in clinical tumor samples when compared with paired adjacent normal tissues (Fig. 2D). Meanwhile, the quantitative analysis of HMGA2 protein by western blot showed the same tendency (Fig. 2C,E). These results indicate that miR-9 expression is inversely correlated with HMGA2 expression. miR-9 regulates HMGA2 expression via targeting the 3 0 UTR of HMGA2 To investigate the potential targets of miR-9, using accessible databases, we screened the 3 0 UTR of HMGA2 for any predicted miR-9-binding sequences. Interestingly, a conserved binding site for miR-9 was identified in the 3 0 UTR of HMGA2 (Fig. 3A,B). It has been reported that high HMGA expression levels are strongly associated with the progression, metastasis and poor prognosis of specific human cancers and present a robust molecular biomarker for diagnosis [33][34][35]. To determine whether miR-9 binds to the 3 0 UTR of HMGA2, we mutated the miR-9 binding site in the pMIR-HMGA2-wt luciferase reporter to generate a mutant pMIR-HMGA2-mut reporter (Fig. 3C). The pMIR-HMGA2-wt or pMIR-HMGA2-mut reporter was cotransfected into liver cancer cells together with miR-9 or miR-NC mimics, and luciferase activity was measured. Transfection with miR-9 mimics demonstrated that the luciferase activity of pMIR-HMGA2wt was significantly decreased when compared with the control. However, transfection with miR-9 mimics did not affect the luciferase activity of the pMIR-HMGA2-mut reporter gene, suggesting that miR-9 can target the 3 0 UTR of HMGA2 (Fig. 3D). In addition, HMGA2 expression in cells transfected with miR-9 mimics was determined to confirm whether miR-9 can successfully regulate HMGA2 expression. As expected, miR-9 significantly reduced HMGA2 expression at the mRNA and protein levels (Fig. 3E,F). HMGA2 is a target of let7 family miRNAs [36]. We detected whether miR-9 regulated HMGA2 expression via effecting let7, and qRT-PCR showed that overexpression of miR-9 did not regulate the level of let7 (Fig. 3G). These results reveal that miR-9 may inhibit the expression of HMGA2 in HCC cells via directly binding to the 3 0 UTR of HMGA2. 
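For readers who want to reproduce the group comparisons reported in this and the following subsections, a minimal sketch of the workflow described under "Statistical analysis" (Student's t-test for two groups; one-way ANOVA with Bonferroni-corrected pairwise tests for three or more groups) is shown below. The expression values are made-up placeholders, not data from this study.

```python
# Minimal sketch of the comparisons described under "Statistical analysis".
# The group values below are hypothetical relative expression levels, used only
# to show the two-group t-test and the ANOVA + Bonferroni pairwise procedure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
miR_NC   = rng.normal(1.00, 0.10, 6)   # placeholder values, n = 6 per group
miR_9    = rng.normal(0.45, 0.10, 6)
miR_9_HM = rng.normal(0.90, 0.10, 6)   # miR-9 mimics + HMGA2 "rescue" group

# Two-group comparison
t, p = stats.ttest_ind(miR_NC, miR_9)
print(f"t-test miR-NC vs miR-9: p = {p:.4g}")

# Three-group comparison: one-way ANOVA, then Bonferroni-adjusted pairwise t-tests
groups = {"miR-NC": miR_NC, "miR-9": miR_9, "miR-9+HMGA2": miR_9_HM}
F, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: p = {p_anova:.4g}")
names = list(groups)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
for a, b in pairs:
    _, p_pair = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: Bonferroni-adjusted p = {min(1.0, p_pair * len(pairs)):.4g}")
```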
Effects of miR-9 on the proliferation, invasion, migration and cell cycle progression of HCC cells Considering the observed reduction in miR-9 expression in different types of HCC cell lines and in clinical HCC tumor tissues, the functional relevance of miR-9 in the hepatic cancer phenotype was investigated further. To assess the impact of miR-9 on cell growth, we transfected liver cancer cells with miR-NC, miR-9 mimics, HMGA2 or miR-9 mimics plus HMGA2. Using the Cell Counting Kit-8 assay, we found that cells transfected with miR-9 mimics exhibited reduced cell proliferation when compared with the control, which was rescued by transfection of miR-9 mimics and HMGA2 (Fig. 4A). In addition, flow cytometry analysis showed that cell cycle progression from the G0/G1 to the S phase was significantly suppressed in cells transfected with miR-9 mimics. Conversely, treatment with miR-9 mimics and HMGA2 induced Sphase arrest in HCC cells (Fig. 4B). Wound closure assays were performed to assess cell migration, and the results indicated that transfection with miR-9 mimics significantly attenuated the migration of liver cancer cells. In contrast, liver cancer cells transfected with miR-9 mimics plus HMGA2 exhibited an increase in migration capacity (Fig. 4C). Transwell assays showed that overexpression of miR-9 alone could inhibit the migration and invasion of liver cancer cells. When miR-9 and HMGA2 were overexpressed at the same time, the inhibition effect of miR-9 on the migration and invasion of liver cancer cells disappeared, and the migration and invasion behavior of the cells was aggravated (Fig. 4D,E). Meanwhile, liver cancer cells transfected with NC-inhibitor, miR-9 inhibitor, HMGA2 siRNA or miR-9 inhibitor plus HMGA2 siRNA. Liver cancer cells transfected with the miR-9 inhibitor exhibited an increase in proliferation, S-phase cell fraction, cell migration and invasion capacity of HCC cells, whereas down-regulation of miR-9 expression and interference with HMGA2 exhibited an inhibition in proliferation, S-phase cell fraction and migration capacity of HCC cells (Fig. 5A-E). These results suggest that miR-9 may play a crucial role in HCC cells by reducing their proliferative and migratory capacity and by impairing cell cycle progression. In contrast, ectopic expression of HMGA2 may reverse the antitumor effects of miR-9. miR-9 inhibits tumor growth in mice Liver cancer cells were used to generate a mouse HCC xenograft tumor model for further in vivo experiments. Cells transfected with miR-9 or miR-NC mimics were injected subcutaneously into the left flank of nude mice, and tumor growth was monitored for 21 days. At this point, the tumors were excised, weighed and photographed. Treatment with miR-9 mimics significantly reduced the tumor volume and weight when compared with the controls (Fig. 6A,B,H). To investigate the mechanisms by which miR-9 suppressed liver cancer tumor cell growth, we detected the expression of miR-9 and HMGA2 in mouse tumors by qRT-PCR. The results demonstrated that miR-9 was highly expressed in tumor tissues transfected with miR-9 mimics (Fig. 6C), whereas the mRNA and protein levels of HMGA2 were also reduced by qRT-PCR, immunohistochemistry and western blot (Fig. 6D,F, G). 
In addition, immunohistochemical staining analysis of the Ki-67 marker of cell proliferation in mouse tumor tissue samples revealed that transfection with miR-9 mimics decreased the expression of Ki-67, whereas transfection with the miR-9 inhibitor promoted the expression of Ki-67, compared with controls (Fig. 6E). Taken together, these observations suggest that miR-9 may suppress tumor growth by inhibiting HMGA2 expression. Discussion miRNAs are considered critical regulators of numerous cellular processes, including cell proliferation, migration, apoptosis, differentiation, cell cycle progression and carcinogenesis [37][38][39]. In addition, accumulating evidence suggests that miRNAs are closely associated with the initiation and progression of malignant tumors [40,41]. Although the present study is not the first to identify the functional role of miR-9 in HCC, a number of novel findings have been presented. To the best of our knowledge, this is the first study to demonstrate that miR-9 expression is closely correlated with hepatic tumor differentiation. In addition, miR-9 overexpression exerted an inhibitory effect on the proliferation, migration and cell cycle progression of HCC cells. Furthermore, HMGA2 was identified as a potential target of miR-9, and the miR-9/HMGA2 signaling pathway may be involved in HCC. It could be hypothesized that miR-9 may display antitumor activity by directly binding to HMGA2, which may contribute to its use as a diagnostic and/or prognostic marker for patients with HCC. Currently, a number of specific miRNAs have been verified to be aberrantly expressed in liver cancers characterized by poor prognosis. Coulouarn et al. [42] reported that miR-122 is a biomarker of hepatocytespecific differentiation and plays a causal role in regulating cell migration and invasion in HCC. Additional studies have demonstrated that miRNAs, including miR-21, miR-29a/b, miR-26a, miR-101 and miR-375, are associated with the pathogenesis of HCC by activating intricate signaling cascades [23,[43][44][45][46]. It has been reported that miR-9 expression was decreased in a variety of cancers and was associated with tumor invasion and progression [47][48][49]; however, the role of miR-9 in HCC pathobiology is not well understood. In a recent study, our research group demonstrated a link between miR-9 expression and the extent of tissue differentiation of HCC-derived cells. By analyzing the expression of miR-9 in primary HCC and adjacent tissues, as well as in a cohort of HCC-derived cells, miR-9 expression was observed to be significantly downregulated in primary hepatic tumor tissues and in different strains of HCC cells, suggesting that miR-9 may be strongly associated with hepatocyte differentiation and affect hepatic metabolic homeostasis. Therefore, the role of miR-9 in HCC was investigated further in the present study. Several previous studies have demonstrated that miR-9 is selectively expressed in neural tissues [26], and its expression levels are enriched in brain tumors [27] and in clinical breast cancer samples [50]. In addition, miR-9 has been found to play an oncogenic role in HCC by inducing cell growth and invasiveness [51]. To improve our understanding of the function of miR-9 in HCC, we performed a series of experiments investigating the impact of miR-9 on the hepatic cancer phenotype in the present study. The effect of miR-9 on HCC cell growth was analyzed in vitro and in vivo. 
The results demonstrated that overexpression of miR-9 significantly attenuated cell proliferation and migration, and impaired S-phase arrest in HCC cells. However, down-regulation of mir-9 expression had the opposite results. Consistent with the in vitro results, miR-9 exerted antitumor effects by reducing the tumor weight and size in an in vivo mouse xenograft model. Taken together, these results suggest that miR-9 may act as a tumor suppressor by regulating cell growth, migration and cell cycle progression in HCC, which is consistent with the results of a recent study [52]. To further understand the mechanisms underlying the observed ability of miR-9 to regulate HCC cell proliferation, migration and cell cycle progression, in the present study we used bioinformatic analysis to identify HMGA2 as a potential target gene of miR-9. HMGA2 is an abundant, nonhistone chromatin architectural factor that has been implicated in multiple biological processes, including cell growth and differentiation [53]. In addition, HMGA2 is highly expressed in the developing embryo and displays down-regulated expression during differentiation [54,55]. Notably, previous studies have provided evidence showing that overexpression of HMGA at the protein level in all tissues of transgenic mice was associated with the development of lymphomas and other tumor types [56,57]. Furthermore, HMGA was found to be strongly associated with metastasis and poor prognosis in some human cancers [58,59]. However, the mechanisms underlying the role of HMGA2 in HCC remain unclear. Notably, an evolutionarily conserved binding site for miR-9 in the 3 0 UTR of HMGA2 was identified using a luciferase reporter gene experiment in the present study. In addition, transfection with miR-9 mimics strongly down-regulated HMGA2 expression both at the mRNA and the protein levels in HCC cells, indicating that miR-9 may play a key role in HCC, particularly when combined with the reduced expression of HMGA2. A close association between increased HMGA expression and the progression, metastasis and poor prognosis of multiple human cancers has been observed, and HMGA may also serve as a molecular biomarker for the diagnosis of certain cancers [33][34][35]. Watanabe et al. [60] demonstrated that HMGA2 is involved in maintaining epithelial-to-mesenchymal transition in pancreatic cancer, which may represent a promising therapeutic strategy. In addition, the acquisition of antitumor properties was observed when single knockdown of HMGA2 induced long-term growth inhibition in pancreatic cancer. Furthermore, HMGA2 was found to affect the cyclin A gene and promote cell growth [61]. Thus, HMGA2 appears to be a determinant of cell invasiveness and metastasis. In the present study, HMGA2 was found to be highly expressed in a cohort of HCC cell lines and in primary hepatic tumor tissues using qRT-PCR analysis, suggesting that HMGA2 may contribute to HCC progression. A cotransfection HCC cell model was established to further understand the relationship between miR-9 and HMGA2 and its role in the hepatic cancer phenotype. The results indicated that overexpression of miR-9 exerted an inhibitory effect on the proliferation, migration and cell cycle progression of HCC cells, whereas HMGA2 significantly reversed these effects when transiently cotransfected with miR-9 mimics. 
In addition, we also found that down-regulation of miR-9 expression promoted malignant biological properties of HCC cells, whereas miR-9 inhibitor plus HMGA2 siRNA cotransfection reversed this promoting effect. Furthermore, an in vivo model was established to further investigate the underlying association between miR-9 and HMGA2. Treatment with miR-9 mimics inhibited tumor growth, potentially via down-regulating HMGA2 expression. Taken together, these findings confirm that HMGA2 may be a functional target of miR-9, and activation of the miR-9/HMGA signaling pathway may be a potential mechanism that may be exploited for the prevention and/or treatment of HCC. Conclusions In conclusion, the results of the present study provide a rationale for the potential use of miR-9 as a novel diagnostic and therapeutic marker that is involved in the development and progression of HCC. miR-9 may play a crucial role in the proliferation, migration and cell cycle progression of HCC cells in vitro and in HCC tumor growth in vivo via targeting HMGA2. Ectopic HMGA2 expression may reverse the antitumor effects of miR-9, suggesting that the miR-9/ MHGA2 signaling pathway may present a promising therapeutic target for patients with HCC.
Meniscal Extrusion in the Knee: Should only 3 mm Extrusion be Considered Significant? An Assessment by MRI and Arthroscopy

Aim. The aim of this study was to assess whether only meniscal extrusion of more than 3 mm should be considered significant, or whether even lesser degrees of extrusion could also be significant. We also aimed to determine the morphology of tears that are most likely to be associated with significant extrusion.

Study design and material. The study was done retrospectively on a group of 202 patients (157 males and 45 females) who had been seen in our hospital between 2007 and 2011 with meniscal tears (in one knee only) diagnosed by MRI and confirmed on arthroscopy. Extrusion of 3 mm or more (usually considered significant) was seen in 102 cases and less than 3 mm in 100. Extrusion was measured on the coronal MR images rather than on sagittal images because of ease and reproducibility. The tears were confirmed by arthroscopy and correlated with the extent of extrusion on MRI.

Results: Out of the total of 202 cases, 102 cases (50.5%) had extrusion of 3 mm or more on MRI. Of these, medial meniscal posterior horn tears accounted for 63 cases (64.26%), 21 cases were medial meniscal body tears (21.42%), five were medial meniscal root tears (5.1%), nine were lateral meniscal body tears (9.18%) and four were lateral meniscal posterior horn tears (4.08%). Forty-four cases had extrusion of 3-4 mm, 26 had extrusion of 4-5 mm, 17 cases had extrusion of 5-6 mm, ten had extrusion of 6-7 mm and five had extrusion of 7 mm or more. One hundred cases fell in the <3 mm extrusion category, of which 80 (39.6%) were in the 2-3 mm extrusion group and 20 (9.9%) in the 1-2 mm extrusion group. They comprised 61 cases of medial meniscal posterior horn tears, 23 cases of medial meniscal body tears, six medial meniscal root tears, eight lateral meniscal body tears and two lateral meniscal posterior horn tears. The highest proportion of meniscal tears was seen in the 2-3 mm category, comprising nearly 40% of the entire study group. The majority of tears were medial meniscal posterior horn tears.

Conclusion: Menisci that extruded 2-3 mm from the tibial margin formed a major proportion of menisci treated for tears by repair or meniscectomy. We should consider extrusion of more than 2 mm as significant. Most tears had extrusion of 2-4 mm.
INTRODUCTION

Meniscal extrusion occurs due to substantial disruption of the main circumferential collagen bundle fibers in the meniscus. Tears resulting in extrusion include meniscal horn/root tears, radial tears of more than 50% of meniscal width and large complex tears (more than one cleavage plane through the meniscus) 1,2. These result in loss of the ability to resist hoop strain and biomechanically overload the joint articular surface. Significant meniscal root pathology may cause functional incompetence of the meniscus, with consequent early-onset cartilage degeneration and osteoarthritis. This study emphasizes the association of meniscal tear with the amount of extrusion of the meniscus beyond the tibial margin that can be considered significant. We have also attempted to study the correlation of meniscal tear morphology with extrusion and to verify the MRI findings with arthroscopic examination. Our hypothesis is that extrusion beyond 2 mm of the joint margin should be considered significant, more so in medial meniscal posterior horn tears.

MATERIALS AND METHODS

This retrospective study was carried out between 2008 and 2012 on 202 patients who underwent MRI and subsequent arthroscopy at our hospital for meniscal tears and associated pathology. There were 157 males and 45 females, with an age range of 40-81 years (average 58 years). Only one knee was studied per patient. MRI examinations were performed with a 1.5-T scanner using a quadrature extremity coil. MR imaging incorporated the following sequences: sagittal spin-echo intermediate-weighted, sagittal and axial fast spin-echo T2-weighted with fat saturation and coronal fast spin-echo intermediate-weighted. For measurement of extrusion, only the coronal image at the midpoint of the medial femoral condyle was assessed, and extrusion of the meniscus from the margin of the tibial plateau was measured in millimeters using a PACS (Picture Archiving and Communication System) workstation. The measurement was performed by first drawing a vertical line intersecting the peripheral margin of the tibial plateau at the point of transition from horizontal to vertical; the length of another line extending from the first line to the outer margin of the meniscus was defined as the measurement of meniscal extrusion. All cases underwent arthroscopy performed by a single experienced arthroscopist within 4-6 weeks of the MRI.

RESULTS

Forty-four cases had extrusion of 3-4 mm, 26 had extrusion of 4-5 mm, 17 cases had extrusion of 5-6 mm, 10 cases had extrusion of 6-7 mm and 5 cases had extrusion of 7 mm or more.
The remaining 100 cases fell in the < 3mm extrusion category and in this group, mean was 2.4104 mm with 95% confidence interval for mean: 1.123 through 3.698, standard deviation was 0.396 with high of 2.980 and low of 1.500. Median=2.500 and average absolute deviation from median = 0.315. Out of these 100 cases, 61 were medial meniscal posterior horn tears, 23 cases of medial meniscal body tears, six medial meniscal root tears, eight lateral meniscal body tears and two lateral meniscal posterior horn tears. Eighty cases had 2-3 mm extrusion and 20 had 1-2 mm extrusion. In the 2-3 mm extrusion group, 47 cases were medial meniscal posterior horn tears, 18 were medial meniscal body tears, 13 -lateral meniscal tears and two medial meniscal root tears. DISCUSSION Meniscal extrusion is generally defined as significant (≥3 mm) medial displacement of the meniscus with respect to the central margin of the tibial plateau 2 . Detection of meniscal extrusion is important not only because it is associated with underlying tear but also because meniscal extrusion itself is thought to be related to development of osteoarthritis 3,4 . Medial meniscal extrusion is a significant finding on MRI, showing the inability of the meniscus to protect the underlying articular cartilage. In many studies, it has been shown to precede cartilage loss and onset of bony degenerative joint disease within the knee. Meniscal extrusion is the result of any substantial disruption of the main circumferential collagen bundles. Tears resulting in extrusion include meniscal root tear, large radial tear (more than 50% of meniscal width) and large complex tears (more than one cleavage plane through the meniscus) 1,2 . These result in loss of ability to resist hoop strain (circumferential stress). During load transmission, the compression forces working on the meniscus result in hoop strain that stretches the collagen bundles in a radial direction between the anterior and posterior attachments. The integrity and orientation of the meniscal collagen fibers, the attachments of anterior and posterior horns, and the presence of intermeniscal connections are some factors that influence resistance to hoop strain 5 . Miller 6 showed that extrusion greater than 25% of meniscal width was not significantly associated with meniscal tear. But he did not account for the high incidence of meniscal degeneration, which can disrupt meniscal mechanics. According to Kenny 4 , post meniscectomy, as a result of loss of meniscal function, the following are signs (Fairbank's signs) apparent on standard AP and lateral radiographs of the knee: an antero-posterior osseous ridge projecting downward from the femoral condyle, generalized flattening of the marginal half of the femoral articular surface, and narrowing of the joint space which were hallmarks of knees with radial displacement (i.e., extrusion) of the medial meniscus and loss of meniscal function. Complete or subtotal meniscectomy also induces rapidly progressive osteoarthritis. Magee 7 showed a high prevalence of meniscal root tears in patients with meniscal extrusion on MR exam. Meniscal root tears are uncommon in patients without meniscal extrusion on MR exam. There may be a subset of patients in which the meniscal root is stretched rather than torn. Medial meniscal extrusion in patients more than 50 years of age may be associated with a meniscal "stretch" injury due to degeneration of the meniscus without a meniscal tear detectable on arthroscopy. 
These menisci may have increased laxity due to compromised meniscal collagen fibers. This may predispose the patient to premature osteoarthritis. Meniscal extrusion is caused by osteoarthritis in the elderly 8 and by trauma in young individuals 9,10 . Costa 1 reported that the degree of extrusion was significantly related to meniscal degeneration and that the most common extrusion was in the medial meniscus. The reason for extrusion being more common in the medial meniscus may be related to the anatomical structure and the medial meniscus being the point of weight-bearing. Tears involving the meniscal root (central attachment) are also significantly related to the severity of meniscal extrusion, seen in 3% with minor extrusion and 42% with major extrusion. With meniscus extrusion, the meniscus is unable to resist hoop stresses and cannot shield the adjacent articular cartilage from excessive axial load. Over time, this can lead to symptomatic knee osteoarthritis. Tears of the posterior meniscal root can be easily missed because of inconsistent clinical symptoms and can be overlooked without thorough arthroscopic examination. Detection of meniscal extrusion is important not only because it is associated with underlying tear but also because meniscal extrusion itself is thought to be related to development of osteoarthritis.
A Long-Term, Open-Label Safety and Tolerability Study of Lisdexamfetamine Dimesylate in Children Aged 4–5 Years with Attention-Deficit/Hyperactivity Disorder Objective: To evaluate the long-term safety and tolerability of lisdexamfetamine dimesylate (LDX) in preschool-aged children (4–5 years of age inclusive) diagnosed with attention-deficit/hyperactivity disorder (ADHD). Methods: This phase 3 open-label study (ClinicalTrials.gov registry: NCT02466386) enrolled children aged 4–5 years meeting Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR) criteria for a primary ADHD diagnosis and having baseline ADHD Rating Scale-IV Preschool version total scores (ADHD-RS-IV-PS-TS) ≥24 for girls or ≥28 for boys and baseline Clinical Global Impressions–Severity scores ≥4. Participants were directly enrolled or enrolled after completing one of two antecedent short-term LDX studies. Over 52 weeks of treatment, participants received once-daily dose-optimized LDX (5–30 mg). Safety and tolerability assessments included treatment-emergent adverse events (TEAEs) and vital sign changes. Clinical outcomes included ADHD-RS-IV-PS-TS changes from baseline. Results: Among 113 participants in the safety set, optimized LDX dose was 5, 10, 15, 20, and 30 mg in 1 (0.9%), 12 (10.6%), 21 (18.6%), 26 (23.0%), and 53 (46.9%) participants, respectively. Of the safety set, 69 participants (61.1%) completed the study. TEAEs were reported in 76.1% of participants; no serious TEAEs were reported. Only one type of TEAE was reported in >10% of participants (decreased appetite, 15.9%). Mean ± standard deviation (SD) changes in vital signs and body weight from baseline to week 52/or early termination (ET; n = 101) were 1.9 ± 7.73 mmHg for systolic blood pressure, 3.1 ± 7.58 mmHg for diastolic blood pressure, 4.7 ± 11.00 bpm for pulse, and 0.6 ± 1.38 kg for body weight. Over the course of the study, mean ± SD change in ADHD-RS-IV-PS-TS from baseline to week 52/ET was −24.2 ± 13.34 (n = 87). Conclusions: In this long-term 52-week study of children aged 4–5 years with ADHD, dose-optimized LDX (5–30 mg) was well tolerated and associated with reductions from baseline in ADHD symptoms. Introduction R esults from the 2016 National Survey of Children's Health (NSCH) indicated that *6.1 million children aged 2-17 years in the United States had ever received an attention-deficit/ hyperactivity disorder (ADHD) diagnosis from a health care pro-vider, including 388,000 children aged 2-5 years (Danielson et al. 2018). Pharmacologic interventions are recommended in children with ADHD whose symptoms do not improve with parent training and behavior management (Wolraich et al. 2019). Psychostimulants are the recommended pharmacotherapy for children and adolescents diagnosed with ADHD (Wolraich et al. 2019). Although most ADHD pharmacotherapies are not approved for use in preschool-aged children by the U.S. Food and Drug Administration (FDA), pharmacotherapy has been used to treat ADHD in children <6 years of age (Visser et al. 2016;Danielson et al. 2018;Davis et al. 2019). Approximately 18% of children with current ADHD aged 2-5 years were prescribed ADHD pharmacotherapy in 2016 according to NSCH data, with most of these individuals aged 4-5 years (Danielson et al. 2018). Lisdexamfetamine dimesylate (LDX) is approved in the United States for use in individuals aged ‡6 years diagnosed with ADHD (Vyvanse Ò 2017). 
Treatment with LDX (30-70 mg) was more effective compared with placebo in treating ADHD symptoms and had a favorable short-term (4-7 weeks) safety and tolerability profile in children and adolescents (Biederman et al. 2007;Findling et al. 2011;Coghill et al. 2013). Two completed antecedent shortterm LDX treatment studies examined lower doses in preschoolaged (4-5 years) children diagnosed with ADHD in support of a pediatric written request by the FDA. In a phase 2 open-label study (ClinicalTrials.gov registry: NCT02402166) in children with ADHD aged 4-5 years, LDX was well tolerated with a starting dose of 5 mg uptitrated to a maximum dose of 30 mg (Childress et al. 2020a). After an 8-week treatment period, the most frequently reported treatment-emergent adverse events (TEAEs) were decreased appetite, insomnia, and upper respiratory tract infection (Childress et al. 2020a). A 26-point mean reduction from baseline in the ADHD Rating Scale-IV Preschool version total scores (ADHD-RS-IV-PS-TS) was observed at the final on-treatment visit, and the majority (83%) of the study participants showed improvement on the Clinical Global Impressions-Improvement (CGI-I) scale (Childress et al. 2020a). In a phase 3, placebo-controlled, fixed-dose, short-term, 6-week study (NCT03260205) of LDX (5,10,20,or 30 mg) or placebo in children aged 4-5 years with ADHD, LDX was more efficacious than placebo in reducing symptoms and had a safety and tolerability profile consistent with previous LDX studies in older children (Childress et al. 2020b). This article reports the findings from a 52-week phase 3 study (NCT02466386) that further examined the long-term safety and tolerability of LDX (5-30 mg) in preschool-aged children (4-5 years of age inclusive) diagnosed with ADHD. Study design This was a phase 3, open-label multicenter study with participants who were directly enrolled or enrolled after completing one of two antecedent short-term LDX studies (phase 2, NCT02402166 or phase 3, NCT03260205) (Childress et al. 2020a(Childress et al. , 2020b. This long-term (52-week) study included four periods: screening and washout, dose optimization, dose maintenance, and safety follow-up (Fig. 1). The study was conducted in accordance with guidelines of the International Council for Harmonisation Good Clinical Practice and the principles of the Declaration of Helsinki, as well as other applicable local ethical and legal requirements. Signed informed consent and assent of the study participant and the participant's parent(s) or legally authorized representative (LAR) were required before any study-related procedures, including screening assessments. The study protocol, protocol amendments, final approved informed consent and assent documents, and all relevant supporting information were submitted by the investigator to the institutional review board (IRB) and approved by the IRB and regulatory agency (as appropriate) before study initiation. Participants The study enrolled boys and girls aged 4-5 years meeting Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR) criteria for a primary ADHD diagnosis (American Psychiatric Association 2000). Participants were required to have baseline scores ‡28 (boys) or ‡24 (girls) on the ADHD-RS-IV-PS-TS and ‡4 on the Clinical Global Impressions-Severity (CGI-S) scale. 
The participants were also required to have undergone an adequate course of nonpharmacologic treatment or have symptoms severe enough to warrant enrollment without prior nonpharmacologic treatment, be engaged in a structured group activity that allowed for assessment of ADHD symptoms and impairment outside of the home (e.g., preschool, sports, Sunday school), have a screening Peabody Picture Vocabulary Test standard score ≥70, and have lived with the same parent/LAR for ≥6 months. The participants and their parents/LARs were also required to be willing and able to comply with, and be available for, all testing and protocol requirements, including oversight of morning dosing.

FIG. 1. Study design. *All participants underwent the dose optimization period except those who enrolled following the phase 2 antecedent study, which included a similar dose optimization phase. Week 52/ET, data from protocol-defined last treatment study visit or early termination visit.

Participants were excluded from the study if they were terminated from an antecedent LDX study for noncompliance, experienced a serious adverse event (SAE) or adverse event (AE) resulting in termination, or required or anticipated the need to take medications that have central nervous system effects or affect performance, such as sedating antihistamines and decongestant sympathomimetics or monoamine oxidase inhibitors. Participants were also excluded if they had any concurrent or acute illness, condition, or disability that could confound safety assessments or increase participant risk, or had a current controlled or uncontrolled comorbid Axis I or II psychiatric disorder (e.g., posttraumatic stress disorder, adjustment disorder, bipolar disorder, pervasive development disorder, obsessive-compulsive disorder, psychosis/schizophrenia). Additional exclusion criteria included a history of serious cardiac problems; a screening or baseline blood pressure (BP) ≥95th percentile for age, sex, and height; previous failure to fully respond to amphetamine therapy; and a documented allergy, hypersensitivity, or intolerance to amphetamine or LDX excipients.

Treatment

All participants underwent a dose optimization period at the start of the study, except for those who enrolled following the phase 2 antecedent study, which included a similar dose optimization phase. During the dose optimization period, participants received a once-daily morning dose of 5, 10, 15, 20, or 30 mg of LDX, with a beginning dose of 5 mg and stepwise uptitration until an optimal dose was reached. The dose optimization was performed during the first 6 weeks to ensure that participants received the optimal dosage of the study drug based on TEAEs and clinical criteria. The participants' responses during the dose optimization period were divided into one of three categories: (1) intolerable response, with participants experiencing intolerable AEs; (2) ineffective response, where the participants failed to achieve at least a 30% reduction in ADHD-RS-IV-PS-TS from baseline of the antecedent study (if applicable) and a CGI-I score of 1 or 2; and (3) acceptable response, where the participants achieved at least a 30% reduction in ADHD-RS-IV-PS-TS from baseline of the antecedent study (if applicable) and a CGI-I score of 1 or 2 with tolerable AEs. Participants assessed as having an intolerable response were tapered to a lower LDX dose. If the lower dose also produced intolerable side effects, the participant was discontinued from the study.
Participants assessed as having an ineffective response were titrated to the next highest LDX dose if available, provided no tolerability issues arose. Dose optimization continued until an acceptable response was achieved. Participants assessed as having an acceptable response were maintained on their current dose for the remainder of the study. End points Safety and tolerability assessments included TEAEs and changes in vital signs, body weight, and body mass index (BMI); 12lead electrocardiogram (ECG) recordings; sleep assessments (Children's Sleep Habits Questionnaire [CSHQ] and sleep diary); and the Columbia-Suicide Severity Rating Scale (C-SSRS) (Owens et al. 2000;Posner et al. 2011). An AE was defined as any untoward medical occurrence in a clinical investigation subject administered a pharmaceutical prod-uct and that does not necessarily have a causal relationship with this treatment. An SAE was defined as any untoward event resulting in death, life-threatening condition, inpatient hospitalization or prolongation of existing hospitalization, persistent or significant disability/incapacity, congenital abnormality/birth defect, or important medical event (e.g., allergic bronchospasm, blood dyscrasias or convulsions, or development of drug dependence or drug abuse). A severe AE was defined as an event that interrupted usual activities of daily living, significantly affected clinical status, or may have required intensive therapeutic intervention. A physical examination was performed at screening and baseline by a qualified licensed individual (physician, physician assistant, or nurse practitioner). In addition, an abbreviated physical examination was required before the baseline visit if >30 days had elapsed since the screening visit. Vital signs (systolic blood pressure [SBP], diastolic blood pressure [DBP], and pulse), weight, and 12-lead ECGs were assessed at screening, baseline, and each on-treatment visit. SBP and DBP measurements (sitting) were performed at each visit to the site. The vital sign measurements (BP, pulse, respiratory rate, and ECG) were obtained after the participant had rested for a minimum of 5 minutes. Any significant deviation of vital sign measurement from baseline was recorded as an AE by the investigator. Body weight was measured at screening, baseline, and each on-treatment visit. Sleep was assessed at screening, baseline, and each on-treatment visit with the CSHQ and sleep diary. The CSHQ is a 33-item parent-/LAR-reported questionnaire that evaluates common sleep problems in children. It is grouped into eight subscales (bedtime resistance, sleep-onset delay, sleep duration, sleep anxiety, night awakenings, parasomnias, sleep-disordered breathing, and daytime sleepiness) based on the participant's sleep behavior. A sleep diary was completed by the participant's parent/LAR to log daytime napping, bedtime, and wake time. The C-SSRS (pediatric/cognitively impaired version) was administered at screening, baseline, and each on-treatment visit, with the ''lifetime recent'' version completed at screening and the ''since last visit'' version completed at postscreening visits. The C-SSRS is a semistructured interview that captures the occurrence, severity, and frequency of suicide-related thoughts and behaviors during the study period. The interview included definitions and ageappropriate suggested questions to extract and analyze the type of information required to assess a suicide-related thought or behavior occurring during the course of the assessment period. 
The efficacy end point was the change from baseline in clinician-administered ADHD-RS-IV-PS-TS at visit 1 and at each subsequent visit up to and including the end-of-study visit, to capture the ADHD symptoms within each study period. The ADHD-RS-IV-PS is an 18-item clinician-administered instrument that rates ADHD symptom frequency defined by DSM-IV-TR criteria using examples appropriate for the developmental level of preschool children. The items are scored on a 4-point scale (range, 0 [never or rarely] to 3 [very often]); the total score ranges from 0 to 54. The normative score is 13.9 for boys and 7.8 for girls (McGoey et al. 2007). The items can be further grouped into two 9-item subscales to assess inattention and hyperactivity/impulsivity. The ADHD-RS-IV-PS was used to guide dosing decisions and was reviewed/completed by the investigator or subinvestigator. The additional efficacy end point was the global evaluation of participant disease severity and improvement over time as measured by the CGI scale (Guy 1976). The severity of the participant's condition was assessed by the CGI-S, a 7-point scale ranging from 1 (normal, not at all ill) to 7 (among the most extremely ill subjects), at baseline of the antecedent study or at baseline (visit 0) of this study for the directly enrolled participants. The CGI-I assessed ADHD improvement (from the appropriate baseline visit) at each visit from visit 1 to the end-of-study visit or early termination (ET) visit. CGI-I was graded on a 7-point scale (range, 1 [very much improved] to 7 [very much worse]). The CGI-S and CGI-I were completed by a clinician trained and experienced in the evaluation of preschool children with ADHD. The CGI-I was used to guide dosing decisions and was reviewed/completed by the principal investigator or subinvestigator. The general cognitive ability of the participants was assessed by the Peabody Picture Vocabulary Test, Fourth Edition. It measures an individual's receptive (hearing) vocabulary for Standard American English and provides a quick estimate of verbal ability or scholastic aptitude. The Peabody Picture Vocabulary Test was administered by site personnel with training and experience in general psychological testing approved by the sponsor or delegated vendor.

Data and statistical analysis
The safety analysis set consisted of all participants who took ≥1 dose of investigational product. The full analysis set consisted of all participants in the safety analysis set who had ≥1 postdose ADHD-RS-IV-PS-TS assessment during the study. Unless otherwise specified, demographic and baseline characteristics were sourced from the antecedent studies or from case report forms for directly enrolled participants. All analyses were limited to descriptive statistics for observed data and change from baseline, where applicable. Efficacy analyses were performed using the full analysis set. For all efficacy analyses, baseline was defined as either the baseline value from the antecedent study or, for directly enrolled participants, the last observation before the first dose of investigational product. There was no primary efficacy end point defined for this study. Efficacy and safety data were summarized by optimized dose in a post hoc analysis. The optimized dose was established by the week 6 visit (or the week 8 visit for patients who enrolled from the phase 2 antecedent study and did not undergo a dose optimization period in the current study; Fig. 1). For any participant who discontinued before week 6/8, the optimized dose was selected as the last dose level exposed. For any participant who changed dose after week 6/8, the optimized dose was set at the dose level the participant received with greatest frequency. In the summaries by optimized dose, participants were evaluated for a single dose level with all usable data regardless of the actual dose level at the time of the data point.
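The post hoc optimized-dose assignment described in the statistical-analysis paragraph above can be expressed as a short rule. The sketch below is a hedged illustration only: the weekly dose-history representation and the tie-breaking behaviour are assumptions, since the protocol does not state them.

```python
# Minimal sketch of the post hoc "optimized dose" assignment described above.
from collections import Counter

def optimized_dose(weekly_doses_mg, discontinued_before_week_6_or_8):
    """weekly_doses_mg: chronological list of the LDX dose (mg) received each week."""
    if discontinued_before_week_6_or_8:
        # last dose level the participant was exposed to
        return weekly_doses_mg[-1]
    # dose level received with the greatest frequency
    # (ties broken arbitrarily here -- the protocol's rule is not stated)
    return Counter(weekly_doses_mg).most_common(1)[0][0]

# Example: optimized at 20 mg, briefly raised to 30 mg, then returned to 20 mg
print(optimized_dose([5, 10, 20, 20, 20, 30, 20], False))  # -> 20
```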
Participant disposition and demographics
Of the 122 participants screened, 115 were enrolled in the study. The safety analysis set had 113 participants who were either rollover participants completing antecedent studies (n = 86) or directly enrolled participants (n = 27; Fig. 2). A total of 69 participants (61.1%) from the safety set completed the study. The most frequently reported reasons for discontinuation from the study were withdrawal by the subject or parent(s)/LAR (n = 14) and lack of efficacy (n = 8). The mean ± SD age at the time of consent for study participation was 4.8 ± 0.63 years (n = 113).

Safety
As shown in Table 2, TEAEs were reported in 76.1% of participants; however, the majority were mild or moderate in severity. No serious TEAEs were reported, and the incidence of severe TEAEs was low (eight severe TEAEs reported in seven participants: decreased appetite [n = 2]; and sleep disorder, irritability, affect lability, influenza, crying, and neutropenia in one participant each). The only TEAE reported in >10% of participants was decreased appetite (15.9%). In the total population, 45.1% of participants had TEAEs that were considered related to the study drug according to the investigator. The frequency of TEAEs and severe TEAEs in the highest optimized LDX dose subgroup (LDX 30 mg) was similar to or lower than the frequency in lower optimized LDX dose subgroups. Mean ± standard deviation (SD) changes from baseline to week 52 or ET (n = 101) were 4.67 ± 11.000 bpm for pulse, 1.92 ± 7.729 mmHg for SBP, 3.10 ± 7.581 mmHg for DBP, 0.56 ± 1.383 kg for body weight, and −18.65 ± 20.166 for BMI percentile (Table 3). Mean change from baseline in body weight and BMI by optimized dose is shown in Supplementary Figures S1 and S2. At week 52/ET, shifts from healthy weight (n = 6 [5.9%]) and overweight (n = 1 [1.0%]) categories to underweight were observed in seven participants from baseline. A total of 15 participants who were overweight at baseline shifted to a healthy weight category, and 7 participants who were obese at baseline shifted to either overweight (n = 3) or healthy weight (n = 4) categories. Three participants who were underweight at baseline shifted to overweight, and one participant shifted from overweight at baseline to obese at week 52/ET.
[Table footnotes: Baseline was defined as the baseline value from the antecedent study (phase 2 study [NCT02402166]; phase 3 study [NCT03260205]) for antecedent participants or the last observation before the first dose of investigational product for directly enrolled participants. (a) Age was calculated as the difference between date of birth and date of informed consent for the antecedent study, truncated to years; n = 11 (10 mg), n = 16 (15 mg), n = 17 (20 mg), n = 41 (30 mg), n = 86 (total). (b) Current age was calculated as the difference between date of birth and date of informed consent for this study (NCT03260205), truncated to years. (c) BMI was calculated as weight (kg)/height (m)². (d) n = 20 (15 mg) and n = 112 (total). Abbreviations: ADHD, attention-deficit/hyperactivity disorder; ADHD-RS-IV-PS, ADHD Rating Scale-IV Preschool version; BMI, body mass index; CGI-S, Clinical Global Impressions-Severity; LDX, lisdexamfetamine dimesylate; SD, standard deviation.]
No participant had a positive postbaseline C-SSRS response, and there were no reports of suicidal behavior or suicide attempts in any of the participants. Results from the CSHQ (Supplementary Table S1) and sleep diaries (Supplementary Table S2) show no notable overall trends across the optimized LDX dose subgroups.

Efficacy
Over the course of the study, the mean ± SD change in ADHD-RS-IV-PS-TS from baseline to week 52/ET was −24.2 ± 13.34, showing an overall decrease in ADHD symptoms (Fig. 3). Improvements in ADHD symptoms were also observed with the CGI-I scale, with 73.6% of participants having improved (very much improved [35.6%] or much improved [37.9%]) CGI-I measurements (Fig. 4). Similar trends in ADHD-RS-IV-PS-TS reduction and CGI-I scale improvement were observed across all optimized LDX dose subgroups (Figs. 3 and 4).

Discussion
In this phase 3, open-label study, the long-term safety and tolerability of LDX (5-30 mg) was evaluated in children aged 4-5 years with ADHD. LDX was safe and well tolerated, with few TEAEs leading to withdrawal of study drug, and no SAEs or deaths associated with the investigational product. Overall, the frequency of TEAEs and severe TEAEs did not increase with higher optimized LDX dose. Treatment with LDX (5-30 mg) reduced ADHD symptoms measured by ADHD-RS-IV-PS-TS from baseline to week 52/ET and improved ADHD symptoms measured by the CGI-I scale, both in the overall study population and in each of the optimized LDX dose subgroups. These results are consistent with recent studies of LDX in children with ADHD. In the prior 6-week, phase 3, fixed-dose antecedent study, participants treated with LDX demonstrated a mean change from baseline in ADHD-RS-IV-PS-TS at week 6 of −14.7 versus −8.8 for the placebo cohort (Childress et al. 2020b). In a 4-week phase 3 study of older children aged 6-12 years, the mean change from baseline in ADHD-RS-IV-PS-TS was −26.7 with LDX (fixed doses of 30, 50, or 70 mg/d) and −6.2 with placebo (Biederman et al. 2007). Finally, a 7-week phase 3 study of children and adolescents aged 6-17 years reported a mean change from baseline in ADHD-RS-IV-PS-TS of −24.3 with LDX (30, 50, or 70 mg/d, dose optimized) and −5.7 with placebo (Coghill et al. 2013; Childress et al. 2020a, 2020b). Although this study is limited by the absence of a placebo control arm, this design is in the best interests of the study participants because it is not recommended to keep participants with ADHD on placebo for the duration of a long-term study. Caution must be taken in interpreting results by optimized LDX dose subgroup because of the small sample sizes and the potential impact of confounding factors that may affect the observations. Finally, there were limited data on Hispanic, Asian, and Native American populations, and psychiatric comorbidities were excluded, which may limit the generalizability of the results.

Conclusions
LDX at doses between 5 and 30 mg/d over 52 weeks of treatment was found to be safe and well tolerated in children aged 4-5 years with ADHD. No new safety signals were identified, and the efficacy profile was consistent with the robust improvements in ADHD symptoms observed in previous studies of children, adolescents, and adults with ADHD (Biederman et al. 2007; Adler et al. 2008; Findling et al. 2011; Childress et al. 2020a, 2020b).
Clinical Significance
LDX is approved for the treatment of ADHD in patients aged ≥6 years, but clinical trial evidence of its safety and efficacy for the treatment of younger children had been lacking. This 52-week open-label study reports that LDX at doses between 5 and 30 mg/d was safe and well tolerated in children aged 4-5 years with ADHD. No new safety signals were identified, and the efficacy profile was consistent with the robust improvements in ADHD symptoms observed in previous studies of children, adolescents, and adults with ADHD.

Data Sharing
The datasets, including the redacted study protocol, redacted statistical analysis plan, and individual participant data supporting the results reported in this article, will be made available within three months of the initial request to researchers who provide a methodologically sound proposal. The data will be provided after de-identification, in compliance with applicable privacy laws, data protection, and requirements for consent and anonymization.
2022-03-02T06:23:44.585Z
2022-02-25T00:00:00.000
{ "year": 2022, "sha1": "78878ba5747562b3e442715ead70d22c3e6445d7", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1089/cap.2021.0138", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "10adf8acc21bd1b3041ca0729d0bc164684e8cac", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
14723478
pes2o/s2orc
v3-fos-license
H2AX regulates meiotic telomere clustering
The histone H2A variant H2AX is phosphorylated in response to DNA double-strand breaks originating from diverse origins, including dysfunctional telomeres. Here, we show that normal mitotic telomere maintenance does not require H2AX. Moreover, H2AX is dispensable for the chromosome fusions arising from either critically shortened or deprotected telomeres. However, H2AX has an essential role in controlling the proper topological distribution of telomeres during meiotic prophase I. Our results suggest that H2AX is a downstream effector of the ataxia telangiectasia-mutated kinase in controlling telomere movement during meiosis.

Introduction
Telomeres are not only critical components of somatic chromosomes, but also play a unique function during meiosis. Meiosis is a cellular differentiation program during which physiological double-strand breaks (DSBs) are created and repaired, giving rise to recombination events between parental chromosomes. During the first meiotic prophase, telomeres redistribute and cluster, forming a so-called "bouquet," which may ensure proper homologue pairing before recombination (Loidl, 1990; Scherthan, 2001; Yamamoto and Hiraoka, 2001). The ataxia telangiectasia-mutated (ATM) kinase is required for transit through early prophase I (Pandita, 2002). In addition, ATM disruption has been found to alter telomere dynamics, leading to an accumulation of bouquet-stage nuclei with perturbed synapsis during zygotene (Pandita et al., 1999; Scherthan et al., 2000). One of the immediate targets of the ATM kinase in response to DNA damage is the histone H2A variant H2AX (Redon et al., 2002). The analysis of H2AX-deficient mice has demonstrated a role for H2AX in a variety of responses to DSBs, including DNA repair, checkpoint signaling, and Ig class switching (Petersen et al., 2001; Bassing et al., 2002; Celeste et al., 2002; Fernandez-Capetillo et al., 2002; Reina-San-Martin et al., 2003). Similar to ATM-deficient cells, H2AX−/− cells senesce within a few passages in culture, and display an increased frequency of chromosomal aberrations (Celeste et al., 2002, 2003a). Moreover, H2AX−/− mice exhibit male-specific sterility, which is likely due to defects in chromatin remodeling during meiosis (Fernandez-Capetillo et al., 2003). Because of the strong correlation between defective DSB repair, genomic instability, and telomere dysfunction, we examined the role of H2AX in both mitotic and meiotic telomere maintenance.

Results and discussion
To determine whether H2AX regulates telomere length, we performed quantitative FISH (Zijlmans et al., 1997) on metaphase spreads derived from four independent sets of H2AX knockout and control mouse embryonic fibroblasts (MEFs). Although telomeres were slightly elongated in some of the H2AX−/− MEFs relative to H2AX+/+ isogenic cultures (Table I), this difference in telomere length was not statistically significant (t test, P > 0.1; at least 15 metaphases examined for each culture). Moreover, both genotypes displayed a similar heterogeneity in the frequency of telomere fluorescence intensities, indicating that H2AX deficiency did not modify the distribution of individual telomere lengths (Fig. S1, available at http://www.jcb.org/cgi/content/full/jcb.200305124/DC1).
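The comparison behind the "no significant difference" statement above amounts to normalizing per-telomere fluorescence by a bead standard and comparing genotype means. The Python sketch below illustrates that computation; the intensity values are invented, scipy is assumed to be available, and this is not the TFL Telo workflow actually used in the paper.

```python
# Hedged sketch: bead-normalized telomere fluorescence and a two-sample t test.
# All intensity values are invented for illustration.
import numpy as np
from scipy import stats

def normalize(telomere_intensities, bead_intensity):
    """Express per-telomere Cy3 intensities relative to the fluorescent-bead
    signal acquired in the same experiment (corrects for day-to-day variation)."""
    return np.asarray(telomere_intensities, dtype=float) / bead_intensity

wt = normalize([1200, 1350, 1180, 1420, 1290], bead_intensity=500.0)
ko = normalize([1260, 1310, 1400, 1230, 1330], bead_intensity=480.0)

t, p = stats.ttest_ind(wt, ko, equal_var=False)  # Welch's two-sample t test
print(f"mean WT = {wt.mean():.2f}, mean KO = {ko.mean():.2f}, P = {p:.2f}")
# A P value > 0.1, as reported in the text, would indicate no significant
# genotype difference in normalized telomere fluorescence.
```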
To rule out the possibility that the decreased proliferative capacity of H2AX−/− MEFs could bias the measurements of telomere lengths, we performed quantitative FISH in a variety of other primary cells, including splenocytes, purified B cells, and lymph node T cells, derived from independent H2AX+/+ and H2AX−/− littermates (Table S1). None of the cell types showed a significant difference in telomere length (t test, P > 0.1; at least 15 metaphases examined for each culture). As an additional quantitative measurement, we analyzed telomere lengths in B and T lymphocytes by flow cytometry FISH (Rufer et al., 1998), which confirmed the lack of significant differences between the two genotypes (unpublished data). We conclude that H2AX does not regulate telomere length in mice. H2AX deficiency is associated with chromosomal instability (Bassing et al., 2002, 2003; Celeste et al., 2002, 2003a). To determine whether chromosomal aberrations arise in part from modifications in telomere structure, as is the case in numerous mouse models with defects in DSB repair (Goytisolo and Blasco, 2002), we analyzed individual metaphase spreads from four H2AX+/+ and H2AX−/− MEF cell lines that had been subjected to telomere FISH. Consistent with our previous observations, H2AX−/− MEFs exhibited a dramatic increase in chromosome breaks relative to wild-type controls (Fig. 1 A, bottom; Fig. 1 C). However, despite the high level of genomic instability in H2AX−/− cells, we did not detect any significant increase in the number of telomere fusions in these cells (Fig. 1 A, top). Thus, telomere dysfunction does not contribute significantly to the increased genomic instability in H2AX−/− mice. To further examine the impact of H2AX deficiency on chromosomal instability in the presence of shortened telomeres, we intercrossed H2AX+/− mice with successive generations of mice deficient in the RNA component of telomerase (Terc; Blasco et al., 1997; Fig. S2, available at http://www.jcb.org/cgi/content/full/jcb.200305124/DC1). Consistent with previous reports (Lee et al., 1998; Hande et al., 1999), we observed a dramatic increase in the percentage of telomere fusions arising in successive generations of Terc−/− mice (Fig. 1 A, top; Fig. 1 C). However, H2AX deficiency had no apparent role in this type of fusion because a similar percentage of telomere fusions was observed in four independent G5 H2AX−/− Terc−/− (6 ± 2.1%) and G5 H2AX+/+ Terc−/− (5.9 ± 1.6%) MEF cultures. Although H2AX−/− MEFs exhibited slightly higher levels of chromosome breaks in the late-generation Terc knockout background than in the presence of Terc (Fig. 1 A, bottom), this difference was not statistically significant (G0 H2AX−/− vs. G5 H2AX−/−; t test, P > 0.1). This finding is in contrast to ATM deficiency, which has been shown to exacerbate telomere fusions and instability in the absence of Terc (Wong et al., 2003). Telomere fusions not only arise from shortened telomeres, but also arise from structural alterations such as those triggered by the inactivation of telomere-associated proteins. For example, inhibition of TRF2 results in end-to-end fusions, which are generated by the nonhomologous end-joining (NHEJ) DNA repair pathway (Smogorzewska et al., 2002). Recent reports documented the association of several DNA damage response factors, including γ-H2AX, at uncapped telomeres (d'Adda di Fagagna et al., 2003; Takai et al., 2003).
To determine the role of H2AX in fusions arising from deprotected telomeres, H2AX+/+ and H2AX−/− MEFs were infected with a TRF2 dominant-negative-expressing retrovirus (TRF2ΔBΔM) or with the corresponding vector pLPC (Karlseder et al., 1999). Following the strategy used to assess the role of the NHEJ factor DNA ligase IV in telomere fusions (Smogorzewska et al., 2002), MEFs were generated in a p53-deficient background, which partially alleviates the growth defects in primary H2AX−/− MEFs (Celeste et al., 2002, 2003a). In contrast to DNA ligase IV, H2AX was not essential for fusions arising from TRF2 dominant-negative infection (Fig. 1, B and C). In 30 metaphases examined by telomere FISH, we observed a total of 43 telomere fusions in H2AX−/− p53−/− MEFs, compared with 29 fusions in H2AX+/+ p53−/− MEFs. Thus, although H2AX appears to modulate NHEJ (Downs et al., 2000; Bassing et al., 2003; Celeste et al., 2003a), H2AX is not required for chromosome fusions arising from either shortened or structurally deprotected telomeres. During mouse meiosis, telomeres reposition along the nuclear periphery to create a characteristic bouquet configuration. This clustering of chromosome ends generally occurs at the leptotene/zygotene transition (Scherthan, 2001), coincident with the initiation of homologous DSB repair (for review see Hunter et al., 2001). To date, the only protein that has been implicated in the regulation of the bouquet stage in mammals is the ATM kinase (Pandita et al., 1999). To determine whether the ATM target H2AX is involved in meiotic telomere dynamics, we investigated telomere and centromere behavior by FISH (Scherthan et al., 1996) in wild-type and H2AX-deficient testes preparations from 4-wk-old mice (Fig. 2 A). The analysis of structurally preserved spermatocyte nuclei revealed similar frequencies of preleptotene spermatocytes (1.0 vs. 1.6%) in wild-type and mutant testes suspensions, respectively (based on 2,772 wild-type and 2,567 mutant nuclei), with the difference being statistically insignificant (P = 0.1; χ2 and Fisher test; Fig. 2 B). However, we noted a 20-fold increase in the frequency of H2AX−/− bouquet-stage nuclei (H2AX−/−, 6%; wild-type, 0.4%; based on 2,567 mutant and 2,772 wild-type spermatogenic nuclei), with the differences being highly significant (P < 0.0001; χ2 and Fisher test; Fig. 2 B). To determine the stages in which elevated levels of bouquet nuclei accumulate, we combined immunostaining of the telomere-associated protein TRF1 with that of SCP3 (Lammers et al., 1994), a component of the axial/lateral element of the synaptonemal complex (SC; Fig. 2 C). Three-dimensional microscopy revealed that TRF1 signals capped the ends of axial/lateral elements that clustered at the nuclear envelope. Strikingly, many of the structurally preserved H2AX−/− prophase I nuclei displayed a bouquet topology with telomeres clustered in a limited nuclear envelope region from early leptotene until early pachytene, with long U-shaped SCs emanating from the clustered telomeres (Fig. 2 C). The occurrence of telomere clustering as early as leptotene and its maintenance up to late zygotene/pachytene stages contrasts with wild-type spermatogenesis of adult mice, where telomere clustering occurs only in a limited time window during the leptotene/zygotene transition (Scherthan et al., 1996).
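As a rough numerical check on the bouquet-frequency comparison above, the sketch below runs Fisher's exact test on counts back-calculated from the reported percentages and nucleus totals. The exact counts used in the paper are not given, so these are approximations, and scipy is assumed to be available.

```python
# Hedged sketch: Fisher's exact test on approximate bouquet-stage nucleus counts.
from scipy.stats import fisher_exact

mutant_total, wt_total = 2567, 2772
mutant_bouquet = round(0.06 * mutant_total)   # ~6% of H2AX-/- nuclei
wt_bouquet = round(0.004 * wt_total)          # ~0.4% of wild-type nuclei

table = [[mutant_bouquet, mutant_total - mutant_bouquet],
         [wt_bouquet, wt_total - wt_bouquet]]
odds_ratio, p = fisher_exact(table)
print(f"odds ratio ~ {odds_ratio:.1f}, P = {p:.1e}")
# P comes out far below 0.0001, consistent with the highly significant
# difference reported in the text.
```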
In testes suspensions of wild-type mice, bouquet-stage cells are generally detected at an average frequency of 0.2-0.8%, which underlines the short-lived nature of this stage in spermatogenesis (Scherthan et al., 1996, 2000). Thus, the significant increase in bouquet frequencies in the H2AX knockouts as compared with age-matched controls suggests that the absence of H2AX leads to an extended bouquet stage. Moreover, in contrast to wild-type spermatocytes, which exhibit massive H2AX phosphorylation in response to Spo11-mediated DSBs (Mahadevaiah et al., 2001), we found that γ-H2AX staining was largely absent in ATM−/− leptotene/zygotene-stage spermatocytes (Fig. S3, available at http://www.jcb.org/cgi/content/full/jcb.200305124/DC1), therefore demonstrating that meiotic DSB-triggered γ-H2AX formation is dependent on ATM. These results place H2AX downstream of ATM in the signal transduction pathway that orchestrates meiotic telomere clustering. The initiation of telomere clustering appears to be a default reaction because it occurs in the absence of synapsis, homologous chromosomes, and/or recombination (for review see Scherthan, 2001). However, the accumulation of bouquet-stage meiocytes in DSB- and SC-deficient yeast or worm meiosis (Trelles-Sticken et al., 1999; MacQueen et al., 2002) suggests that the resolution of telomere clustering is triggered upon completion of synapsis and/or repair. Consistent with this, both H2AX−/− and ATM−/− mice display an accumulation of spermatocytes with persistence of bouquet topology. The fact that bouquet-type arrangements in H2AX-deficient spermatocytes are observable up to pachytene suggests that the increased telomere clustering observed in ATM-deficient cells may be directly related to impaired phosphorylation of H2AX, rather than being an indirect consequence of the early leptotene/zygotene arrest. According to this view, ATM facilitates telomere-promoted homologue pairing via phosphorylation of H2AX, thereby coordinating clustering with the initiation of DSB repair. The dissolution of meiotic telomere clustering would then depend on the dephosphorylation of γ-H2AX, which may signal the completion of DSB repair and/or induce changes in higher-order chromatin structure (Fernandez-Capetillo et al., 2003). Because the exit from the bouquet stage is coordinated with completion of DSB repair (Trelles-Sticken et al., 1999; MacQueen et al., 2002), the elevated telomere clustering in H2AX−/− spermatocytes may therefore reflect an altered repair capacity of the H2AX knockout spermatocytes. As in many other mouse models with defects in DSB repair and/or telomere maintenance, the absence of H2AX is associated with growth defects, radiation sensitivity, genomic instability, and cancer predisposition (Bassing et al., 2002, 2003; Celeste et al., 2002, 2003a). Although a number of DNA repair proteins play essential roles in maintaining telomere structure, we have found that H2AX is largely dispensable for somatic telomere maintenance. In principle, this could be explained by the fact that H2AX is not required for the recruitment of damage sensors to DNA lesions, and therefore, the cellular response to unprotected chromosome ends may proceed normally in its absence (Celeste et al., 2003b). However, H2AX is essential for the proper spatial rearrangement of chromosome ends during the first meiotic prophase.
Further analysis will be necessary to dissect the role of meiotic telomere clustering and its dissolution with respect to homologue pairing and DSB repair.

Materials and methods
Mice and cell lines
The generation of H2AX−/−, ATM−/−, and Terc−/− mice has been described previously (Barlow et al., 1996; Blasco et al., 1997; Celeste et al., 2002). E13.5 MEFs were obtained from intercrossing mice following standard procedures, and H2AX−/− p53−/− MEFs are described elsewhere (Celeste et al., 2003a). For all experiments, littermates were compared. B lymphocytes were isolated using CD19 microbeads (Miltenyi Biotec) and were stimulated with LPS or LPS+IL4 as described previously. Splenocytes or lymph node-derived B and T lymphocytes were stimulated with either LPS or Con A, respectively.
[Figure 2 legend, partial: ... Pachytene nuclei with dispersed peripheral telomeres and satellite DNA clusters (focal plane at nuclear equator). (B) Frequency of preleptotene and bouquet spermatocytes, with the latter being dramatically increased in the H2AX knockout; see Results for details. (C) Immunofluorescence of axial/lateral cores (SCP3, red) and telomeres (TRF1, green) in structurally preserved H2AX−/− nuclei (DAPI, blue). (I) Early leptotene nucleus with a tight telomere cluster at a sector of the nuclear periphery and SCP3 speckles. (II) More advanced leptotene with short SCP3 threads and clustered telomeres. (III) Two late zygotene/pachytene bouquet nuclei with more relaxed telomere clustering near the nuclear top and U-shaped SCs that extend into the nuclear lumen. (IV) Pachytene nucleus with meandering SCs and telomeres dispersed around the nuclear periphery. Bar, 10 μm.]

Analysis of telomere lengths and fusions
Quantitative FISH analysis using a Cy3-labeled (CCCTAA) peptide nucleic acid probe (Applied Biosystems) was performed as described previously (Zijlmans et al., 1997; Hande et al., 1999). Telomere length measurements were performed on at least 15 metaphases for each cell type. DAPI chromosome and Cy3 telomere images were acquired with a constant exposure time that ensured all captured fluorescent signals were within the linear range. All the images from matched littermate samples were acquired blindly and in parallel on the same day. To correct for differences in the microscope settings and hybridization efficiencies, the fluorescence intensity of Cy3-labeled fluorescent beads (Molecular Probes, Inc.) was used to normalize intensities from different experiments. Quantitative analysis of telomere fluorescence was performed with the TFL Telo software (a gift from Dr. Peter Lansdorp), which allows for proper identification and editing of individual telomere intensities. Statistical analysis of the measured telomere intensities was performed with Microsoft Excel 2000 (Microsoft Corp.) and Prophet (BBN Technologies) software. Chromosomal aberrations, including breaks and telomere fusions, were scored by examining DAPI and telomeric images from at least 65 metaphases derived from cultures of H2AX+/+ Terc+/+ (G0), H2AX−/− Terc+/+ (G0), H2AX+/+ Terc−/− (G5), and H2AX−/− Terc−/− (G5) MEFs (a total of 417, 355, 357, and 346 metaphases were examined, respectively, for each genotype).

Retroviral infection and plasmids
The pLPC-puro and pLPC-TRF2ΔBΔM retroviral vectors have been described previously (Karlseder et al., 1999). For retroviral infection, Phoenix α cells (American Type Culture Collection) were seeded at 5 × 10^6 cells/10-cm dish, and 20 μg of each plasmid was transfected using CaPO4.
5 h after transfection, the cells were washed with PBS and the medium was replenished. A 10-ml supernatant was collected 72 h after transfection, passed through a 0.45-μm filter, and supplemented with polybrene at 4 μg/ml. MEFs were seeded 24 h before infection at 8 × 10^5 cells/10-cm dish. For infection, MEFs were overlaid with virus-containing medium and centrifuged for 1.5 h at 1,500 rpm. Cells were split into three 10-cm dishes 24 h after infection, and the medium was replaced by DME/15% FCS containing 2 μg puromycin per ml. Metaphases were prepared 96 h after selection.

Testicular preparations and bouquet analysis
Testes suspensions containing structurally preserved nuclei for simultaneous SC immunostaining, FISH, and bouquet analysis were prepared and analyzed as described previously (Scherthan et al., 2000; Scherthan, 2002). Preleptotene and bouquet nuclei were identified by perinuclear major satellite DNA or telomeres clustered at a limited sector of the nuclear periphery, respectively (Scherthan et al., 1996).

Online supplemental material
Fig. S1 demonstrates the similar frequency distribution of telomere fluorescence in H2AX+/+ vs. H2AX−/− MEFs. Fig. S2 is a schematic representation of the generation of H2AX−/− Terc−/− mice with progressively shortened telomeres. Fig. S3 demonstrates ATM-dependent phosphorylation of H2AX in response to meiotic double-strand breaks. Online supplemental material is available at http://www.jcb.org/cgi/content/full/jcb.200305124/DC1.

We thank P. Lansdorp and S.S. Poon for providing the TFL Telo image analysis program; T. de Lange for providing the retroviral constructs and TRF1 antibodies; and Dr. Richard Hodes for critical comments on the manuscript. H. Scherthan thanks T. de Lange (The Rockefeller University, New York, NY) and C. Heyting (Wageningen University, Wageningen, Netherlands) for providing SCP3 antibodies.
2014-10-01T00:00:00.000Z
2003-10-13T00:00:00.000
{ "year": 2003, "sha1": "56867e57d74231b9211d6063f076ad0721a3bdcd", "oa_license": "CCBYNCSA", "oa_url": "https://rupress.org/jcb/article-pdf/163/1/15/1310100/jcb163115.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "48594ee77895f3b749e70106f3d754cd9c644c79", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
7727705
pes2o/s2orc
v3-fos-license
Smart gating membranes with in situ self-assembled responsive nanogels as functional gates Smart gating membranes, inspired by the gating function of ion channels across cell membranes, are artificial membranes composed of non-responsive porous membrane substrates and responsive gates in the membrane pores that are able to dramatically regulate the trans-membrane transport of substances in response to environmental stimuli. Easy fabrication, high flux, significant response and strong mechanical strength are critical for the versatility of such smart gating membranes. Here we show a novel and simple strategy for one-step fabrication of smart gating membranes with three-dimensionally interconnected networks of functional gates, by self-assembling responsive nanogels on membrane pore surfaces in situ during a vapor-induced phase separation process for membrane formation. The smart gating membranes with in situ self-assembled responsive nanogels as functional gates show large flux, significant response and excellent mechanical property simultaneously. Because of the easy fabrication method as well as the concurrent enhancement of flux, response and mechanical property, the proposed smart gating membranes will expand the scope of membrane applications, and provide ever better performances in their applications. Results Fabrication of smart gating membranes. The fabrication procedure for our smart gating membranes is very simple and controllable (Fig. 1). Usually, via VIPS processes, porous membranes with symmetric cellular-like structure can be fabricated from homogenous membrane-forming solution (Fig. 1a,b). Nevertheless, the trans-membrane flux of such membranes with symmetric cellular-like structure is very low, because the pores are usually closed and not interconnected with each other 24 . That is, such symmetric cellular-like structures are actually undesirable for common porous membranes. Here, we take advantage of the uniform size and uniform distribution of pores in the symmetric cellular-like structures, and design a simple and controllable strategy to achieve the in situ self-assembly of nanogels at the pore/matrix interfaces. Monodisperse PNIPAM nanogels are easily synthesized by precipitation polymerization 23,25 , and then blended with PES in the membrane-forming solution using 1-methyl-2-pyrrolidinone (NMP) as solvent. During the VIPS processes with different preparation conditions, the nanogels are self-assembled in situ at the growing pore/matrix interfaces. The adsorption of dispersed PNIPAM nanogels in PES matrix onto the growing pore/matrix interface is driven by a reduction in the system interfacial energy (energy well Δ G 1 ) 26 , and the escape of nanogels from the interface to the growing pore phase is stopped by an increase in the system interfacial energy (energy barrier Δ G 2 ). When the nanogels are located at the growing pore/matrix interface, the system interfacial energy is the lowest (Fig. 1c); therefore, the nanogels prefer to stay firmly at the growing pore/matrix interface (Fig. 1d). PNIPAM is in the hydrophilic state at 25 °C 23,25 ; so, the PNIPAM nanogels self-assembled in situ at the growing pore/matrix interfaces tend to take in more water into the growing pore spaces. Thus, the size of membrane pores with in situ self-assembled nanogels at the interfaces is enlarged. As a result, unlike the cellular-like structure of PES membranes prepared without blending nanogels (Fig. 
1a,b), the enlarged pores with in situ self-assembled nanogels at the interfaces are interconnected with each other inside the porous membrane prepared from the nanogel-contained membrane-forming solution via the VIPS approach (Fig. 1e,f). The in situ self-assembled PNIPAM nanogels at the interfaces of the interconnected pores serve as thermo-responsive gates in the membrane (Fig. 1g,h). When the environmental temperature (T) is lower than the VPTT of the PNIPAM nanogels (T < VPTT), the nanogels are in the swollen state and thus the gate is closed (Fig. 1g); on the contrary, when T > VPTT, the nanogels are in the shrunken state and thus the gate is open (Fig. 1h). Because the membrane pores with in situ self-assembled nanogels at the interfaces are interconnected with each other, the thermo-responsive smart gates exist as three-dimensionally interconnected gating networks that connect the membrane pores (Fig. 1i-l). Such a three-dimensionally interconnected architecture of the pores and the gates can be very beneficial to the concurrent large flux and significant stimuli-response properties of smart gating membranes.
[Figure 1 caption, partial: (a,b) Vapor-induced phase separation (VIPS) process for fabricating a porous membrane with cellular-like structure (b) from a homogenous membrane-forming solution (a). (c,d) Principle of self-assembly of nanogels at the growing pore/matrix interface, in which the adsorption of dispersed nanogels in the matrix onto the growing pore/matrix interface is driven by a reduction in system interfacial energy (energy well ΔG1), and the escape of nanogels from the interface to the growing pore phase is stopped by an increase in system interfacial energy (energy barrier ΔG2). When the nanogels are located at the growing pore/matrix interface, the system interfacial energy is the lowest (c); therefore, the nanogels prefer to stay firmly at the growing pore/matrix interface (d). (e,f) Fabrication of porous membranes with self-assembled nanogels on the pore surfaces (f) from nanogel-contained membrane-forming solution (e) via the VIPS approach. (g,h) Magnified illustration of the thermo-responsive gating function with self-assembled nanogels as gates. When the environmental temperature (T) is lower than the volume phase transition temperature (VPTT) of the poly(N-isopropylacrylamide) (PNIPAM) nanogels (T < VPTT), the nanogels are in the swollen state and thus the gate is closed (g); on the contrary, when T > VPTT, the nanogels are in the shrunken state and thus the gate is open (h). (i-l) 3D graphic illustration of the functional gate (i) as well as the top view (j) and side views (k,l) of the interconnected networks of functional gates connecting pores inside the membrane.]
We then investigate the effects of the nanogel content on the microstructures and performances of the membranes. The nanogel-contained membrane-forming solution is cast into a solution film with a thickness of 200 μm on a glass plate inside a humidity chamber maintained at a chosen combination of vapor temperature and relative humidity. The cast film is kept in the humidity chamber for 2 min or 20 min to allow the VIPS process to proceed thoroughly, and is then immersed rapidly in a water bath at 22 °C to form a flat membrane. Firstly, after exposure to the vapor at 25 °C and 70% relative humidity for 20 min, FESEM micrographs show that the blending of PNIPAM nanogels significantly affects the microstructures of the membrane pores (Fig. 2b-f).
As a reference, the PES membrane prepared via VIPS without adding any PNIPAM nanogels shows typical symmetric cellular-like structure through the whole thickness of membrane (Fig. 2b1,b2), and both the size and number of pores on the membrane surface are very small (Fig. 2b3,b4). Just as designed and expected, after blending PNIPAM nanogels in the membrane-forming solutions, enlarged pores with nanogels self-assembled in situ at the pore/matrix interfaces appear in the membranes (Fig. 2c-f). The magnified FESEM micrographs clearly show that the nanogels assemble orderly at the pore/matrix interfaces (Fig. 2c2-f2), and open pores with very small sizes form at the interconnected points where the adjacent enlarged pores with in situ self-assembled nanogels meet ( Fig. 2c1-f1,c2-f2), as designed in Fig. 1f-h. In order to optimize the VIPS parameters for membrane preparation, we systematically investigate the effects of the exposure time, the relative humidity and the vapor temperature of VIPS chamber on the microstructure and performance of the membranes. The FESEM images of the membranes are shown in Fig. 3. The exposure process of casting solution in vapor is the primary difference between VIPS and LIPS; therefore, the exposure time should be very important for the membrane formation. Compared with the microstructure of the membrane prepared with exposure time of 20 min and vapor at 25 °C and 70% (RH) (Fig. 2f), that prepared with exposure time of 2 min and vapor at 25 °C and 70% (RH) is significantly different (Fig. 3a). When the exposure time is 20 min, the magnified FESEM micrographs clearly show that a lot of nanogels are observed on the pore/matrix interface and the surface (Fig. 2f2,f4); however, when the exposure time is 2 min, only a few nanogels are observed on the pore/matrix interface and the surface (Fig. 3a2,a4). This phenomenon gives an effective supplement to the formation of the gating structure. The liquid-liquid phase separation occurs in the membrane-forming solution induced by the water vapor, and then the droplets of the polymer-lean phases disperse in the continuous polymer-rich phases. The mild VIPS process gives enough time for the droplets to coarsen. At the same time, the nanogels tend to move to the matrix/growing phase interface due to its hydrophilic property. In this situation, 2 min may be enough for the formation of droplets of the polymer-lean phases, but cannot support the procedure of large number of nanogels moving to the pore/matrix interfaces. Then, we fix the exposure time at 2 min and adjust the vapor temperature to 15 °C and relative humidity to 90% (RH), separately. On the condition of exposure time of 2 min and vapor at 15 °C and 70% (RH), the membrane morphology turns to be finger-like, typical structure from LIPS ( Fig. 3b1,b3). The lower temperature slows down the phase separation process, which makes the droplets of the polymer-lean phases hard to coarsen and solidify. Meanwhile, few nanogels appear on the pore/matrix interfaces because the lower temperature slows down the moving velocity of the nanogels (Fig. 3b2,b4). However, on the condition of exposure time of 2 min and vapor at 25 °C and 90% (RH), the membrane pores on the surface are enlarged (Fig. 3c3,c4), which are in accordance with previously reported work 27 . The results show that the exposure time of 2 min is too short for the vapor to influence the membrane formation. 
Although the vapor temperature and relative humidity varies, the ideal membrane structures cannot be achieved with exposure time of 2 min. Then, we change the vapor temperature and the relative humidity with fixing the exposure time at 20 min. On the condition of exposure time of 20 min and vapor at 15 °C and 70% (RH), the lower temperature gives the droplets of polymer-lean phases more time to coarsen, so the pore size turns to be larger (Fig. 3e). On the condition of exposure time of 20 min and vapor at 25 °C and 90% (RH), the membrane pores on the surface are also enlarged (Fig. 3d). To summarize, with increasing the nanogel content from 4.25% to 17.00%, both the number of enlarged pores with in situ self-assembled nanogels at the interfaces and that of pores on the membrane surface increase, and the pores become more and more interconnected with each other. The longer exposure time benefits the formation of the designed structure in this study, the lower vapor temperature and higher relative humidity show less significant effects on the formed membrane structure. Considering the principle of convenience and easy-to-scale-up, the mild conditions are better choices. Therefore, the condition of exposure time of 20 min, and the vapor at 25 °C and 70% (RH) are preferred. The interconnected pores with PNIPAM nanogels self-assembled at the pore/matrix interfaces provide excellent three-dimensionally interconnected gating networks for the membrane to achieve concurrent large flux and significant thermo-responsive characteristics. Trans-membrane water flux and thermo-responsive gating characteristics. Our smart gating membranes with enough in situ self-assembled PNIPAM nanogels as thermo-responsive gates show concurrent high flux and significant responsive property in responding to environmental temperature change across the VPTT (Fig. 4). For the reference PES membrane prepared without any nanogels, the trans-membrane water flux is extremely low (Fig. 4a), and the slight increase of the trans-membrane water flux of this membrane with increasing temperature is due to the thermo-induced viscosity decrease of water 6,17 . With increasing the nanogel content in the membrane, trans-membrane water flux increases remarkably (Fig. 4a). The results of the trans-membrane water fluxes are in accordance with the microstructures of membranes. As mentioned above, after exposing to the vapor at 25 °C and 70% relative humidity for 20 min, with increasing the nanogel content, both the number of enlarged pores with in situ self-assembled nanogels at the interfaces and that of pores on the membrane surface increase, and the pores become more and more interconnected with each other. That is, the more the nanogel content, the larger and the more the trans-membrane pathways for water flow; as a result, the larger the trans-membrane water flux. With the nanogel content of 17.00%, the trans-membrane water flux at 44 °C under operation pressure of 0.2 MPa is as high as 8558 kg h −1 m −2 . With PNIPAM nanogels self-assembled in situ at the pore/matrix interfaces, our membranes show remarkable thermo-responsive characteristics (Fig. 4). A sharp change of water flux appears at temperature near 33 °C, which is the VPTT of PNIPAM nanogels (Fig. 2a3). When the temperature is lower than 33 °C, the nanogels are in the swollen state and the gate is closed (Fig. 
1g), and as a result the trans-membrane water flux is low; on the contrary, when the temperature is higher than 33 °C, the nanogels are in the shrunken state and the gate is open (Fig. 1h), so the water flux is high (Fig. 4a). To quantitatively characterize the thermo-responsive permeation performance of the membrane, a coefficient called the thermo-responsive factor (R 39/20) is defined as the ratio of the water flux at 39 °C to that at 20 °C under a trans-membrane pressure of 0.2 MPa. The more the nanogel content, the more PNIPAM nanogels serve as thermo-responsive gates in the membrane and, as a result, the larger the thermo-responsive factor (Fig. 4b). When the nanogel content is 17.00%, the thermo-responsive factor is as high as 10.2. The trans-membrane water flux and thermo-responsive gating characteristics of membranes prepared with different VIPS parameters are also investigated. When the preparation conditions are adjusted with the nanogel content fixed at 17.00%, the flux and the thermo-responsive characteristics of the membranes vary (Fig. 4c,d). First, for the condition of a 2 min exposure time and vapor at 15 °C and 70% RH, the membranes, which have the typical structure formed by LIPS, show obviously different performance from the others. Although the thermo-responsive factor is about 17.5, higher than for the other membranes, the flux is only 127 kg h−1 m−2 at 20 °C and 2228 kg h−1 m−2 at 39 °C because of the dense surface (Fig. 4c,d); that is, the flux capacity is limited. The other two membranes prepared with an exposure time of 2 min both have a lower thermo-responsive factor of around 5 (Fig. 4d), corresponding to the imperfect gating structures with few nanogels serving as gates at the pore/matrix interfaces. When the exposure time is extended to 20 min, the larger pore size (Fig. 3d,e) brings higher flux (Fig. 4c). From the comparison of both flux and thermo-responsive factor, the condition of a 20 min exposure time and vapor at 25 °C and 70% RH is selected as the optimum one (Fig. 4c,d). For the membranes prepared with a 20 min exposure time and vapor at 25 °C and 70% RH, the water flux of the membrane increases linearly with increasing operation pressure at both 39 °C and 20 °C (Fig. 4e), which means that the PNIPAM nanogels assembled at the pore/matrix interfaces are stable enough to resist the experimental pressure and the nanogel gates remain intact during operation. To further confirm the stability of the PNIPAM nanogels self-assembled in situ at the pore/matrix interfaces, the water that has passed through the membrane is analyzed by DLS, and no nanogels are detected in it. Therefore, our smart gating membranes possess excellent reversibility and reproducibility of their thermo-responsive performance (Fig. 4f). When the environmental temperature is repeatedly switched across the VPTT of the PNIPAM nanogels (20 °C ↔ 39 °C), the trans-membrane water fluxes at both 20 °C and 39 °C remain unchanged even after keeping the membrane in water for 70 days. Importantly, the trans-membrane water flux and the thermo-responsive property of our smart gating membranes with in situ self-assembled nanogels as functional gates can be concurrently enhanced by increasing the nanogel content.
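A minimal sketch of how the quantities in this section follow from raw permeation data: flux as collected permeate mass per membrane area per unit time, and the thermo-responsive factor R 39/20 as the ratio of the 39 °C flux to the 20 °C flux at the same trans-membrane pressure. The permeate masses below are invented and chosen only so the numbers come out on the same order as those reported; the helper names are not from the paper.

```python
# Hedged sketch: trans-membrane water flux and the thermo-responsive factor.
import math

def water_flux(permeate_mass_kg, area_m2, time_h):
    """Flux in kg h^-1 m^-2, the units used in the text."""
    return permeate_mass_kg / (area_m2 * time_h)

area = math.pi * (0.040 / 2) ** 2      # 40-mm-diameter effective membrane area
flux_20 = water_flux(permeate_mass_kg=1.05, area_m2=area, time_h=1.0)   # invented
flux_39 = water_flux(permeate_mass_kg=10.7, area_m2=area, time_h=1.0)   # invented

R_39_20 = flux_39 / flux_20            # thermo-responsive factor
print(f"J(20 C) = {flux_20:.0f}, J(39 C) = {flux_39:.0f} kg h^-1 m^-2, R = {R_39_20:.1f}")
```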
By calculating the normalized fluxes and the normalized thermo-responsive coefficients of thermo-responsive membranes with taking into account the effects of operation pressure and temperature-induced viscosity change of water, it is possible to compare the maximum normalized fluxes and thermo-responsive coefficients of membranes prepared with different methods (please see Supplementary Table S1 and Fig. S1 for details). The normalized thermo-responsive coefficient, which is the ratio of membrane resistance at low temperature to that at high temperature, can be used to compare the responsive performances of different membranes at different temperatures directly. For the previous thermo-responsive membranes prepared by introducing thermo-responsive domains into membrane materials before membrane formation via LIPS, either the maximum normalized fluxes or the maximum normalized thermo-responsive coefficients are limited (Supplementary Table S1 and Fig. S1). For the membranes prepared with grafted thermo-responsive copolymers ("Series 1" in Supplementary Table S1 and Fig. S1), although the maximum normalized fluxes are very large, the maximum normalized thermo-responsive coefficients are not high (typically less than 3.0) 20 . For the membranes prepared by blending membrane-forming materials with thermo-responsive polymers as additives ("Series 2" in Supplementary Table S1 and Fig. S1), both the maximum normalized fluxes (typically lower than 870 L m −2 h −1 bar −1 ) and the maximum normalized thermo-responsive coefficients (typically less than 1.8) are very limited 21,22 . For the membranes prepared by blending membrane-forming materials with thermo-responsive nanogels as additives ("Series 3" in Supplementary Table S1 and Fig. S1), although the maximum normalized thermo-responsive coefficients could be as high as 5.9, the maximum normalized fluxes are very low (typically less than 700 L m −2 h −1 bar −1 ) 23 . Excitingly, for our membranes prepared via VIPS with nanogel content of 17.00%, the maximum normalized flux and the maximum normalized thermo-responsive coefficient are as high as 4300 L m −2 h −1 bar −1 and 6.0 respectively (Supplementary Table S1 and Fig. S1). The results verify that, by constructing the above-mentioned unique architecture inside the membranes via VIPS, our smart gating membranes are able to achieve ever better comprehensive performances on the flux and responsive characteristics. Furthermore, the thermo-responsive gating characteristics of the composite membranes for diffusional permeation of solute molecules with different molecular weights are investigated (Fig. 5, Supplementary Fig. S2, and S3). The results show that the value of the diffusion coefficient of the same solute decreases rapidly with lowering the temperature, which is responding to the changing trend of the flux (Fig. 5a). Then, with increasing the molecular weight of the solute, the diffusion coefficient (D) turns down, owing to the increasing of the Stokes-Einstein radius of the solute for diffusion (Fig. 5a). As mentioned above, for the similar purpose, a coefficient called thermo-responsive diffusion factor (R D(39/20) ) is defined as the ratio of the diffusion coefficient of the solute at 39 °C to that at 20 °C. When the molecular weight of the solute increases from 1355 to 40000, the value of R D(39/20) undergoes a process of rising from 3.3 to 22.5 first and then falling to 11.25 later (Fig. 5b). 
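The Stokes–Einstein radius invoked above relates a solute's free-solution diffusion coefficient to an effective hydrodynamic size, and the thermo-responsive diffusion factor is simply the ratio of diffusion coefficients at the two temperatures. The sketch below illustrates both; the diffusion coefficients are placeholders and the water viscosities are standard handbook values, not numbers taken from the paper.

```python
# Hedged sketch: Stokes-Einstein radius and the thermo-responsive diffusion factor.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein_radius(D_m2_s, temp_K, viscosity_Pa_s):
    """Hydrodynamic radius r = k_B*T / (6*pi*eta*D) for free-solution diffusion."""
    return K_B * temp_K / (6 * math.pi * viscosity_Pa_s * D_m2_s)

# Placeholder diffusion coefficients for one solute at the two temperatures
D_20, D_39 = 2.0e-11, 3.0e-10          # m^2/s (invented values)
eta_20, eta_39 = 1.002e-3, 0.666e-3    # Pa*s, approximate handbook water viscosities

r_39 = stokes_einstein_radius(D_39, 273.15 + 39, eta_39)
R_D = D_39 / D_20                      # thermo-responsive diffusion factor
print(f"r(39 C) ~ {r_39 * 1e9:.2f} nm, R_D(39/20) = {R_D:.1f}")
```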
For VB12, because the molecular size is small, it is easy for the VB12 molecules to permeate through the membrane pores whether the temperature is 39 °C or 20 °C (Supplementary Fig. S3a), and the trans-membrane permeability of VB12 is affected by the size change of the diffusion channels only to a certain extent. However, for the 4000 and 10000 (MW) FITC-dextrans at 20 °C, as the molecular size is larger than the "closed" pore size, the molecules are excluded by the membranes; at 39 °C, these molecules are smaller than the "open" pore size, so the solute molecules can permeate easily through the membranes (Supplementary Fig. S3b). As a result, the value of R D(39/20) increases remarkably. In the case of the 40000 FITC-dextran, with the largest molecular size in this study, even at 39 °C the permeation of the solute molecules is still affected by the size exclusion of the membrane pores (Supplementary Fig. S3c), because the molecular size is so large that the 40000 FITC-dextran molecules cannot permeate through the membrane easily. For the solute molecule with a molecular weight of 10000 g/mol, the ratio of the diffusion coefficient of the solute at 39 °C to that at 20 °C is as high as 22.5, which verifies that the fabricated membranes are "smart" and highly promising for separations and controlled release.

Mechanical properties
Our smart gating membranes with enough in situ self-assembled PNIPAM nanogels as thermo-responsive gates exhibit excellent mechanical properties (Fig. 6). For the condition of a 20 min exposure time and vapor at 25 °C and 70% RH, our smart gating membranes prepared via VIPS have much better mechanical properties than the membranes prepared via LIPS (Fig. 6a,b). To compare the mechanical properties of our membranes prepared via VIPS with those prepared via LIPS, PES membranes with equal contents of nanogels are prepared via LIPS as references. Although the thicknesses of the cast solution films are all 200 μm, the thicknesses of the dried membranes prepared via VIPS are 64 ± 4 μm while those prepared via LIPS are 98 ± 5 μm. Because the membranes prepared via VIPS have symmetric porous structures 24 while those prepared via LIPS have asymmetric porous structures 23, the membranes prepared via VIPS are denser throughout the whole membrane thickness than those prepared via LIPS. As a result, the membranes prepared via VIPS are mechanically stronger than those prepared via LIPS. For the membranes prepared via LIPS, no matter how the nanogel content varies, the largest tensile strain at break is less than 8.0% and the largest tensile stress at break (σb) is smaller than 3.8 MPa; however, for our membranes prepared via VIPS, the tensile strains at break are all about 23.0% and the tensile strengths at break are all higher than 9.4 MPa (Fig. 6a,b). More importantly and surprisingly, with increasing nanogel content from 4.25% to 17.00%, the tensile strengths at break of our membranes prepared via VIPS increase from 9.4 MPa to 13.0 MPa (Fig. 6b). The mechanical properties of membranes prepared with different VIPS parameters are also tested. The membranes prepared with an exposure time of 20 min have higher tensile strengths at break and tensile strains at break than those prepared with an exposure time of 2 min (Fig. 6c,d).
Among the membranes prepared via VIPS, those prepared with a 2 min exposure at 15 °C and 70% RH have a typical LIPS-like structure and, correspondingly, mechanical properties like those of the membranes prepared by LIPS (Fig. 6a-d). The membranes prepared at higher RH with the limited exposure time of 2 min have better mechanical properties (Fig. 6c,d), which implies that a higher RH speeds up the process of pore coarsening. It should be noted that with enough exposure time and a fixed nanogel content, the mechanical properties of the membranes vary little (Fig. 6c,d). As mentioned above, our membranes prepared via VIPS have symmetric cellular-like structures. For cellular solids, the mechanical properties are mainly governed by the most important structural parameter, the relative density 28. The relative density of a cellular solid is the ratio of the density of the cellular material (i.e., the bulk density ρ*) to that of the solid of which it is made (i.e., the true density ρs). The smaller the relative density (ρ*/ρs), the larger the porosity of the porous membrane. With increasing nanogel content from 4.25% to 17.00%, the ρ*/ρs value of the membrane prepared via VIPS increases from 0.26 to 0.33 (Fig. 6e). The results indicate that, by adding more nanogels, although the membrane pores are enlarged and become more interconnected with each other (Fig. 2c-f), the membrane porosity decreases slightly, which means the pore walls become denser. As a result, the tensile strength at break of the membrane prepared via VIPS increases with increasing nanogel content from 4.25% to 17.00%. Because of the open-cellular structures of the membranes prepared via VIPS with the addition of enough nanogels, the tensile strength of the membrane can be calculated from the relative density using the open-cell foam model of ref. 28, in which σb is the tensile strength of the membrane and σys is the yield strength of the pore wall material (PES); an illustrative sketch of this scaling is given below. The calculated tensile strengths of the membranes prepared via VIPS with different nanogel contents fit well with the experimental data (Fig. 6f). Both the experimental and calculated results exhibit an important and exciting phenomenon: the mechanical properties of our smart gating membranes with in situ self-assembled nanogels as functional gates are enhanced with increasing nanogel content. That is, all the flux, responsive and mechanical properties of our smart gating membranes can be simultaneously enhanced without any conflict.
[Figure 6 caption, partial: (a) Typical tensile stress versus tensile strain curves of membranes, in which "V-0" and "L-0" stand for membranes prepared by VIPS and LIPS, respectively, with a nanogel content of 0%; "V-1" and "L-1" for a nanogel content of 4.25%; "V-2" and "L-2" for 8.50%; "V-3" and "L-3" for 12.75%; and "V-4" and "L-4" for 17.00%.]
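To connect the relative-density data above to the reported strengths, the sketch below assumes the standard Gibson–Ashby scaling for open-cell foams, in which strength varies with the 3/2 power of relative density; the unknown prefactor cancels when two membranes are compared, so only the reported ρ*/ρs and σb values are needed. This is an assumed textbook scaling for illustration, not necessarily the exact equation used in the paper.

```python
# Hedged sketch: ratio check with the Gibson-Ashby open-cell scaling,
# sigma_b ~ C * sigma_ys * (rho*/rho_s)**1.5 (the prefactor C cancels in a ratio).
def strength_ratio(rel_density_a, rel_density_b, exponent=1.5):
    return (rel_density_a / rel_density_b) ** exponent

predicted = strength_ratio(0.33, 0.26)   # 17.00% vs 4.25% nanogel content
measured = 13.0 / 9.4                    # reported tensile strengths at break (MPa)
print(f"predicted ratio ~ {predicted:.2f}, measured ratio ~ {measured:.2f}")
# ~1.43 vs ~1.38: the measured strength increase is consistent with a
# 3/2-power dependence on relative density.
```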
With the proposed unique architecture, factors conducive to improving all the flux, responsive and mechanical properties are simultaneously introduced into the smart gating membranes. The flux, responsive and mechanical properties of the smart gating membranes can be easily customized by adjusting the nanogel content, and the effects of preparation conditions on the structures and performances of the composite membranes are systematically investigated. By using a proper recipe with enough nanogel content, a smart gating membrane can simultaneously possess high flux, significant response and strong mechanical properties. Such a combination of high flux, significant responsive characteristics and strong mechanical properties, along with an easy one-step method of fabrication, makes our smart gating membranes ideal candidates for further investigations and applications. The strategy of self-assembling nanogels in situ on the pore surfaces via VIPS and the simple fabrication procedure presented here circumvent the difficulties in simultaneously improving the flux, responsive and mechanical properties of smart gating membranes. Due to the excellent concurrent flux, responsive and mechanical properties, the smart gating membranes with in situ self-assembled responsive nanogels as functional gates are expected to provide better performances in myriad applications including water treatment, controlled release, chemical/biological separations, chemical sensors, chemical valves and tissue engineering, and may open up new fields of application for smart gating membranes. Furthermore, the proposed novel strategy can be used to fabricate various kinds of functional porous materials with pores immobilized or modified by various kinds of responsive or even non-responsive nanoparticles for numerous applications, including smart gating membranes 5,6 , anti-fouling membranes 5,29 , and functional cellular solids 28 or foams 30 , which might be a fertile area of research. Methods. The monomer solution was bubbled with nitrogen gas for 30 min to remove the dissolved oxygen, and then was kept in a water bath at 70 °C for precipitation polymerization for 4 h. After the reaction, the PNIPAM nanogels were thoroughly purified by repeated centrifugation at 8000 rpm and redispersion in deionized water to remove the residual unreacted components. Finally, the nanogels were freeze-dried at −35 °C for 48 h. The morphology of the PNIPAM nanogels in the dried state was observed by field-emission scanning electron microscopy (FESEM, JSM-7500F, JEOL). The thermo-responsive hydrodynamic diameters of the nanogels in water at temperatures ranging from 20 to 45 °C were measured by dynamic light scattering (DLS, Zetasizer Nano ZS90, Malvern) equipped with a He-Ne light source (λ = 633 nm, 4.0 mW). Before each data collection, the highly diluted PNIPAM nanogel dispersion in DI water was allowed to equilibrate for 20 min at each predetermined temperature. The morphology of the nanogels dyed with Polyfluor 570 in water at room temperature was observed by CLSM (SP5-II, Leica), with the red fluorescence channel excited at 543 nm. Smart gating membranes with self-assembled responsive nanogels as functional gates were fabricated from a nanogel-containing membrane-forming solution via the vapor-induced phase separation (VIPS) approach. The membrane-forming solution was 1-methyl-2-pyrrolidinone (NMP) containing 17.5 wt% polyethersulfone (PES, Mw = 40,000, Changchun Jilin Special Plastics). 
To add the nanogels into the membrane-forming solution, a certain amount of freeze-dried PNIPAM nanogels was first dispersed in NMP, and then PES was added. The nanogel contents in the membranes, defined as the blending mass ratios of PNIPAM nanogels to PES, were varied as 0%, 4.25%, 8.50%, 12.75% and 17.00%. The nanogel-containing membrane-forming solution was cast onto a glass plate with a thickness of 200 μm. The casting was performed inside a humidity chamber (TH-PE-100, JEIO) maintained at 15 °C and 70% relative humidity, 25 °C and 70% relative humidity, or 25 °C and 90% relative humidity. The cast film was kept in the humidity chamber for 2 min or 20 min and then immersed in a water bath at 22 °C to form a flat membrane. As references, membranes were also prepared with the same recipes via the liquid-induced phase separation (LIPS) approach, in which the cast film was immediately immersed into a water bath at 22 °C and left in water for 20 min. The microstructures of the membranes were investigated by FESEM (JSM-7500F, JEOL). To observe the cross-sections, membrane samples were immersed in liquid nitrogen for a sufficient time, fractured mechanically, and stuck to the sample holder. All the samples were sputter-coated with gold for 60 s before observation. Thermo-responsive gating property testing. To investigate the thermo-responsive gating characteristics of the prepared membranes, trans-membrane water fluxes at different temperatures were studied first. The water flux experiments were carried out using a filtration apparatus under a constant trans-membrane pressure of 0.2 MPa. Each membrane had been immersed in DI water for over 24 h before testing the water flux. The diameter of the effective membrane area for water permeation was 40 mm. The test temperature range was chosen from 20 °C to 39 °C. In the experiments, a thermostatic unit was used to control the temperatures of the membranes and the feed water. The water flux of each membrane at each temperature was measured more than five times to obtain an average value. Mechanical property testing. The mechanical properties of the membranes were tested with a commercial test machine (EZ-LX, Shimadzu). The membrane samples were cut into dumbbell shapes of standardized JIS-K6251-7 sizes (length 35 mm, width 2 mm, and gauge length 12 mm) with a sample-cutting machine (Dumbbell). Both ends of the dumbbell-shaped samples were clamped and stretched at a constant velocity of 20 mm min −1 . At least five samples were tested for each membrane. Trans-membrane diffusional permeation experiments. Trans-membrane diffusional permeation experiments were carried out on composite membranes prepared with an exposure time of 20 min and vapor at 25 °C and 70% RH. The environmental temperature was varied between 20 °C and 39 °C. VB 12 with a molecular weight of 1355 (g/mol) and FITC-dextran molecules with number-averaged molecular weights of 4000, 10000 and 40000 (g/mol) were chosen as the solute molecules. The feed solution was prepared by dissolving VB 12 and FITC-dextran molecules in DI water at concentrations of 0.4 mmol L −1 (VB 12 ) and 50 mg L −1 (FITC-dextrans). The diffusional permeation experiments were carried out using a standard side-by-side diffusion cell with a thermostatic unit for controlling the environmental temperature. Each test membrane was immersed in the permeant solution overnight before beginning the diffusion experiments. 
The concentration of VB 12 in the receptor cell was measured at regular intervals by using a UV-vis spectrometer (UV-1700, Shimadzu) at a wavelength of 361 nm. The concentration of FITC-dextran in the receptor cell was measured at regular intervals by using a fluorescence spectrophotometer (RF5301PC, Shimadzu), with excitation and emission wavelengths of 480 and 520 nm, respectively. Each solute concentration was measured three times at each interval, and the arithmetic mean value was calculated. The diffusivity of the solute across the membrane, D, can be calculated using an equation derived from Fick's first law of diffusion 31 , where C i , C t and C f are the initial, intermediary (at time t), and final concentrations of the solute in the receptor cell; V 1 and V 2 are the volumes of the liquids in the donor cell and in the receptor cell, respectively; L represents the thickness of the dry membrane; and A is the effective diffusion area of the membrane.
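The display form of the diffusivity equation was not preserved in the text above; the sketch below therefore uses the standard integrated form of Fick's first law for a two-compartment (side-by-side) diffusion cell, which matches the variables defined above but should be checked against ref. 31. The numerical inputs are hypothetical placeholders, not values from the experiments.

```python
import math

def diffusivity(Ci, Ct, Cf, t, V1, V2, L, A):
    """
    Assumed standard side-by-side diffusion cell expression (integrated Fick's first law):
        D = [L * V1 * V2 / (A * t * (V1 + V2))] * ln[(Cf - Ci) / (Cf - Ct)]
    Ci, Ct, Cf : initial, time-t and final (equilibrium) receptor-cell concentrations
    V1, V2     : donor- and receptor-cell liquid volumes
    L          : dry membrane thickness;  A : effective diffusion area
    """
    return (L * V1 * V2) / (A * t * (V1 + V2)) * math.log((Cf - Ci) / (Cf - Ct))

# Hypothetical example values in SI units (NOT from the paper):
D = diffusivity(Ci=0.0, Ct=0.05, Cf=0.2,        # mol m^-3
                t=3600.0,                        # s
                V1=40e-6, V2=40e-6,              # m^3
                L=64e-6, A=math.pi * 0.02**2)    # m, m^2
print(f"D = {D:.2e} m^2 s^-1")
```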
2016-05-04T20:20:58.661Z
2015-10-05T00:00:00.000
{ "year": 2015, "sha1": "5c51f18bf8c9876e64474ddcdc408cc979a7b371", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/srep14708.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5c51f18bf8c9876e64474ddcdc408cc979a7b371", "s2fieldsofstudy": [ "Engineering", "Biology" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
15527317
pes2o/s2orc
v3-fos-license
New Gauge Supergravity in Seven and Eleven Dimensions Locally supersymmetric systems in odd dimensions whose Lagrangians are Chern-Simons forms for supersymmetric extensions of anti-de Sitter gravity are discussed. The construction is illustrated for D=7 and 11. In seven dimensions the theory is an N=2 supergravity whose fields are the vielbein ($e_{\mu}^{a}$), the spin connection ($\omega_{\mu}^{ab}$), two gravitini ($\psi_{\mu}^{i}$) and an $sp(2)$ gauge connection ($a_{\mu j}^{i}$). These fields form a connection for $osp(2|8)$. In eleven dimensions the theory is an N=1 supergravity containing, apart from $e_{\mu}^{a}$ and $\omega_{\mu}^{ab}$, one gravitino $\psi_{\mu}$, and a totally antisymmetric fifth rank Lorentz tensor one-form, $b_{\mu}^{abcde}$. These fields form a connection for $osp(32|1)$. The actions are by construction invariant under local supersymmetry and the algebra closes off shell without requiring auxiliary fields. The $N=2^{[D/2]}$-theory can be shown to have nonnegative energy around an AdS background, which is a classical solution that saturates the Bogomolnyi bound obtained from the superalgebra. Locally supersymmetric systems in odd dimensions whose Lagrangians are Chern-Simons forms for supersymmetric extensions of anti-de Sitter gravity are discussed. The construction is illustrated for D = 7 and 11. In seven dimensions the theory is an N = 2 supergravity whose fields are the vielbein (e a µ ), the spin connection (ω ab µ ), two gravitini (ψ i µ ) and an sp(2) gauge connection (a i µj ). These fields form a connection for osp(2|8). In eleven dimensions the theory is an N = 1 supergravity containing, apart from e a µ and ω ab µ , one gravitino ψµ, and a totally antisymmetric fifth rank Lorentz tensor oneform, b abcde µ . These fields form a connection for osp(32|1). The actions are by construction invariant under local supersymmetry and the algebra closes off shell without requiring auxiliary fields. The N = 2 [D/2] -theory can be shown to have nonnegative energy around an AdS background, which is a classical solution that saturates the Bogomolnyi bound obtained from the superalgebra. PACS numbers: 04.50.+h, 04.65.+e, 11.10.Kk. Introduction.-In recent years, M-Theory has become the preferred description for the underlying structure of string theories [1,2]. However, although many features of M-Theory have been identified, still no action principle for it has been given. Some of the expected features of M-theory are: (i)Its dynamics should somehow exhibit a superalgebra in which the anticommutator of two supersymmetry generators coincides with the AdS superalgebra in 11 dimensions osp(32|1) [3]; (ii)The low energy regime should be described by an eleven dimensional supergravity of new type which should stand on a firm geometric foundation in order to have an off-shell local supersymmetry [4]; (iii) The perturbation expansion for graviton scattering in M-theory has recently led to conjecture that the new supergravity lagrangian should contain higher powers of curvature [5]. Since supersymmetry and geometry are two essential ingredients, most practitioners use the standard 11-dimensional Cremer-Julia-Scherk (CJS) supergravity [6] as a good approximation to M-Theory, in spite of the conflict with points (i) and (iii). In this letter we present a family of supergravity theories which, for D=11, exhibits all of the above features. In spite of its improved ultraviolet behavior, the renormalizability of standard supergravity beyond the first loops has remained elusive. 
It is not completely absurd to speculate that this could be related to the fact that supersymmetry transformations of the dynamical fields form a closed algebra only on shell. An on-shell algebra might seem satisfactory in the sense that the action is nevertheless invariant under local supersymmetry. It is unsatisfactory, however, because it means that the propagating fields neither belong to an irreducible representation of the supergroup nor do they transform as gauge connections. This precludes a fiber-bundle interpretation of the theory, as is the case with standard Yang-Mills gauge theories, where this interpretation is crucial in proving renormalizability. In order to accomodate the supergravity multiplets in tensor representations, it is usually necessary to introduce a host of auxiliary fields . This is a highly nontrivial issue in general and has often remained an unsolvable problem [7,8]. Still, there is a handful of supergravities whose superalgebras close off shell without requiring auxiliary fields: Anti-de Sitter (AdS) in D = 3 [9], and D = 5 [10]; Poincaré in D = 3 [11], and in general for D = 2n − 1 [12]. These are genuine gauge systems for graded Lie algebras and therefore make interesting candidates for renormalizable theories of gravity. Here, we present a family of supergravity theories in 2n − 1 dimensions whose Lagrangians are Chern-Simons (CS) forms related to the n-th Chern character of a supergroup in 2n dimensions [13]. As anticipated in the pioneering work of Ref. [6], and underlined by other authors [14,3], D = 11 supergravity should be related to a gauge system for the group OSp(32|1), a symmetry which is not reflected in the CJS theory. The theory proposed here, for D = 11 turns out to be naturally a gauge system for this group. Gauge Gravity.-Our aim is to construct a locally supersymmetric theory whose generators form a closed off-shell algebra. One way to ensure this by construction is by considering a gauge theory for a graded Lie algebra, that is, one where the structure constants are independent of the fields, and of the field equations. In order to achieve this, we relax two implicit assumptions usually made about the purely gravitational sector: (i) gravitons are described by the Hilbert action, and, (ii) torsion does not contain independently propagating degrees of freedom. The first assumption is historical and dictated by simplicity but in no way justified by need. In fact, for D > 4 the most general action for gravity -generally covariant and with second order field equations for the metric-is a polynomial of degree [D/2] in the curvature, first discussed by Lanczos [15] for D = 5 and, in general, by Lovelock [16,17]. This action contains the same degrees of freedom as the Hilbert action [18] and is the most general low-energy effective theory of gravity derived from string theory [19]. Assumption (ii) is also motivated by simplicity. It means that the spin connection is not an independent field. Elimination of ω a b in favor of the remaining fields, however, spoils the possibility of interpreting the local translational invariance as a gauge symmetry of the action. In other words, the spin connection and vielbein -the soldering between the base manifold and the tangent space-cannot be identified as components of the connection for local Lorentz rotations and translations, respectively, as is the case in D = 3. Thus, the Einstein-Hilbert theory in D =≥ 4 cannot be formulated purely as a gauge theory on a fiber bundle. 
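For reference, the Lanczos-Lovelock action referred to above is commonly written in first-order (vielbein) form as a sum of dimensionally continued Euler densities; the sketch below follows a standard presentation, and the normalization of the coefficients α_p is convention-dependent rather than taken from this paper.

```latex
% Standard first-order form of the Lanczos-Lovelock action (sketch; normalizations vary):
S_{\mathrm{LL}}[e,\omega] = \int \sum_{p=0}^{[D/2]} \alpha_p\, L_p , \qquad
L_p = \epsilon_{a_1 \cdots a_D}\,
      R^{a_1 a_2} \wedge \cdots \wedge R^{a_{2p-1} a_{2p}} \wedge
      e^{a_{2p+1}} \wedge \cdots \wedge e^{a_D},
% with R^{ab} = d\omega^{ab} + \omega^{a}{}_{c}\wedge\omega^{cb} the Lorentz curvature
% two-form and e^{a} the vielbein one-form.
```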
For a generic gravitational action in D > 4, δS δω = 0 cannot be solved for ω in terms of e. This implies that even classically, ω and e should be assumed as dynamically independent fields and torsion necessarily contains many propagating degrees of freedom [20]. These degrees of freedom are described by the contorsion tensor, k ab µ := ω ab µ −ω ab µ (e). Thus, the restriction to theories with nonpropagating torsion would be a severe truncation in general. The LL theory, which includes as a particular case the EH system, is by construction invariant under Lorentz rotations in the tangent space. For D = 2n − 1, however, there is a special choice of the coefficients in the LL lagrangian which extends this invariance into an AdS symmetry [21]: Here wedge product is understood and the subscript "G" denotes a Lagrangian for torsion-free gravity. The constant l has dimensions of length and its purpose is to render the action dimensionless allowing the interpretation of ω and e as components of the AdS connection [22] where A, B = 1, ...D + 1. The Lagrangian (1) is an AdS-CS form in the sense that its exterior derivative is the Euler class, where R AB is the AdS curvature and κ is quantized [23] (in the following we will set κ = l = 1). Although torsion in general appears in the field equations, it has not been necessary to introduce it explicitly in the action until now. As shown in [24], the LL actions can be extended to allow for torsion so that for each dimension there is a unique set of possible additional terms to be considered. Like in the pure LL theories, the most general action contains a large number of arbitrary constants, and again as in the LL case, their number can be reduced to two if the Lorentz invariance is enlarged to AdS symmetry. The key point, however, is that torsion terms are necessary in general in order to further extend the AdS symmetry of the action into a supersymmetry for D > 3. The reason for this is analysed in the discussion. Let us now briefly examine how torsion appears in an AdS-invariant theory. The idea is best understood in 2+1 dimensions. For D = 3, apart from the standard action (3), there is a second CS form for the AdS group. The exterior derivative of this "exotic" lagrangian is the Pontryagin form in 4 dimensions (2 nd Chern character for SO (4)). This alternative CS form is, [25] where dL * 2n−1 (ω a b ) = T r[(R a b ) n ] is the n-th Chern character (see, e.g. [26]). Similar exotic actions, associated to the Chern characters in 4k dimensions, exist in D = 4k − 1. Since the (2n + 1)-th gravitational Chern characters vanish, there are no exotic actions in D = 4k + 1. For D = 4k − 1, the number of possible exotic forms grows as the partitions of k. As we shall see below, we will be interested in one particular combination of these forms, which in the spinorial representation of SO(4k) can be written as [27] dL AdS It is important to note that in this Lagrangian, as well as in (4), torsion appears explicitly. For example, in seven dimensions one finds The CS lagrangian (5) represents a particular choice of coefficients so that the local Lorentz symmetry is enlarged to AdS invariance. In general, a Chern-Simons D-form is defined so that its exterior derivative is an invariant homogeneous polynomial of degree n in the curvature, that is, a characteristic class [28]. In the examples above, (3) is the CS form for the Euler characteristic class 2n-form, while the exotic lagrangians are related to different combinations of Chern characters. 
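The defining property of a Chern-Simons (2n-1)-form and its standard explicit (transgression) expression, on which the generic construction described next relies, are recalled here as a sketch; the overall normalization follows common conventions and may differ from the one used in the paper.

```latex
% Defining property and standard transgression formula for a Chern-Simons form (sketch):
d\, L^{\mathrm{CS}}_{2n-1}(A) = \big\langle F^{\,n} \big\rangle , \qquad
F = dA + A \wedge A ,
\qquad
L^{\mathrm{CS}}_{2n-1}(A) = n \int_{0}^{1} dt\,
\big\langle A\, (\, t\, dA + t^{2} A \wedge A \,)^{\, n-1} \big\rangle ,
% where <...> denotes an invariant symmetric multilinear form (e.g. a trace or
% supertrace) on the gauge (super)algebra.
```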
Thus, a generic CS action in 2n − 1 dimensions for a Lie algebra g can be written as where < > stands for a multilinear function in the Lie algebra g, invariant under cyclic permutations such as Tr or STr. The problem of finding all possible CS actions for a given group is equivalent to finding all possible invariant tensors of rank n in the algebra. This is in general an open problem, and for the groups relevant for supergravity discussed below (e.g., OSp(32|1)) the number of invariant tensors can be rather large. Most of these invariants, however, give rise to bizarre lagrangians and the real problem is to find the appropriate invariant that describes a sensible theory. The R.H.S. of (5) is a particular form of (6) in which < > is the ordinary trace over spinor indices. Other possibilities of the form <F n−p ><F p >, are not used in our construction as they would not lead to the minimal supersymmetric extensions of AdS containing the Hilbert action. In the supergravity theories discussed below, the gravitational sector is given by ± 1 2 n L AdS G 2n−1 − 1 2 L AdS T 2n−1 [29]. Gauge Supergravity.-The supersymmetric extension of a theory invariant under AdS requires new bosonic generators to close the superalgebra [14]. In standard supergravities, Lorentz tensors of rank higher than two were usually excluded from the superalgebra on the grounds that elementary particle states of spin higher than 2 are inconsistent [30]. However, this does not rule out the relevance of those tensor generators in theories of extended objects [31]. In [12], we discussed family of theories in odd dimensions whose algebra contains the Poincaré generators . The anticommutator of the supersymmetry generators is a combination of a translation plus a tensorial "central" extension, This algebra gives rise to supergravity theories with offshell Poincaré superalgebra. The existence of these theories suggests that there should be similar supergravities based on the AdS symmetry. It is our purpose here to present these theories. Superalgebra and Connection.-The smallest superalgebra containing the AdS algebra in the bosonic sector is found following the same approach as in [14], but lifting the restriction of N = 1 [27]. The result, for D > 3 is: In each of these cases, m = 2 [D/2] and the connection takes the form The generators J ab , J a span the AdS algebra, Q i α generate (extended) supersymmetry transformations, and [r] denotes a set of r antisymmetrized Lorentz indices. The Q ′ s transform as vectors under the action of M ij and as spinors under the Lorentz group. Finally, the Z's complete the extension of AdS into the larger algebras so(m), sp(m) or su(m). In (8)ψ i = ψ T j Cu ji (ψ i = ψ † j Cu ji for D = 4k + 1), where C and u are given in the table above. These algebras admit (m + N ) × (m + N ) matrix representations [32], where the J and Z have entries in the m × m block, the M ij 's in the N × N block, while the fermionic generators Q have entries in the complementary off-diagonal blocks. Under a gauge transformation, A transforms by δA= ∇λ, where ∇ is the covariant derivative for the connection A. In particular, under a supersymmetry transformation, λ =ǭ i Q i −Q i ǫ i , and where D is the covariant derivative on the bosonic connection, The smallest AdS superalgebra in seven dimensions is osp(2|8). The connection (8) where M ij are the generators of sp(2). 
In the representation given above, the bracket < > is the supertrace and, in terms of the component fields appearing in the connection, the CS form is Here the fermionic Lagrangian is where f i j = da i j +a i k a k j , and R = 1 4 (R ab +e a e b )Γ ab + 1 2 T a Γ a are the sp(2) and so(8) curvatures, respectively. The supersymmetry transformations (9) read In this case, the smallest AdS superalgebra is osp(32|1) and the connection is A = 1 2 ω ab J ab + e a J a + (Ω) + L F (Ω, ψ), where Ω ≡ 1 2 (e a Γ a + 1 2 ω ab Γ ab + 1 5! b abcde Γ abcde ) is an sp(32) connection. The bosonic part of (11) can be written as where R = dΩ + Ω 2 is the sp(32) curvature. The supersymmetry transformations (9) read δe a = 1 8ǭ Γ a ψ δω ab = − 1 8ǭ Γ ab ψ δψ = Dǫ δb abcde = 1 8ǭ Γ abcde ψ. Discussion.-The supergravities presented here have two distinctive features: The fundamental field is always the connection A and, in their simplest form, these are pure CS systems (matter couplings are discussed below). As a result, these theories possess a larger gravitational sector, including propagating spin connection. Contrary to what one could expect, the geometrical interpretation is quite clear, the field structure is simple and, in contrast to the standard cases, the supersymmetry transformations close off shell without auxiliary fields. A. Torsion. It can be observed that the torsion lagrangians (L T )are odd while the torsion-free terms (L G ) are even under spacetime reflections. The minimal supersymmetric extension of the AdS group in 4k − 1 dimensions requires using chiral spinors of SO(4k) [33]. This in turn implies that the gravitational action has no definite parity, but requires the combination of L T and L G as described above. In D = 4k + 1 this issue doesn't arise due to the vanishing of the torsion invariants, allowing constructing a supergravity theory based on L G only, as in [10]. If one tries to exclude torsion terms in 4k −1 dimensions, one is forced to allow both chiralities for SO(4k) duplicating the field content, and the resulting theory has two copies of the same system [34]. B. Field content and extensions with N>1.The field content compares with that of the standard supergravities in D = 7, 11 as follows: D Standard supergravity New supergravity 7 e a µ A [3] ψ αi µ a i µj λ α φ e a µ ω ab µ ψ αi µ a i µj 11 e a µ A [3] ψ α µ e a µ ω ab µ ψ α µ b abcde µ Standard seven-dimensional supergravity is an N = 2 theory (its maximal extension is N=4), whose gravitational sector is given by Einstein-Hilbert gravity with cosmological constant and with a background invariant under OSp(2|8) [35,36]. Standard eleven-dimensional supergravity [6] is an N=1 supersymmetric extension of Einstein-Hilbert gravity that cannot accomodate a cosmological constant [37]. An N > 1 extension of this theory is not known. In the case presented here, the extensions to larger N are straighforward in any dimension. In D = 7, the index i is allowed to run from 2 to 2s, and the Lagrangian is a CS form for osp(2s|8). In D = 11, one must include an internal so(N ) field and the Lagrangian is an osp(32|N ) CS form [27]. The cosmological constant is necessarily nonzero in all cases. C. Spectrum. The stability and positivity of the energy for the solutions of these theories is a highly nontrivial problem. As shown in Ref. [20], the number of degrees of freedom of bosonic CS systems for D ≥ 5 is not constant throughout phase space and different regions can have radically different dynamical content. 
However, in a region where the rank of the symplectic form is maximal the theory behaves as a normal gauge system, and this condition is stable under perturbations. As it is shown in [38], there exists a nontrivial extension of the AdS superalgebra with one abelian generator for which antide Sitter space without matter fields is a background of maximal rank, and the gauge superalgebra is realized in the Dirac brackets. For example, for D = 11 and N = 32, the only nonvanishing anticommutator reads where M ij are the generators of SO(32) internal group. On this background the D = 11 theory has 2 12 fermionic and 2 12 − 1 bosonic degrees of freedom. The (super)charges obey the same algebra with a central extension. This fact ensures a lower bound for the mass as a function of the other bosonic charges [39]. D. Classical solutions. The field equations for these theories in terms of the Lorentz components (ω, e, b, a, ψ) are spread-out expressions for <F n−1 G (a) >= 0, where G (a) are the generators of the superalgebra. It is rather easy to verify that in all these theories the anti-de Sitter space is a classical solution , and that for ψ = b = a = 0 there exist spherically symmetric, asymptotically AdS standard [22], as well as topological [40] black holes. In the extreme case these black holes can be shown to be BPS states. E. Matter couplings. It is possible to introduce a minimal couplings to matter of the form A·J. For D = 11, the matter content is that of a theory with (super-) 0, 2, and 5-branes, whose respective worldhistories couple to the spin connection and the b fields. F. Standard SUGRA. Some sector of these theories might be related to the standard supergravities if one identifies the totally antisymmetric part of the contorsion tensor in a coordinate basis, k µνλ , with the abelian 3-form, A [3] . In 11 dimensions one could also identify the antisymmetric part of b with an abelian 6form A [6] , whose exterior derivative, dA [6] , is the dual of F [4] = dA [3] . Hence, in D = 11 the CS theory possibly contains the standard supergravity as well as some kind of dual version of it.
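As a schematic reminder of the superalgebra underlying the eleven-dimensional theory discussed above, the anticommutator of two osp(32|1) supercharges decomposes into 528 bosonic generators; the expression below is the standard schematic form, with charge-conjugation and normalization conventions left implicit, and is not copied from the paper.

```latex
% Schematic osp(32|1) anticommutator (conventions and normalizations omitted):
\{ Q_{\alpha}, Q_{\beta} \} \;\sim\;
(C\Gamma^{a})_{\alpha\beta}\, J_{a}
+ (C\Gamma^{ab})_{\alpha\beta}\, J_{ab}
+ (C\Gamma^{abcde})_{\alpha\beta}\, Z_{abcde},
% i.e. 11 + 55 + 462 = 528 = 32 \cdot 33 / 2 symmetric components.
```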
2014-10-01T00:00:00.000Z
1997-10-23T00:00:00.000
{ "year": 1997, "sha1": "647f1b89c3b3b4c081ea78b2797322c794645751", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-th/9710180", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e73b7553cfcb50a9bcefe9f8a7cab3d6350581aa", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
220883951
pes2o/s2orc
v3-fos-license
Gliomatosis cerebri mimicking diffuse demyelinating disease: Case Report Gliomatosis Cerebri (GC) is a rare and rapidly progressive pattern of growth of diffusely infiltrating gliomas with limited treatment options. Imaging findings are usually nonspecific and can mimic other neurologic disorders, including demyelination, encephalitis, and multicentric/multifocal glioma. In this report, we describe a case of a 53-year-old female who presented with left hemiparesis, global headache, and gait ataxia, with imaging features initially thought to represent demyelinating disease. A combination of conventional and advanced imaging findings with brain biopsy was utilized to make the diagnosis of GC. In patients with widespread abnormalities on brain imaging, GC should strongly be considered when cortical expansion, involvement of the septum pellucidum and elevated myoinositol levels are observed and the clinical and laboratory findings are atypical for demyelination or infection. Considering GC in such cases can facilitate early biopsy with prompt diagnosis and avoid delay in appropriate treatment. Introduction Gliomatosis Cerebri (GC) is a rare and universally fatal pattern of growth of diffusely infiltrating glioma [1] . Prognosis is poor, with a 14-month median survival time from diagnosis, despite aggressive treatment. Characteristic imaging findings include confluent areas of abnormal signal on magnetic resonance imaging (MRI) that enlarge affected structures and involve at least 3 contiguous lobes with relative preservation of underlying brain architecture [1] . While the differential diagnosis can be broad, GC can more confidently be suggested over demyelination and inflammatory processes when certain features on MRI, MR perfusion and spectroscopy are identified in the appropriate clinical setting. Case report and imaging findings A 53-year-old woman without significant past medical history presented to the emergency department with left-sided hemiparesis of 3 months' duration accompanied by headache and ataxia. Complete hematologic workup was within normal limits during the hospital stay, with the exception of an isolated episode of leukocytosis to 15.9, which resolved uneventfully. Lumbar puncture and multiple cerebrospinal fluid (CSF) studies were performed, all of which were negative, including cell count, protein, cytology, flow cytometry, myelin basic protein, oligoclonal bands, angiotensin-converting enzyme, herpes simplex virus and Epstein-Barr virus polymerase chain reaction (PCR), and titers for Lyme disease and venereal disease research laboratory testing. Noncontrast computed tomography (CT) of the head on presentation (Fig. 1) demonstrated ill-defined areas of hypoattenuation in the right high parietal and left basal frontal white matter, suspicious for vasogenic edema, with generalized narrowing of the fissures and sulci. The basilar cisterns were patent with no midline shift or evidence of herniation. CT angiography of the head and neck revealed no stenosis, aneurysm, or vascular malformation. Contrast-enhanced MRI of the brain demonstrated multiple confluent areas of hyperintense signal on fluid-attenuated inversion recovery (FLAIR) and T2-weighted images (T2WI) in the white matter of the frontal and parietal lobes bilaterally and the overlying right parietal and inferior left frontal lobe cortex, with extension into the corpus callosum, septum pellucidum, and fornix and mild expansion of these structures (Fig. 2). Abnormal FLAIR and T2 signal also extended along the corticospinal tracts into the internal capsules and brainstem bilaterally. 
Postcontrast T1WI demonstrated no abnormal enhancement (Fig. 3). MRI of the whole spine revealed no associated cord lesions. Subsequent MR spectroscopy (Fig. 4) within an area of abnormal signal in the right high parietal deep white matter revealed elevated choline (Cho) and markedly increased myoinositol peaks, decreased N-acetylaspartate (NAA), and significantly decreased NAA/creatine (Cr) and NAA/Cho ratios of approximately 0.8 (normal NAA/Cr > 1.6 and NAA/Cho > 1.2; Fig. 4). The combination of neurocognitive deficits and extensive FLAIR hyperintensities in the cerebral white matter and corpus callosum initially suggested a demyelinating disease, such as multiple sclerosis or progressive multifocal leukoencephalopathy. Cortical involvement and expansion of affected structures in this case also raised the possibility of an encephalitis or primary brain tumor. Absence of leukocytosis, fever, HIV, or other immunocompromised status and normal serologic and CSF studies lowered the suspicion for MS, PML, and encephalitis. Diffusely infiltrating tumor was then thought to be the most likely diagnosis; lack of enhancement, markedly elevated myoinositol on MR spectroscopy and widespread involvement of the brain pointed towards GC or low-grade multicentric or multifocal glioma. As such, neurosurgery was consulted and performed a stereotactic brain biopsy of the left frontal lobe 3 weeks after initial presentation. Histopathological studies revealed hypercellular white matter with infiltrating astroglial tumor cells that were diffusely immunopositive for glial fibrillary acidic protein. Genetic tumor assay was negative for isocitrate dehydrogenase type 1 and 2 mutations, and positive for a missense mutation of the phosphatidylinositol 3-kinase catalytic subunit (PIK3CA). The invading tumor cells exhibited minimal pleomorphism with no mitosis, vascular hyperplasia, or necrosis observed. The Ki-67 labeling index was lower than 5%. Tissue was noted to have preserved underlying cytoarchitecture despite tumor cell invasion. These histopathologic findings and genetic markers, combined with the morphologic and MR spectroscopy features of our case, were consistent with diffuse low-grade glioma (World Health Organization grade II) exhibiting a GC pattern of growth. Radiation and medical oncology planned an outpatient treatment regimen consisting of whole-brain radiation therapy with adjuvant procarbazine, lomustine, and vincristine. Unfortunately, the patient was lost to follow-up 1 month after diagnosis. Discussion and literature review As per the World Health Organization, GC is classified as a type of growth pattern within the category of a diffuse glioma involving at least 3 contiguous lobes [1] . GC has conventionally been classified as either primary or secondary. Primary GC arises de novo and is further classified into 2 types; type I is a diffuse neoplastic growth without a clear solid tumor component, whereas type II includes an obvious tumor mass in addition to its diffuse component [2] . Secondary GC arises from malignant transformation of a previously diagnosed glioma and is associated with prior radiation or antiangiogenic therapy [2] . GC is a very rare entity with an overall annual incidence rate of 0.15 cases per million; it comprises roughly 1/400 of all glial tumors, with a slight male predominance (M/F 1.4) [2] . Prognosis is poor, with 1- and 5-year overall survival rates of 50% and 18%, respectively, and a median survival time of 14.5 months [3] . 
Negative prognostic factors include increasing age and rural residence, and positive prognostic factors include tumor location restricted to the cerebral hemispheres [2] . Clinically, GC patients often complain of a variety of symptoms due to the multiple structures affected. A large case series by Georgakis et al. of 1648 patients found that seizures, associated with temporal lobe involvement, were present in roughly half of all patients; headache, associated with increased ICP, in 36% of patients; cognitive decline, associated with widespread disease, in 32%; focal motor deficits, associated with motor cortex involvement, in 32%; and gait abnormalities, associated with cerebellar involvement, in 15% [4] . Rare, but important, complaints include blurred vision due to tumor involvement of the optic nerves and pathways, cranial nerve palsies via involvement of the brainstem, and atypical parkinsonian syndromes via involvement of the basal ganglia [4,5] . The clinical exam findings and imaging/anatomical correlates of this case include left-sided weakness corresponding to involvement of the right paracentral lobule and corticospinal tract, and global headache that was most likely related to increased ICP, as evidenced by diffuse effacement of the sulci and fissures. The molecular changes associated with GC are still being described in the literature. While negative in our case, a mutated isocitrate dehydrogenase gene is associated with a more favorable prognosis in gliomas across all histologic grades and subtypes and has been found in roughly 50% of GC cases [6] . Molecular genetic analysis of our patient's biopsy showed a missense mutation of PIK3CA, which is associated with tumorigenesis and is seen in most solid human cancers and some overgrowth syndromes [7] . The Ki-67 labeling index, a marker of tumor proliferation that is elevated in high-grade gliomas, was lower than 5%, consistent with a low-grade tumor [8] . The invading tumor cells also exhibited minimal pleomorphism with no mitosis, vascular hyperplasia, or necrosis (consistent with a low-grade tumor). Underlying cytoarchitecture was preserved, a finding specific for GC that helps differentiate it from multicentric and multifocal glioma. MRI is the best imaging modality to demonstrate GC due to its greater contrast resolution and ability to more conspicuously depict the specific degree of anatomic involvement. GC typically appears iso- to hypointense on T1-weighted imaging (T1WI), hyperintense on FLAIR and T2WI, and results in expansion of involved structures with absent or minimal enhancement [9] . Areas of enhancement are suspicious for either foci of high-grade glioma or malignant progression when seen on follow-up studies. Bilateral hemispheric involvement is seen in 65% of patients, infratentorial infiltration in 30% of patients, and corpus callosum involvement in roughly half of patients [9] . Given the combination of a patient's clinical presentation and neuroimaging results, the differential diagnosis of GC can be broad. Neurocognitive deficits in conjunction with widespread T2 hyperintensities in the cerebral white matter can be seen in many diseases in addition to GC, including demyelinating diseases (eg, progressive multifocal leukoencephalopathy [PML], multiple sclerosis [MS], or acute disseminated encephalomyelitis [ADEM]), cerebral vasculitis, and leukodystrophies [2] . 
Further, the differential diagnosis of a lesion involving the white matter, corpus callosum and brainstem includes tumors (ie, infiltrating glioma or lymphoma) in addition to demyelination. Several pertinent negative clinical and laboratory features of this case pointed against an encephalitis or PML, including lack of a fever or leukocytosis, lack of HIV or another cause for an immunocompromised state, and normal CSF and serologic studies. Our case demonstrated several morphologic features on MRI favoring GC and tumor in general. While diffuse nonenhancing white matter changes and involvement of the corpus callosum can be seen in both glioma and demyelination, the presence of lesions extending to the cortex with parenchymal expansion, as seen in our case (Fig. 2), is more typical of tumor than of demyelination. Confluent cortical involvement suggests tumor over tumefactive demyelination [10] . Conversely, MS is well known to involve the cortex and cause cognitive impairment, though cortical lesions in MS, whether focal or diffuse, typically result in atrophy rather than expansion [11] and are best seen on double inversion recovery sequences [12] . Tumefactive demyelination could also result in the expansile white matter lesions seen in our case, but typically demonstrates enhancement that often has an incomplete rim-like appearance (the "open ring sign") without significant cortical involvement [11] ; no enhancement was seen in our case despite the degree of abnormal signal and parenchymal expansion. Additionally, the involvement of the septum pellucidum and fornix with expansion seen in our patient is also highly suggestive of intra-axial tumor [13] . MR spectroscopy can also aid diagnosis, grading, and biopsy planning of GC. Brain metabolites commonly evaluated on MR spectroscopy include NAA, choline, and creatine. NAA is a marker of neuronal integrity and is decreased in diseases that adversely affect the brain. Choline is a cell membrane marker and is increased in processes that result in increased cell turnover, including tumor, subacute infarction, or inflammatory diseases. Creatine provides a measure of energy stores. Myoinositol is a molecule found in astrocytes that functions as an osmolyte, is involved in the protein kinase C pathway, and is elevated in low-grade glial tumors and demyelination, especially PML [14] . MR spectroscopy results are commonly analyzed using concentrations of metabolites as well as ratios, such as NAA/Cr, NAA/Cho, and Cho/Cr. GC patients almost uniformly have decreased NAA/Cho and NAA/Cr ratios and commonly have an elevated Cho/Cr ratio within areas of hyperintensity on FLAIR and T2WI [15] . MR spectroscopy of our patient (Fig. 4) demonstrated the expected decreased NAA/Cho and NAA/Cr ratios of approximately 0.8. Myoinositol was found to be significantly elevated, corresponding to diffuse low-grade tumor. The Cho/Cr ratio was not elevated, at 1.04 (normally < 1.5), though the combination of a normal Cho/Cr ratio and elevated myoinositol seen in our case has previously been reported in cases of low-grade gliomas and specifically in GC [16] . High lipid/lactate peaks on MR spectroscopy are observed in areas of necrosis in high-grade tumors and are associated with a poorer prognosis; these peaks were not observed in our patient [17] . MR perfusion (MRP) imaging may be useful in differentiating high-grade gliomas and lymphoma from tumefactive demyelinating lesions [18] . 
However, MRP is less useful in differentiating GC from demyelination, as both usually demonstrate low relative cerebral blood volume due to a relative lack of angiogenesis [19,20] unless areas of higher-grade tumor are present. Treatment options for GC are poor, with no standard trial-based recommendation for the initiation of therapy. Many institutions treat patients in the same way as high-grade glioma patients, with upfront radiation or chemoradiation [9] . However, radiation therapy has shown no overall survival benefit in several trials [3,21] . There is also discussion surrounding the efficacy of the chemotherapeutic approach. NOA-5, the only published prospective clinical trial evaluating the efficacy of primary chemotherapy in GC, showed that initial treatment with procarbazine and lomustine may confer clinical benefit in patients with GC [22] . Conclusion GC is a rare and universally fatal pattern of growth of diffuse gliomas with clinical and imaging findings that are nonspecific and can mimic several other more common conditions, such as demyelination (including MS, PML, and ADEM) and encephalitis, which may result in delayed diagnosis and improper treatment. Several features observed in our case may suggest the diagnosis of GC over demyelination, including diffuse expansion of affected structures without significant enhancement, cortical involvement with expansion, and involvement of the septum pellucidum and fornix. Furthermore, a normal or elevated Cho/Cr ratio with an increased myoinositol concentration on MR spectroscopy is suggestive of low-grade glioma and GC when suspicious findings are also seen on conventional MRI. Although prognosis is poor and GC is universally fatal, there is evidence that treatment confers some short-term survival benefit. It is important, therefore, to include GC in the differential diagnosis in cases of widespread infiltrating brain lesions when the clinical features are atypical or unexpected for more common diseases, and to consider early brain biopsy in those cases.
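As a compact recap of the quantitative MR spectroscopy criteria discussed in this report, the sketch below simply applies the ratio thresholds quoted above to a set of measurements; the function and its output are illustrative only and do not constitute a validated diagnostic rule.

```python
def mrs_findings(naa_cr, naa_cho, cho_cr, myoinositol_elevated):
    """Apply the thresholds quoted in the text: normal NAA/Cr > 1.6,
    normal NAA/Cho > 1.2, normal Cho/Cr < 1.5. Illustrative only."""
    findings = []
    if naa_cr <= 1.6:
        findings.append("decreased NAA/Cr")
    if naa_cho <= 1.2:
        findings.append("decreased NAA/Cho")
    if cho_cr >= 1.5:
        findings.append("elevated Cho/Cr")
    if myoinositol_elevated:
        findings.append("elevated myoinositol")
    return findings

# Values reported for this patient: NAA/Cr and NAA/Cho of about 0.8,
# Cho/Cr of 1.04, and markedly elevated myoinositol.
print(mrs_findings(0.8, 0.8, 1.04, True))
# -> ['decreased NAA/Cr', 'decreased NAA/Cho', 'elevated myoinositol']
```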
2020-07-23T09:09:34.084Z
2020-07-22T00:00:00.000
{ "year": 2020, "sha1": "b7d1d4544300d262fa156eb42a092dc75b06323c", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.radcr.2020.06.043", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1518601075ab17db821f72c41a438dd55cbb901f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
208278739
pes2o/s2orc
v3-fos-license
Least General Generalizations in Description Logic: Verification and Existence We study two forms of least general generalizations in description logic, the least common subsumer (LCS) and most specific concept (MSC). While the LCS generalizes from examples that take the form of concepts, the MSC generalizes from individuals in data. Our focus is on the complexity of existence and verification, the latter meaning to decide whether a candidate concept is the LCS or MSC. We consider cases with and without a background TBox and a target signature. Our results range from CO NP-complete for LCS and MSC verification in the description logic EL without TBoxes to undecidability of LCS and MSC verification and existence in ELI with TBoxes. To obtain results in the presence of a TBox, we establish a close link between the problems studied in this paper and concept learning from positive and negative examples. We also give a way to regain decidability in ELI with TBoxes and study single example MSC as a special case. Introduction Generalization is a fundamental method in relational learning and inductive logic programming (Plotkin 1970;Muggleton 1991).Given a finite number of positive examples, one seeks a description in a logical language that encompasses all examples and in this sense provides a generalization.To ensure that the description is as informative as possible, one aims at obtaining least general generalizations, that is, generalizations that cannot be made more specific without losing at least one example.Note that computing least general generalizations is a form of supervised learning in which only positive, but no negative examples are given. In this paper, we study least general generalizations in the context of description logics (DLs), a widely known family of ontology languages that underpin the web ontology language OWL 2 (Baader et al. 2017).In DLs, concepts are the building blocks of an ontology and thus a prime target for being learned through generalization.There are in fact several applications in which this is useful, including ontology design by domain experts that are not sufficiently proficient in logical modeling (Baader and Küsters 1998;Baader, Küsters, and Molitor 1999;Baader, Sertkaya, and Turhan 2007;Donini et al. 2009), supporting the improvement and restructuring of an ontology (Cohen, Borgida, and Hirsh 1992;Küsters and Borgida 2001), and creative discovery of novel concepts through conceptual blending (Fauconnier and Turner 2008;Eppe et al. 2018).We focus on the two fundamental DLs EL and ELI, fragments of first-order Horn logic that can express positive conjunctive existential properties, ELI extending EL with inverse roles.Both DLs are natural choices for generalization as their limited expressive power helps to avoid overfitting, that is, we cannot generalize by disjunctively combining descriptions of each single example, but are forced to find a true generalization.In fact, least general generalizations in EL have received significant attention (Baader, Küsters, and Molitor 1999;Baader 2003;Zarrieß and Turhan 2013) while, somewhat surprisingly, there appears to be no prior work on DLs with inverse roles. 
There are two established notions of least general generalization in the DL context.When the examples are given in the form of concepts, the desired generalization is the least common subsumer (LCS), the least general concept that subsumes all examples (Cohen, Borgida, and Hirsh 1992).A natural alternative is to give examples using relational data, which in DLs are represented as an ABox.Traditionally, one uses only a single example, which takes the form of an individual in the data, and then asks for the most specific concept (MSC), that is, the least general concept that the individual is an instance of (Nebel 1990).However, there seems to be no good reason to restrict the MSC to a single example and thus we define it based on multiple examples.In this way, the LCS becomes a special form of MSC in which the data consists of a collection of trees.We remark that EL and ELI concepts can be viewed as natural tree query languages for graph databases and knowledge graphs and thus the MSC is useful for data exploration and comprehension, see e.g.(Colucci et al. 2016).It is also related to generating referring expressions (Borgida, Toman, and Weddell 2016). For both the LCS and the MSC, we study the two decision problems existence and verification.In fact, both the LCS and the MSC need not exist because there can be an infinite sequence of less and less general generalizations.In verification, one is given a candidate concept and the question is whether the candidate is the LCS or MSC.Verification is relevant, for example, in approaches that try to find the LCS or MSC by refinement operators that move towards less general generalizations in a step-wise fashion (Badea and Nienhuys-Cheng 2000;Lehmann and Hitzler 2010;Lehmann and Haase 2009) and check after each step whether the least general generalization has already been reached.We consider the case with and without a background TBox and with and without a target signature that the generalization should be formulated in.If the generalization does not exist, one can resort to approximations (Küsters and Molitor 2001;Baader, Sertkaya, and Turhan 2007). We now summarize our main complexity and undecidability results.They are based on characterizations in terms of simulations between products of universal models, mildly varying characterizations given in (Zarrieß and Turhan 2013;Funk et al. 2019).We start with the case without TBoxes, for which we find LCS and MSC verification in EL to be CONP-complete.It is well-known that the LCS in EL always exists (Baader, Küsters, and Molitor 1999), and we complement this by proving that MSC existence in EL is PSPACEcomplete.We then add inverse roles which introduce significant technical challenges.In particular, the structure of the relevant products from the mentioned characterizations is much more complex.As a consequence, the LCS in ELI is not guaranteed to exist.We prove that LCS and MSC existence and verification are PSPACE-hard and in EXPTIME.The lower bounds require a remarkably intricate construction and show as a by-product that the product simulation problem on trees (defined in the paper) is PSPACE-hard. We then switch to the case with TBoxes, starting with observing a connection to concept learning (Badea and Nienhuys-Cheng 2000;Lehmann and Hitzler 2010;Lehmann and Haase 2009;Lisi 2012;Bühmann et al. 2018;Sarker and Hitzler 2019) and in particular to the concept separability problem (Funk et al. 
2019) which asks whether there is a concept that separates given positive examples from given negative examples.It turns out that its complement reduces in polynomial time to MSC existence.Using results from (Funk et al. 2019), this can be used to show that MSC existence is undecidable in ELI and EXPTIMEcomplete in EL.The same is true for verification as the two problems are mutually reducible in polynomial time when a TBox can be used.We consider it remarkable that inverse roles have such a dramatic computational effect.We also identify a way around undecidability, namely to consider for the generalization only symmetry free ELI concepts, that is, ELI concepts that do not admit a subconcept of the form ∃r.(C ∃r − .D).In this case, the complexity drops to EXPTIME again.Up to this point, all mentioned complexity lower bounds and undecidability results hold without a signature restriction on the target concept while all upper bounds apply also with such a restriction.We finally consider the MSC of single examples and show that existence and verification are in PTIME in EL while they are complete for EXPTIME and 2-EXPTIME in ELI, depending on whether or not we assume the signature to be full.Thus once more, adding inverse roles has a drastic effect. Note that in the literature, the LCS is sometimes restricted to only constantly many examples.In all of the above results, we do not assume a constant bound on the number of examples.We also make observations regarding that case, though.Without a TBox, the complexity typically drops to PTIME and the same is true for EL with TBoxes (Zarrieß and Turhan 2013).When both inverse roles and TBoxes are present, however, the complexity tends to not decrease.We remark that in the decidable cases, our constructions yield upper bounds on the role depth of the LCS and MSC, if they exists, which together with the characterizations can be used to actually construct them. A full version that contains all proof details is available at http://www.informatik.uni-bremen.de/tdki/research/. Preliminaries We introduce the basics of DLs as required for this paper, for full details see (Baader et al. 2017).Let N C be a set of concept names and N R a set of role names, both countably infinite.A role is either a role name or an inverse role r − , r a role name.For uniformity, we identify (r − ) − with r.An ELI concept is formed according to the syntax rule where A ranges over concept names and r over roles.An EL concept is an ELI concept that does not use inverse roles.The depth of a concept refers to the nesting depth of the operator ∃r.C.A signature Σ is a set of concept and role names.An L concept is an L(Σ) concept if it uses only concept and role names from Σ, and likewise for other syntactic objects such as TBoxes and ABoxes.The signature sig(O) of a syntactic object O is the set of concept and role names that occur in O.The Σ-reduct I |Σ of an interpretation I is obtained from I by setting A I = ∅ and r I = ∅ for all concept names A and role names r not in Σ. 
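The concept grammar referred to above (whose display form was not preserved) is the usual one for ELI, C ::= ⊤ | A | C ⊓ C | ∃r.C, with A a concept name and r a role name or inverse role; EL is the fragment without inverse roles. The sketch below is an illustrative Python encoding of this syntax together with the depth and signature functions defined in the text; all identifiers are hypothetical and not part of the paper.

```python
from dataclasses import dataclass
from typing import FrozenSet, Union

# Illustrative encoding of ELI concepts:  C ::= T | A | C ⊓ C | ∃r.C
# (roles may be inverse; EL concepts are those with inverse=False everywhere).

@dataclass(frozen=True)
class Top:                       # the top concept ⊤
    pass

@dataclass(frozen=True)
class Name:                      # a concept name A
    name: str

@dataclass(frozen=True)
class And:                       # conjunction C ⊓ D
    left: "Concept"
    right: "Concept"

@dataclass(frozen=True)
class Exists:                    # existential restriction ∃r.C (inverse=True encodes r^-)
    role: str
    filler: "Concept"
    inverse: bool = False

Concept = Union[Top, Name, And, Exists]

def depth(c: Concept) -> int:
    """Nesting depth of the operator ∃r.C, as defined in the text."""
    if isinstance(c, (Top, Name)):
        return 0
    if isinstance(c, And):
        return max(depth(c.left), depth(c.right))
    return 1 + depth(c.filler)

def signature(c: Concept) -> FrozenSet[str]:
    """Set of concept and role names occurring in the concept."""
    if isinstance(c, Top):
        return frozenset()
    if isinstance(c, Name):
        return frozenset({c.name})
    if isinstance(c, And):
        return signature(c.left) | signature(c.right)
    return frozenset({c.role}) | signature(c.filler)

# Example: the ELI concept ∃r.(A ⊓ ∃s^-.B) has depth 2 and signature {r, A, s, B}.
c = Exists("r", And(Name("A"), Exists("s", Name("B"), inverse=True)))
print(depth(c), sorted(signature(c)))
```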
Each interpretation I gives rise to a directed graph G I = (∆ I , {(d, e) | (d, e) ∈ r I }) and a corresponding undirected graph G u I .We thus apply graph theoretic terminology directly to interpretations, speaking for example about their outdegree.An interpretation is tree-shaped (resp.ditreeshaped) if G u I (resp.G I ) is a tree without multiedges, that is, (d, e) ∈ r I ∩ s I implies r = s for all roles r, s.Each ELI (resp.EL) concept C can be viewed as a tree-shaped (resp.ditree-shaped) interpretation and vice versa.All this also applies to ABoxes, which are only a different way to present finite interpretations.We use A C to denote the ELI concept C viewed as a tree-shaped ABox and use ρ C to denote the root of A C .For example, C = A ∃r.B ∃r − .gives We introduce simulations, universal models, and direct products.Let I 1 and I 2 be interpretations. (e, e ) ∈ r I2 for some e ∈ ∆ I2 , for all role names r ∈ Σ. S is an ELI(Σ) simulation if Condition 2 also holds for inverse roles r − with r ∈ Σ.Let L ∈ {EL, ELI} and (d, e) ∈ ∆ I1 × ∆ I2 .We write (I 1 , d) L,Σ (I 2 , e) if there exists an L(Σ) simulation from I 1 to I 2 that contains (d, e).We omit Σ if it is the full signature N C ∪ N R , writing L and speaking of L simulations.It can be checked in polynomial time whether (I 1 , d) L,Σ (I 2 , e).The following lemma shows that L(Σ) simulations characterize preservation of L(Σ) concepts.Lemma 2 Let L ∈ {EL, ELI}, let I 1 , I 2 be interpretations with finite outdegree, and let Σ be a signature.The following are equivalent: 1. (I 1 , d) L,Σ (I 2 , e); 2. for all L(Σ) concepts C: if d ∈ C I1 , then e ∈ C I2 . Let K = (T , A) be a KB and sub(T ) be the set of all subconcepts of concepts that occur in T .A type for T is a subset t ⊆ sub(T ) such that T |= t D implies D ∈ t for all D ∈ sub(T ).Denote by T the set of all types for T .When a ∈ ind(A), t, t ∈ T , and r is a role, we write r0 t 1 , and t i T ri t i+1 for all i < n.Let tail(p) denote the last element of the path p. Define the universal model U K of K by taking as ∆ U K the set of all paths for K and setting for all concept names A and role names r: The universal model U T ,C of an ELI TBox T and an ELI concept C is defined as U K where K = (T , A C ). Lemma 3 For all ELI KBs K, ELI concepts C, and a The direct product n i=1 I i of interpretations I 1 , . . ., I n is defined by LCS and MSC: Basics We introduce least common subsumers and most specific concepts, discuss their relationship, and give modeltheoretic characterizations for verification and existence.The latter are mild extensions of characterizations established in (Zarrieß and Turhan 2013). If an L(Σ)-LCS w.r.t. a TBox T exists, then it is unique up to equivalence w.r.t.T .We thus speak about the L(Σ)-LCS.We omit Σ if it contains sig(T ∪{C 1 , . . ., C n }), speaking of the L-LCS w.r.t.T .Clearly, no L-LCS can contain symbols that are not in the TBox or the examples.Thus, all signatures between the finite sig(T ∪ {C 1 , . . ., C n }) and the full signature behave in the same way.We also omit T if it is empty, speaking of the L(Σ)-LCS.Example 1 (1) Let C 1 = ∃attend.MLConf and C 2 = ∃attend.KRConf.Then ∃attend. . is the EL (and ELI) (2) The L-LCS, L ∈ {EL, ELI}, of a single L concept C w.r.t. an L TBox T is just C. 
For Σ sig(C), however, the L(Σ)-LCS of C w.r.t.T does not always exist.Take, for example, T = {A ∃r.A} and Σ = {r}.Then neither the ELI(Σ)-LCS nor the EL(Σ)-LCS of A w.r.t.T exists as T |= A ∃r n .for all n ≥ 0, but there is no ELI(Σ) concept C with T |= A C and T |= C ∃r n .for all n. Definition 2 Let K = (T , A) be a KB, a 1 , . . ., a n ∈ ind(A) individuals called examples, L ∈ {EL, ELI}, and Like the LCS, the MSC is unique up to equivalence w.r.t.T (if it exists) and thus we speak of the MSC.We drop Σ if Σ ⊇ sig(K).As for the LCS, a symbol that does not occur in the KB cannot occur in the MSC. Example 2 (1) In contrast to the EL-LCS, the EL-MSC of a single example does not always exist, even when the TBox is empty, due to cycles in the ABox.For example, for A = {A(a), r(a, a)} the EL-MSC of a w.r.t.K = (∅, A) does not exist (use that K |= ∃r n .(a) for all n ≥ 0).In contrast, the EL-MSC of a w.r.t.K = ({A ∃r.A}, A) is A. (2) A common proposal to generalize from individuals is to compute the MSC of each individual separately and then generalize by applying the LCS, provided that all MSCs exist (Baader, Küsters, and Molitor 1999).It pays off, however, to directly apply the MSC to multiple individuals.Let, for example, K = (∅, A), A = {A(a), r(a, a), A(b), s(b, b)}.Then the EL-MSC of a alone w.r.t.K does not exist, and likewise for b.In constrast, the EL-MSC of a, b w.r.t.K is A. The following theorem, which is an immediate consequence of Lemma 1, shows that the LCS is a special form of MSC.Theorem 1 Let L ∈ {EL, ELI}, T be an L TBox, C 1 , . . ., C n L concepts, and Σ a signature.Then an LCS and MSC give rise to the four decision problems studied in this paper.Let L be a description logic.L-LCS existence w.r.t.TBoxes means to decide, given L concepts C 1 , . . ., C n , an L TBox T , and a finite signature Σ, whether the L(Σ)-LCS of C 1 , . . ., C n w.r.t.T exists.By the remark made after Definition 1, it is without loss of generality to consider only finite signatures.In particular, we can use sig(T ∪ {C 1 , . . ., C n }) instead of the full signature.L-MSC existence w.r.t.TBoxes is defined accordingly, the input consisting of a KB (T , A) with T an L TBox, a 1 , . . ., a n ∈ ind(A), and a finite signature Σ.In L-LCS (resp.L-MSC) verification w.r.t.TBoxes, we are given as an additional input a candidate L(Σ) concept C and the question is whether C is the L(Σ)-LCS of C 1 , . . ., C n w.r.t.T (resp.the L(Σ)-MSC of a 1 , . . ., a n w.r.t.K). Theorem 1 provides a reduction from L-LCS existence w.r.t.TBoxes to L-MSC existence w.r.t.TBoxes, and likewise for verification.In this reduction, neither the TBox nor the signature nor the number of examples change.We now present a converse reduction which, however, requires to modify the TBox.Theorem 2 Let L ∈ {EL, ELI}.Then L-MSC verification (resp.existence) w.r.t.TBoxes can be reduced in polynomial time to L-LCS verification (resp.existence).This also holds in the full signature case if there are at least two examples.Proof.Let T be an L TBox, A an ABox, a 1 , . . ., a n ∈ ind(A).We may assume w.l.o.g. that A is the disjoint union of ABoxes A 1 , . . ., A n such that a i ∈ ind(A i ) for i = 1, . . ., n.Let X a be a fresh concept name for every a ∈ ind(A) and let T be the extension of T with X a A for all A(a) ∈ A, X a ∃r.X a for all r(a, a ) ∈ A. (If L = ELI, then also add X a ∃r − .X a if r(a , a) ∈ A.) Then for every signature Σ that does not contain {X a1 , . . ., X an } and every L(Σ) concept D, D is the L(Σ)-MSC of a 1 , . . 
., a n w.r.t.(T , A) iff D is the L(Σ)-LCS of X a1 , . . ., X an w.r.t.T . In the case of the full signature, we have to consider the L(Σ ∪ {X a1 , . . ., X an })-LCS in place of the L(Σ)-LCS. The assumption that there are at least two examples ensures that the concept names X a cannot occur in the LCS. J We next provide model-theoretic characterizations for MSC verification and existence based on products and simulations.Corresponding characterizations for LCS verification and existence can be obtained in a straightforward way via Theorem 1, see the appendix.Note that Point 1 below can also be viewed as a simulation condition. Theorem 3 (MSC Verification) Let L ∈ {EL, ELI}, K = (T , A) be an L KB, a 1 , . . ., a n ∈ ind(A), and Σ a signature.An L(Σ) concept C is the L(Σ)-MSC of a 1 , . . ., a n w.r.t.K iff the following conditions hold: for all i < k, each r i a (potentially inverse) role.Denote by tail(p) the last element of p.The ELI, k-unfolding of I at d 0 , denoted (I, d 0 ) ↓ELI,k , is the interpretation defined by taking ∆ (I,d0) ↓ELI,k to be the set of all d 0 -paths of length at most k and setting The EL, k-unfolding of I at d 0 , denoted (I, d 0 ) ↓EL,k , is defined accordingly, but only admitting role names in paths.For L ∈ {EL, ELI} and an L KB K, we use It can be verified that this interpretation is tree-shaped for L = ELI and ditree-shaped for L = EL and can thus be viewed as an L concept C k . Without TBoxes We start with studying least general generalizations in the case without TBoxes, beginning with verification in EL. Theorem 5 In EL, LCS and MSC verification w.r.t. the empty TBox are CONP-complete.The lower bounds apply even when the signature is full. Proof.(sketch) The upper bound uses Theorem 3, the fact that instance checking in EL is in PTIME, and the observation that the EL-product simulation problem is in CONP if the interpretation J is tree-shaped (here, it is even ditreeshaped).In fact, if (I, d) EL,Σ (J , e) with J tree-shaped, then there is a subinterpretation I 0 of I of polynomial size such that (I 0 , d) EL,Σ (J , e).The lower bound is proved by reducing the satisfiability problem for propositional logic to the complement of EL-LCS verification.It also establishes CONP-hardness of the EL-product simulation problem in the case that J is tree-shaped.J Regarding existence, a first well-known observation is that the EL-LCS always exists, even if the signature is not full.This follows from Theorem 4 and the fact that if |Σ , k the maximum depth of C 1 , . . ., C n .In contrast, the EL-MSC does not always exist even with the empty TBox, see Example 2. Theorem 6 In EL, MSC existence w.r.t. the empty TBox is PSPACE-complete.The lower bound applies even when the signature is full.Proof.(sketch) Using Theorem 4, one can show that the A that starts at (a 1 , . . ., a n )-we view ABoxes as finite interpretations here.We can thus decide existence of the EL(Σ)-MSC in polynomial space in the standard way: guess an element a of A n and, proceeding step by step, a path through A n that starts at (a 1 , . . ., a n ) and follows only role names from Σ. Reject if the element a is seen twice.The lower bound is established by reducing the word problem of deterministic polynomially space-bounded Turing machines.J We next turn to ELI.In contrast to EL, here the LCS does not always exist even when the TBox is empty. Example 3 Consider the following ELI concepts D 1 , D 2 over concept names A 1 , . . 
., A 4 and a single role r: The interpretation U is the part of A D1 × A D2 that is reachable from its root •.One can show that the infinite path in U labeled with Proof.(sketch) The main ingredient to the PSPACE lower bounds is a rather intricate proof that the ELI-product simulation problem is PSPACE-hard already when restricted to tree-shaped interpretations.In fact, this is the case even when interpretations on the left-hand sides are trees of depth two and the interpretation on the right-hand side is fixed (and of depth eleven).It is interesting to contrast this with the fact that the EL-product simulation problem is CONP-complete on tree-shaped interpretations, see the proof of Theorem 5. To obtain a PSPACE lower bound for LCS verification and existence, we then use reductions from ELI-product simulation on tree shaped interpretations.The upper bound for MSC verification (and thus also for LCS verification) is obtained by recalling that ELI instance checking is EXPTIME-complete and adapting the EXPTIME upper bound from (Zarrieß and Turhan 2013) for the ELproduct simulation problem to ELI. The EXPTIME upper bound for MSC existence (and thus also for LCS existence) can be proved similarly to the upper bound in Theorem 6.The main difference is that we now work with ELI simulations rather than EL simulations and thus need to be more careful about the paths we consider.In fact, we use paths A that start at d 0 = (a 1 , . . ., a n ), follow only Σ-roles, and satisfy the following for all i ≥ 0: All problems studied in this section are solvable in PTIME if the number of examples is bounded by a constant.This follows from an analysis of the presented upper bound proofs and has in some cases also been established before (Baader, Küsters, and Molitor 1999;Zarrieß and Turhan 2013). With TBoxes We now add TBoxes to the picture.It turns out that, in this case, we can transfer results from the concept separabil-ity problem, which has been considered in concept learning from positive and negative examples (Funk et al. 2019). Definition 3 Let L ∈ {EL, ELI}.An L learning instance is a triple (K, P, N ) with K = (T , A) an L KB and P, N ⊆ ind(A) sets of positive and negative examples.Let Σ be a signature.An L(Σ) solution to (K, P, N ) is an L(Σ) concept C such that K |= C(a) for all a ∈ P and K |= C(a) for all a ∈ N . This definition gives rise to the decision problem of L concept separability: given an L learning instance (K, P, N ) and a signature Σ, decide whether it admits an L(Σ) solution.As the conjunction of L(Σ) solutions to (K, P, {b}), b ∈ N , is an L(Σ) solution to (K, P, N ), it suffices to consider instances with N singleton.Note that in (Funk et al. 2019) only the full signature case is considered. One can easily derive from (Funk et al. 2019) that (K, P, {b}) has an L(Σ) solution iff a∈P (U K , a) L,Σ (U K , b).By encoding b as a concept D as in the proof of Theorem 2, we can thus view L(Σ) concept separability as the problem to decide for an L KB K = (T , A), examples a 1 , . . ., a n ∈ ind(A), and an L concept D whether n i=1 (U K , a i ) L,Σ (U T ,D , ρ D ), which is exactly the negation of Condition 2 of the characterization of MSC verification in Theorem 3.This provides the basis for the following. Theorem 8 For L ∈ {EL, ELI}, the complement of L concept separability can be reduced in polynomial time to L-MSC verification and existence.This also holds for the full signature. Proof.(sketch) We consider EL and the full signature case.Given K, a 1 , . . 
., a n , and D, we extend K by adding assertions v(ρ i , a i ), v(ρ i , b i ), D(b i ), where ρ i and b i are fresh individuals, v a fresh role name, and D(b i ) stands for . ., ρ n w.r.t. the extended KB (under mild assumptions).For the reduction to MSC existence, we additionally generate infinite r-chains starting at a i and b i using CIs X ∃r.X and adding X(a i ) and X(b i ) to the ABox, where the concept names X are distinct for distinct a i but coincide for all b i .If we assume w.l.o.g. that n ≥ 2, then It is shown in (Funk et al. 2019) that ELI concept separability is undecidable already in the full signature case and even with only two positive examples.We thus obtain the following from Theorems 8 and 2 and the fact that the number of examples remains unchanged under the reductions. Theorem 9 In ELI, MSC and LCS verification and existence are undecidable.This is already the case when the signature is full and there are at most two examples. It is also shown in (Funk et al. 2019) that EL concept separability is EXPTIME-hard.In this case the number of positive examples is not bounded by a constant. Theorem 10 In EL, MSC and LCS verification and existence are EXPTIME-complete.The lower bounds already apply when the signature is full. Proof.(sketch) The lower bounds come from Theorems 8 and 2. EXPTIME upper bounds for LCS existence and verification with the full signature are in (Zarrieß and Turhan 2013), the former explicitly and the latter implicitly.They extend to other signatures in a straightforward way.To lift these bounds to the MSC, we use Theorem 2. J When the number of examples is bounded, then all problems in Theorem 10 can be solved in PTIME (which was known for LCS existence (Zarrieß and Turhan 2013)). We close this section with observing that L-MSC verification can be reduced to the complement of concept separability, and thus, by Theorem 8, to L-MSC existence.Theorem 11 For L ∈ {EL, ELI}, L-MSC verification can be reduced in polynomial time to the complement of L concept separability.This also holds for the full signature. Proof.(sketch) Recall that Condition 2 of Theorem 3 is the complement of concept separability.By Lemmas 3 and 2, Condition 1 is equivalent to requiring U T ,C , ρ C L U K , a i , for all i.These simulation checks can be incorporated into Condition 2 by extending the ABox.J 6 Symmetry Free ELI An inspection of the proof of the undecidability results in Theorem 9 reveals that it crucially depends on the MSC and LCS to contain subconcepts of the form ∃r.(C ∃r − .D).Indeed, concept separability is decidable when the TBox is formulated in ELI while separating concepts are restricted to EL (Funk et al. 2019).We consider a more general case by restricting the MSC and LCS to symmetry free ELI concepts (ELI sf concepts for short), that is, ELI concepts that do not contain such subconcepts.With ELI sf -LCS and MSC verification and existence w.r.t.ELI TBoxes, we mean that the TBox is formulated in ELI while we seek a least general generalization formulated in ELI sf .In the case of the LCS, also the examples are formulated in unrestricted ELI. We start with providing a characterization of ELI sf (Σ)-MSC existence.To achieve this, we modify the notion of ELI, k-unfolding of an interpretation I at a d 0 ∈ ∆ I given in Section 3 by restricting the domain of the resulting interpretation to symmetry free d 0 -paths of length k, that is, As this interpretation is tree-shaped, it can be viewed as an ELI concept which is even an ELI sf concept. 
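To make the symmetry-freeness restriction concrete, the following small Python sketch (our own illustration; the nested-pair encoding of concepts is hypothetical) tests whether an ELI concept contains a forbidden subconcept of the form ∃r.(C ⊓ ∃r⁻.D), that is, an existential restriction one of whose conjuncts immediately steps back along the inverse of the role just used:

# A concept is encoded as a pair (names, restrictions), where names is a set of
# concept names and restrictions is a list of pairs (role, subconcept); an
# inverse role r⁻ is written as the role name with a trailing "-".
def inverse(role):
    return role[:-1] if role.endswith("-") else role + "-"

def symmetry_free(concept):
    """True iff no subconcept has the form ∃r.(C ⊓ ∃r⁻.D)."""
    _names, restrictions = concept
    for role, sub in restrictions:
        _sub_names, sub_restrictions = sub
        if any(r2 == inverse(role) for r2, _ in sub_restrictions):
            return False                 # found ∃role.(... ⊓ ∃role⁻. ...)
        if not symmetry_free(sub):
            return False
    return True

# ∃r.(A ⊓ ∃r⁻.B) is not symmetry free, whereas ∃r.(A ⊓ ∃s⁻.B) is.
bad = (set(), [("r", ({"A"}, [("r-", ({"B"}, []))]))])
good = (set(), [("r", ({"A"}, [("s-", ({"B"}, []))]))])
assert not symmetry_free(bad) and symmetry_free(good)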
Theorem 12 (ELI sf -MSC Existence w.r.t.ELI TBoxes) Let K = (T , A) be an ELI KB, a 1 , . . ., a n ∈ ind(A), and Σ a signature.The following are equivalent, for Since Theorem 1 extends to the case considered in this section, Theorem 12 also yields a characterization for ELI sf LCS existence w.r.t.ELI TBoxes.Theorems 8 and 11 can also be adapted using a version of concept separability where the separating concepts are formulated in ELI sf .Thus verification reduces to existence in polynomial time and we refrain from giving an explicit characterization. Theorem 12 provides the basis for proving that symmetry freeness regains decidability. Theorem 13 ELI sf -MSC and LCS existence and verification with respect to ELI TBoxes are EXPTIME-complete.The lower bounds hold in the full signature case and with only one example. The lower bounds are easy to prove by reduction from the subsumption of concept names w.r.t.ELI TBoxes (Baader, Brandt, and Lutz 2008).For the upper bounds, we use an approach based on automata on infinite trees.Let K = (T , A) be an ELI KB, a 1 , . . ., a n ∈ ind(A), and Σ a signature.Theorem 12 suggests to test emptiness of two tree automata A and B where A accepts precisely the treeshaped interpretations that admit an ELI(Σ) simulation from U := (Π n i=1 (U K , a i )) ↓ELI sf and B accepts precisely the tree-shaped interpretations U T ,C k , ρ C k , k ≥ 0. In particular, the automaton A visits all elements of U using its states, assigning to each of them a simulating element in the input interpretation.Elements in U are represented by their type t and the role that led to it-note that these uniquely determine the successors, and that this is not the case without symmetry freeness.We thus have (at least) exponentially many states.To obtain an EXPTIME upper bound, we therefore use non-deterministic tree automata (NTA) rather than alternating ones.To avoid having a state for every set of types, we must further make sure that every element in U is simulated by a different element in the input tree.To have enough room when moving down in the input tree, we slightly refine our characterization. A simulation S from I 1 to I 2 is injective if for all e ∈ ∆ I2 , there is at most one d ∈ ∆ I1 with (d, e) ∈ S. We write . Let I × denote the interpretation that is obtained from a tree-shaped interpretation I by duplicating every successor in the tree so that it occurs times.Lemma 5 Let N be the outdegree of , we have: . Now, A accepts the tree-shaped interpretations that admit injective ELI(Σ) simulations from (Π n i=1 (U K , a i )) ↓ELI sf using exponentially many states.Further, B accepts interpretations of the form U ×N T ,D for some D as in the lemma. We first construct an automaton that works over pairs of tree-shaped interpretations and verifies that the first component represents a suitable D and the second component represents U T ,D .We then project to the latter and modify the automaton so as to accept all I ×N with I accepted before. 
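The duplication operation used in Lemma 5, which turns a tree-shaped interpretation into one in which every successor subtree occurs a fixed number of times, is purely syntactic. A minimal sketch (our own; trees are plain dictionaries and the node identifiers are hypothetical) looks as follows:

import itertools

_fresh = itertools.count()

def duplicate(edges, labels, root, n):
    """Copy the tree below root so that every successor subtree occurs n times,
    as in the duplication construction; edges maps a node to its (role, child) list."""
    new_root = next(_fresh)
    new_edges = {new_root: []}
    new_labels = {new_root: set(labels.get(root, set()))}
    for role, child in edges.get(root, []):
        for _ in range(n):
            sub_edges, sub_labels, sub_root = duplicate(edges, labels, child, n)
            new_edges.update(sub_edges)
            new_labels.update(sub_labels)
            new_edges[new_root].append((role, sub_root))
    return new_edges, new_labels, new_root

# A root with one r-successor labelled {A}; after duplication with n = 2 the
# root has two isomorphic r-successors.
edges, labels = {0: [("r", 1)], 1: []}, {0: set(), 1: {"A"}}
e2, l2, r2 = duplicate(edges, labels, 0, 2)
assert len(e2[r2]) == 2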
Single Example MSC We consider the MSC of a single example, which is the case traditionally studied in the literature.A PTIME upper bound for EL was given in (Zarrieß and Turhan 2013).We show that adding a signature does not affect this result, and that it also holds for verification.The upper bounds are shown using an automata based approach that is in spirit similar to the approach taken in Section 6.The main difference is that the automaton A has to be two-way since it checks for ELI simulations from U K , a.In case of restricted signature, it has to store types in its states, while for the full signature ABox individuals suffice.J Discussion We have analyzed the complexity of LCS and MSC verification and existence in the DLs EL and ELI, obtaining various complexity results and establishing a close link to concept separability.Topics for future research include tight bounds on the size of the LCS and MSC and studying cases in which the TBoxes is formulated in an expressive DL such as ALC while the LCS and MSC are formulated in EL or ELI (to avoid overfitting).It would also be interesting to study DLs that admit role constraints such as transitive roles and expressive forms of role inclusion.Finally, it would be of interest to study the data complexity, under which the TBox is not regarded as part of the input. Notes for Section 3 For the convenience of the reader we formulate the modeltheoretic characterizations also for the verification and existence of the LCS.We start with LCS verification.The following characterization follows from Theorems 1 and 3. Theorem 16 (LCS Verification) Let L ∈ {EL, ELI}, T be an L TBox, C 1 , . . ., C n L concepts, and Σ a signature.An L(Σ) concept C is the L(Σ)-LCS of C 1 , . . ., C n w.r.t.T iff the following conditions hold: For LCS existence, the following characterization follows from Theorems 1 and 4. Theorem 17 (LCS Existence) Let L ∈ {EL, ELI}, T be an L TBox, C 1 , . . ., C n L concepts, and Σ a signature.The following are equivalent, for Proofs for Section 4 Theorem 6 In EL, MSC existence w.r.t. the empty TBox is PSPACE-complete.The lower bound applies even when the signature is full. Proof.We reduce the word problem for polynomially space bounded Turing machines (TMs), that is, given such a TM M with polynomial space bound p(n), we construct an ABox A with individuals a 1 , . . ., a n , such that the EL MSC of a 1 , . . ., a n w.r.t.A exists iff M accepts an input w.It is well-known that there is a deterministic polynomially space bounded TM whose halting problem is PSPACE-hard. For our purposes, a Turing machine M = (Q, Γ, q 0 , δ, F ) consists of a set of states Q, finite set of tape symbols Γ, an initial state q 0 , a set of final states F , and a (partial) transition function δ : Q × Γ → Q × Γ × {L, R}.There, L and R correspond to the head moving to the left and to the right, respectively.We assume that M halts once it reaches a state q ∈ F , and always continues otherwise. 
For the reduction, let M = (Q, Γ, q 0 , δ, F ) be a p(n)space bounded deterministic TM, and w an input of length n.We construct an ABox A without any concept assertions.Let us first fix the following individuals: Intuitively, an individual (q, a, i) represents that the content of cell i is a, that the head of the TM is on cell i and that the TM is in state q.Similarly, an individual (a, i) represents that the content of cell i is a (and that the head is not at position i).In the following description the cases i = 1 and i = p(n) are not treated in a special way, since we can assume that M does not move its head beyond cell 1 or p(n). As role names, we use r q,a,i , for all (q, a, i) ∈ ind(A).Informally, a role assertion r q,a,i (e, e ) is included in A if in state q with the head at cell i and reading tape symbol a, M will change the tape cell represented by e to e .Note that e and e may be identical, meaning that the TM transition does not affect the tape cell. Formally, we include the following role assertions for every q ∈ Q, a ∈ Γ, and i ∈ {1, . . ., p(n)} such that δ(q, a) = (q , b, D) is defined: 1. Role assertions that affect the direct environment of the head position i: r q,a,i ((q, a, i), (b, i)) if D = L r q,a,i ((a , i − 1), (q , a , i − 1)) if D = L r q,a,i ((a , i + 1), (a 2. Role assertions that do not affect the direct environment of the head position i: This finishes the construction of A. It remains to specify the individuals a 1 , . . ., a p(n) for the input where denotes the blank symbol. Claim.M accepts w iff the EL-MSC of a 1 , . . ., a n w.r.t.A exists. Proof of the Claim.We provide some insight into the construction of A. For this purpose, let us denote with A p(n) the p(n)-fold product of A. For a configuration α of M , let x α denote the element of A p(n) corresponding to this configuration in the natural way.The construction of A ensures the following: ( * ) if α is a successor configuration of α then x α has precisely one successor in A p(n) , namely x α . Thus, paths in A p(n) starting in (a 1 , . . ., a p(n) ) directly correspond to computations of M on input w. The claim now follows from ( * ) and the fact that the MSC exists iff all paths starting from (a 1 , . . ., a p(n) ) in A p(n) are finite.This finishes the proof of the claim, and in fact of the Theorem. J Theorem 5 In EL, LCS and MSC verification w.r.t. the empty TBox are CONP-complete.The lower bounds apply even when the signature is full. For the proof of Theorem 5 we require the following lemma.For a tree-shaped interpretation J and e ∈ ∆ J we denote by J e the subinterpretation of J induced by the subtree of J rooted at e. Lemma 6 Let I and J be interpretations with J treeshaped.If (I, d) EL,Σ (J , e), then there exists a set X with d ∈ X ⊆ ∆ I such that |X| ≤ |∆ Je | + 1 and (I |X , d) EL,Σ (J , e). Proof.The proof is by induction on the depth of J e .Assume first that J e has depth 0. 
If there exists a concept name A ∈ Σ with d ∈ A I but e ∈ A J , then X = {d} is as required.Otherwise there exists a role name r ∈ Σ and d with (d, d ) ∈ r I .Then X = {d, d } is as required.Now suppose that e has depth k + 1 and the lemma has been proved for all e with J e of depth ≤ k.Assume (I, d) EL,Σ (J , e).If there exists a concept name A ∈ Σ with d ∈ A I but e ∈ A J , then X = {d} is as required.Otherwise there exists a role name r ∈ Σ and d with (d, d ) ∈ r I such that for all e with (e, e ) ∈ r J , (I, d ) EL,Σ (J , e ).Fix d .By induction hypothesis, we can take for every e with (e, e ) ∈ r J a set X e with d ∈ X e ⊆ ∆ I such that |X e | ≤ |∆ J e | + 1 and (I |X e , d ) EL,Σ (J , e ).Let X be the union of {d} and the sets X e , (e, e ) ∈ r J .Then X is as required. J We now give the proof of Theorem 5. Proof. By • the simulation relation can be checked in polynomial time. For the lower bound, we reduce SAT to LCS verification.Let ϕ be a formula in CNF that consists of m clauses with n variables x 1 , . . ., x n .We will construct concepts C 1 , . . ., C n and a concept D such that the following are equivalent: ∃s.D for i = 1, . . ., n.For better readability, we define concepts C 1 , . . ., C n in terms of interpretations I 1 , . . ., I n as follows: Intuitively, each I i has a root d i with two successors d i0 , d i1 which "choose" a value for variable x i .Note that every successor of the root of the product n i=1 I i corresponds to a variable assignment.We define D via the interpretation J defined as follows: • ∆ J = {d, 1, . . ., m}; • s J = {(d, j) | j ∈ {1, . . ., m}}; • j ∈ X J i iff x i does not occur positively in clause j, for all j ∈ {1, . . ., m} and i ∈ {1, . . ., n}; • j ∈ X J i iff x i does not occur negatively in clause j, for all j ∈ {1, . . ., m} and all i ∈ {1, . . ., n}.Note that every element j in J corresponds to clause j in ϕ and is labeled with all negated literals from the clause, that is, a successor of the root in n i=1 I i maps to j iff the corresponding assignment makes the clause false.The concepts C 1 , . . ., C n , D thus satisfy the equivalence "1 ⇔ 2" above.For the equivalence "2 ⇔ 3", we use Theorem 3 (note that it applies to the LCS since the constructed ABoxes are essentially EL concepts).For "2 ⇒ 3", note that • ∃s.D satisfies Condition 1 of Theorem 3, since it is a conjunct in every C i ; • ∃s.D satisfies Condition 2 of Theorem 3: first note that every s-successor in the product Proof.Assume K = (∅, A), a 1 , . . ., a n ∈ ind(A), and a signature Σ are given.We show that the ELI(Σ)-MSC of a 1 , . . ., a n w.r.t.K exists iff there is no infinite path d 0 , r 0 , d 1 , r 1 , d 2 , . . . in I = n i=1 U K satisfying ( †) d 0 = (a 1 , . . ., a n ) and for all i ≥ 0: sig(r i ) ⊆ Σ and 1. if r i = r − i+1 , then (I, d i+2 ) ELI,Σ (I, d i ); 2. there is no e = d i+1 such that (d i , e) ∈ r I i , (I, d i+1 ) ELI,Σ (I, e), and (I, e) ELI,Σ (I, d i+1 ).To prove this characterization, recall that by Theorem 4 the ELI(Σ)-MCS of a 1 , . . 
., a n with respect to K exists iff there exists k ≥ 0 such that for Assume first that there is an infinite path satisfying ( †).Then clearly the path cannot be ELI(Σ)-simulated by (U ∅,C k , ρ C k ) for any k because in any U ∅,C k the length of such paths starting at ρ C k does not exceed k.Conversely, assume there are no infinite paths satisfying ( †).Then let k be the length of the longest path satisfying ( †).It is readily shown that (S k ) holds, as required.As the universal model U K can be constructed in exponential time and the existence of infinite paths in n i=1 U K satisfying ( †) can also be checked in exponential time, the existence of the ELI(Σ)-MSC can be decided in EXPTIME.J For the lower bound, we first prove lower bounds for the ELI-product simulation problem for the case of tree-shaped interpretations. Theorem 18 The ELI-product simulation problem on treeshaped interpretations is PSPACE-hard. We reduce from a tiling problem where the input is a tiling system (T, H, V ), an initial tiling θ = t 1 , . . ., t n with tiles from T , and a final tile t F ∈ T .The goal is to tile a finite rectangle of size n × m, m ≥ 1 arbitrary, such that the first row is tiled with θ and t F occurs in the tiling.Formally, a solution to a tiling instance (T, H, V, θ, t F ), θ of length n, is a mapping τ : {1, . . ., m} × {1, . . ., n} → T , for some m ≥ 2, such that the following conditions are satisfied: Let a tiling instance (T, H, V, θ, t F ) be given.We construct tree interpretations I 1 , . . ., I 3n , M such that for suitably chosen d 0 and e 0 , ( 3n i=1 I i , d 0 ) ELI (M, e 0 ) iff there is no solution. We use the following signature: 1. a single role name r; 2. concept names T i t,j , t ∈ T , i ∈ {1, . . ., n}, j ∈ {1, 2} to express that position i is tiled with t; 3. concept names M 1 , . . ., M 5 representing different 'phases' we go through when following a path through the product; 4. concept names M ij with 1 ≤ i, j ≤ 5 and j ∈ {i − 1, i + 1}, representing transitions between these phases. The interpretations I i , 1 ≤ i ≤ n are defined as follows: • I i is a tree of depth two that branches only at the root d i 0 ; • for all τ ∈ T H , d i 0 has an r − -successor e i τ which has an r-successor d i τ ; , M 5 and with T i t, for every i ∈ {1, . . ., n}, t ∈ T , and ∈ {1, 2}; • each d i τ is labeled with M 1 , M 2 , with every concept name from S i τ,1 and with every concept name T i t,2 , i ∈ {1, . . ., n} and t ∈ T ; • each e i τ is labeled with M 12 , M 21 , M 23 , and M 32 ; • each d i jk is labeled with M jk .The interpretations I i , n + 1 ≤ i ≤ 2n are defined as follows: • I i is a tree of depth two that branches only at the root d i 0 ; • for all τ 1 , τ 2 ∈ T H , d i 0 has an r − -successor e i τ1,τ2 which has an r-successor d i τ1,τ2 ; • d i 0 is labeled with M 1 , M 5 and with T i t, for every i ∈ {1, . . ., n}, t ∈ T , and ∈ {1, 2}; • each d i τ1,τ2 is labeled with M 2 , M 3 , M 4 , with every concept name from S i τ1,1 and from S i τ2,2 ; • each e i τ1,τ2 is labeled with all concept names M jk .The interpretations I i , 2n + 1 ≤ i ≤ 3n are defined as follows: • I i is a tree of depth two that branches only at the root d i 0 ; • for all τ ∈ T H , d i 0 has an r − -successor e i τ which has an r-successor d i τ ; 3 and with T i t, for every i ∈ {1, . . ., n}, t ∈ T , and ∈ {1, 2}; • each d i τ is labeled with M 4 , M 5 , with every concept name from S i τ,2 and with every concept name T i t,1 , i ∈ {1, . . ., n} and t ∈ T . 
• each e i τ is labeled with M 34 , M 43 , M 45 , and M 54 ; • each d i jk is labeled with M jk . We are mainly interested in paths through 3n i=1 I i that are marked with the following pattern: We give an informal description of how the mentioned paths are related to rectangle tilings.Note first that, if an element of 3n i=1 I i satisfies some M i , then this has implications regarding its components.For instance, if an element (d 1 , . . ., d 3n ) satisfies M 3 , then d 1 , . . ., d n are all roots of their respective interpretations and so are d 2n+1 , . . ., d 3n , while d n+1 , . . ., d 2n are leaves.We sketch how to obtain a path through 3n i=1 I i that follows pattern ( * ) and represents any rectangle tiling.Let θ 1 , θ 2 , . . .be an enumeration of the rows of some the tiling.• Then proceed via an element labeled M 12 to an element labeled M 2 that represents θ 2 in the T i t,2 and θ 1 in the T i t,1 .The choice is in components 2n i=n+1 I i and we remain stationary in n i=1 I i and in 3n i=2n+11 I i . • In • Next proceed to the roots in n i=1 I i , remaining stationary in 3n i=n+1 I i (label M 3 , via M 23 ).We still represent θ 1 and θ 2 as before.As explained later, this transition serves to verify the vertical matching condition. • Next proceed to leaves in 3n i=2n+1 I i , remaining stationary in 2n i=1 I i (label M 4 , via M 34 ).Once more, θ 1 and θ 2 are represented as before.This transition serves no purpose as we move 'upwards' (towards higher indexes) in the M 1 , . . ., M 5 sequence.When moving downwards, this transition checks the vertical matching condition while the transition in the previous item serves no purpose. • Then proceed to the roots in 2n i=n+1 I i , remaining stationary in all other components (label M 5 , via M 45 ); this preserves the representation of θ 2 via T i t,2 , but 'forgets' the representation of θ 1 via T i t,1 .• Now do everything backwards, from M 5 towards M 1 ; first proceed via an element labeled M 54 to an element labeled M 4 that represents θ 3 in the T i t,1 and θ 2 in the T i t,2 .The choice is in components 2n i=n+1 I i and we remain stationary in all other components; then move to lavel M 3 via M 43 , and so on. • After reaching M 1 , proceed again in the forward direction, representing θ 4 , and so forth.Of course, there are paths through the product that do not follow this ideal pattern, for different reasons.For instance, the desired sequence of the M i is not followed, some element does not correspond to a valid row in the tiling, or the vertical matching condition is not met.These undesired paths are captured by 'traps' in the interpretation M that we construct next. We assemble M by starting with a path of length nine, connected by alternating between r − and r: e 1 r − e 12 r e 2 r − e 23 r e 3 r − e 34 r e 4 r − e 45 r e 5 such that each e i is labeled with M i and with all concept names T i t, , 1 ≤ i ≤ n, ∈ {1, 2}, and t ∈ T \ {t F }; the missing t F means that any 'proper' path reaching t F will result in non-simulation.Also, each e ij is labeled with M ij and M ji . We now add traps to make sure that undesired paths are simulated by M. We start with the case that the desired sequence of the M i is not followed: 1.To every e ij , we attach an r-successor that is labeled with M k for every k / ∈ {i, j} and with all concept names T i t, , and that has an r − -successor which has an r-successor that makes true all concept names (including all T i t F , ), acting as a well of positivity.2. 
To every e i , we attach an r − -successor that is labeled with M jk whenever jk / ∈ {ii − 1, ii + 1}, and that has an rsuccessor which is a well of positivity. Next, we add traps that address defects which concern a single row of the tiling: 3. To each e ij , we attach an r-successor for each k ∈ {1, . . ., n} and ∈ {1, 2}.No concept name T k t, is true there, but all concept names T j t,m with (j, m) = (k, ) and t ∈ T , and of course there is a well below it.This has two effects: (a) it enforces synchronization of the tiling of each ith column accross the n i=1 I i resp., and with T j t, for all j = i, ∈ {1, 2}, and t ∈ T ; 5. at e 34 , we attach a trap for each (t 2 , t 1 ) / ∈ V and each i ∈ {1, . . ., n}, labeled with M 4 , with T i t2,2 and T i t1,1 , and with T j t, for all j = i, ∈ {1, 2}, and t ∈ T .The initial tiling θ = t 1 , . . ., t n gives rise to a sequence of triples τ 1 , . . ., τ n in the obvious way.We are going to use ). as the starting point for the simulation. Proof."if".Assume that there is no solution for (T, H, V, θ, t F ).We prove the existence of a simulation from ( Let us first introduce some notation.We call a tuple t 1 , . . ., t n ∈ T possible if there is a mapping τ with τ (i, 1) = t 1 , . . ., τ (i, n) = t n , for some i and which satisfies Condition 1-3 of a solution (but does not necessarily mention t F ).As there is no solution for (T, H, V, θ, t F ), no tuple that is possible mentions t F .Now, we say that 1 represent a possible row, and the T i t,2 represent some row satisfying the horizontal tiling condition; • if k = 3, then the T i t, represent possible rows, for ∈ {1, 2}; • if k = 4, then the T i t,2 represent a possible row, and the T i t,1 represent some row satisfying the horizontal tiling condition; • if k = 5, then the T i t,2 represent a possible row. We claim that there is a simulation S from ( 3n i=1 I i , d 0 ) to (M, e 1 ) which relates all k-proper elements in 3n i=1 I i with e k .The statement then follows, because the initial tuple d 0 is 1-proper by construction. We show the arguments only for k = 2 because the other cases are similar.Thus, take any d that is 2-proper, and assume (d, e 2 ) ∈ S. We show how to continue the simulation from (d, e 2 ).To this end, let d be an r − -successor of d.We distinguish several cases that can arise by the construction of the I i : • if d does not satisfy one of M 21 or M 23 , then d is simulated by the trap of type 2 at e 2 . • if d satisfies M 21 , then we add (d , e 12 ) to the simulation.Now, let d be any r-successor of d .Since d satisfies M 21 , we have to be in d i 21 for all i ∈ {2n + 1, . . ., 3n} and thus d satisfies one of M 1 , M 2 , M 3 or no M i at all.We again distinguish cases: if d satisfies M 3 or no M i at all, then d is simulated by a trap of type 1 at e 12 ; if d satisfies M 2 , then, by construction of the I i , d is actually d and it is simulated by e 2 ; if d satisfies M 1 , the construction of the I i implies that the rows represented by T i t,1 , at d are the same as these rows at represented by T i t,1 at d. Since the latter is possible by assumption, so is the former.Thus, d is 1-proper and we know that (d , e 1 ) ∈ S. • if d satisfies M 23 , then we add (d , e 23 ) to the simulation and continue as in the previous case.More precisely, let d be any r-successor of d .Since d satisfies M 23 , we have to be in d i 23 for all i ∈ {2n + 1, . . 
., 3n} and thus d satisfies one of M 1 , M 2 , M 3 or no M i at all.We again distinguish cases: if d satisfies M 1 or no M i at all, then d is simulated by a trap of type 1 at e 23 ; if d satisfies M 2 , then, by construction of the I i , d is actually d and it is simulated by e 2 ; if d satisfies M 3 , the construction of the I i implies that the rows represented by T i t, , ∈ {1, 2} at d are the same as these rows at represented by T i t, , ∈ {1, 2} at d. Let t 1 , . . .t n and t 1 , . . ., t n be the rows represented by T i t,1 and T i t,2 , respectively.If (t j , t j ) / ∈ V , for some j, then d is simulated by a trap of type 4 at e 23 .Otherwise, the row represented by the T i t,2 at d is a valid successor of the row represented by the T i t,1 at d. Overall, d is 3-proper and we know that (d , e 3 ) ∈ S. "only if".Assume that there is a solution τ for (T, H, V, θ, t F ), and let t F occur in the last row.Let d 0 , r − , d 0 , r, d 1 , . . ., d n be the path which follows the pattern ( * ) that is contained in the product and which reflects the solution τ .(We can obtain this path by letting τ guide the selection of successors as described above).By construction, the path satisfies the following properties, for all i ≥ 0: • if d i satisfies M 2 , then the T i t, represent rows θ , for ∈ {1, 2}, respectively, and and θ 1 is represented by the T i t,1 at d i−1 , and and θ 2 is represented by the T i t,2 at d i−1 ; • if d i satisfies M 3 , then the T i t,1 and the T i t,2 represent rows of τ , and in fact the same rows as the T i t,1 and T i t,2 at d i−1 ; • if d i satisfies M 4 , then the T i t, represent rows θ , for ∈ {1, 2}, respectively, and: if d i−1 satisfies M 3 , then θ 1 and θ 2 are also represented by the T i t,1 and In order to show that there is no simulation, we show that: Note that this is a contradiction since d n satisfies T i t F , for some i, , but none of the e i does. We argue inductively.The induction base is given by the fact that d 0 has to be simulated by e 0 in the lemma.In the induction step, we suppose that d i satisfies some M k and show that d i+1 can only be simulated by e k−1 or e k+1 , respectively, depending on whether d i+1 satisfies M k−1 or M k+1 . We show how to argue for k = 2, assuming that we are in the downward phase of the construction of the path, that is, d i+1 will be labeled with M 3 .Based on the invariants given above, it can be verified that d i and d i+1 cannot be simulated by any trap, and thus have to be simulated by e 23 and e 3 , respectively.J To show PSPACE hardness of LCS and MSC verification and existence we first observe that the tiling problem used in the proof of Theorem 18 is still PSPACE hard if one only considers tiling instances (T, H, V, θ, t F ) such that the initial tiling θ only occurs in the first row for any mapping τ : {1, . . ., m} × {1, . . ., n} → T , m ≥ 2, such that the first three conditions for a solution are satisfied: If this is the case then the pair ( 3n i=1 I i , d 0 ), (M, e 1 ) constructed above has the following property which we require in the reduction to LCS and MSC verification and existence.Let I and J be interpretations and d ∈ ∆ I .An ELI simulation S from I to J is called d-injective if there exists exactly one e ∈ ∆ J with (d, e) ∈ S. We write (I, d) d-inj ELI (J , e) if there exists a d-injective ELI simulation S from I to J that contains (d, e).We say that a pair (I, d), (J , e) is oblivious to d-injectivity if (I, d) d-inj ELI (J , e) iff (I, d) ELI (J , e).It is easy to show the following. 
Lemma 9 The pair ( 3n i=1 I i , d 0 ), (M, e 1 ) is oblivious to d-injectivity if the input tiling instance (T, H, V, θ 1 , t F ) is such that the initial tiling θ 1 can only occur in the first row for any mapping τ satisfying the first three conditions of solutions.Now let C 1 , . . ., C n and D be ELI concepts and assume that n ≥ 2. Consider the concepts D 1 , D 2 constructed in Example 3 and let where we assume that the signature of Let v be a fresh role name and E i = ∃v.C i ∃v.D for i = 1, . . ., n. PSPACE-hardness of LCS and MSC verification and existence now follow directly from the following reduction.Lemma 10 Assume that n i=1 (U Ci , ρ Ci ), (U D , ρ D ) is oblivious to (ρ C1 , . . ., ρ Cn )-injectivity.Then the following conditions are equivalent: 1. and the chain constructed in Example 3 shows that there is no ELI simulation between as the projections onto components are ELI simulations.By taking the composition of ELI simulations we obtain Condition 1 follows by construction.J Proofs for Section 5 Theorem 8 For L ∈ {EL, ELI}, the complement of L concept separability can be reduced in polynomial time to L-MSC verification and existence.This also holds for the full signature. Proof.Let L ∈ {EL, ELI}.Similarly to the proof of Theorem 2, one can show that it suffices to provide a reduction of the following problem: given an L TBox T , an ABox A with assertions A 1 (a 1 ), . . ., A n (a n ), B(b), where A 1 , . . ., A n , B are concept names, and a signature Σ containing B, is it the case that where K = (T , A)? We start with the reduction for EL. Assume an EL TBox T and an ABox A with assertions A 1 (a 1 ), . . ., A n (a n ), B(b), where A 1 , . . ., A n , B are concept names, are given.We may assume that n ≥ 2 and all a i , i = 1, . . ., n, and b are distinct.Define the relativisation C |E of a concept C to a concept name E inductively as follows: Theorem 11 For L ∈ {EL, ELI}, L-MSC verification can be reduced in polynomial time to the complement of L concept separability.This also holds for the full signature. Proof.Let L ∈ {EL, ELI}.Let K = (T , A) be an L knowledge base, a 1 , . . ., a n ∈ ind(A) individuals, Σ a signature, and C an L(Σ) concept.We construct a new ABox A as follows: • start with A extended with a disjoint copy of A where every individual a ∈ ind(A) is replaced with a ; • take a fresh role name s, and let A ij , with i, j ∈ {1, . . ., n}, be (disjoint) copies of A C with roots ρ ij .Then add the ABoxes B i , for every i ∈ {1, . . ., n}, A C (with root ρ C ), and B (also with root ρ C ) defined as follows: Intuitively, B adds an s-chain of length n to every a i in which every element satisfies C, and B adds an s-chain to the copies of the individuals a i .Let K = (T , A ) and Σ = Σ ∪ {s}.Moreover, let U denote the interpretation that is obtained by taking the union of n i=1 (U K , a i ) and U T ,B1 (the index is not important), identifying the root a 1 of U T ,B1 with the root (a 1 , . . ., a n ) of the product.Let ρ denote the new root of U. Note that we have: The second simulation exists since U is a sub-structure of the product.The first simulation exists because and because for elements in the product reachable via some s-successor of (a 1 , . . ., a n ), any projection is an L(Σ )simulation to U T ,B1 . Since s is fresh, we have that The former is just Condition 2 of Theorem 3 is satisfied.Moreover, the latter implies that (U T ,Aij , ρ ij ) L,Σ (U K , a i ), for all i ∈ {1, . . 
., n}.Thus, we also have K |= C(a i ), for all i and hence also Condition 1 of Theorem 3 holds. For "only if", suppose that C satisfies Conditions 1 and 2 of Theorem 3. The former implies that K |= C(a i ) for all i ∈ {1, . . ., n}, and thus (U T ,A C , ρ C ) L,Σ (U K , a i ), for all i. The Claim establishes correctness of the reduction, so it remains to note that the construction of A' can be implemented in polynomial time. J Proofs for Section 6 To show Theorem 12, we first observe the following easily proved relationship between ELI simulations between (I 1 , d) ↓ELI sf and (I 2 , e) and preservation of ELI sf concepts from (I 1 , d) to (I 2 , e). Lemma 11 Let I 1 , I 2 have finite outdegree, and let Σ be a signature.The following conditions are equivalent: We also state and prove the characterization for MSC verification in ELI sf . Theorem 19 Let K = (T , A) be an ELI KB, a 1 , . . ., a n ∈ ind(A), and Σ a signature.An ELI sf (Σ) concept C is the ELI sf (Σ)-MSC of a 1 , . . ., a n with respect to K if, and only if, the following conditions hold: Proof. The proof is similar to the proof of Theorem 4. Proof."2 ⇒ 1" is trivial."3 ⇒ 2" is an immediate consequence of Theorem 19.For "1 ⇒ 3", let the L(Σ)-MSC D be of depth k.It follows from Theorem 19 that The lower bounds hold in the full signature case and with only one example. We show hardness for LSC verification and existence at the same time, by reducing from concept subsumption relative to general ELI TBoxes (Baader, Brandt, and Lutz 2008).Hardness for MSC verification and existence then follows from Theorem 1.Let T , A, B be an input to the subsumption problem.We define a TBox T by taking fresh role names r, s and fresh concept names E, F and setting for C := ∃r.This establishes the claimed lower bounds. Establishing the upper bounds requires more work.We start with Lemma 5. Lemma 5 Let N be the outdegree of , we have: , and let S be a witnessing injective simulation.Let h be the homomorphism from U ×N T ,D to U T ,D which maps every element to its "original".It should be clear the relation S defined by , it is also a sub-concept of C k where k is the role depth of D, and thus By Theorem 12, the MSC exists.Conversely, suppose the ELI sf (Σ)-MSC exists, and thus, there is a k ≥ 0 with Take D = C k and let S be the witnessing simulation.It is crucial to observe that, by the definition of ELI sf unfolding we have the following property: ( * ) for all (d, e), (d , e ) ∈ S: if d is a successor of d in the tree Π n i=1 (U K , a i )) ↓ELI sf then e is an successor of e in the tree U T ,D , ρ D . Intuitively, the simulation always goes "downwards" in the right tree.Based on this insight, we construct an injective simulation S to U ×N T ,D , ρ D inductively.During the construction, we maintain the invariant that (d, e) ∈ S implies (d, h(e)) ∈ S, where h is the homomorphism from U ×N T ,D to U T ,D which maps every element to its "original". • Start with S = {(a 1 , . . ., a n ), ρ D }; • For the inductive step, do the following for every (d, e) ∈ S : let d 1 , . . ., d n be the all Σ-successors of d in ) ∈ S, there are corresponding Σ-successors e 1 , . . ., e n in U T ,D such that (d, e i ) ∈ S for all i.Let e 1 , . . ., e n be pairwise distinct copies of these nodes in U ×N T ,D , and add (d i , e i ) ∈ S , for all i. It can be verified that the invariant is preserved and that S is an injective simulation because of ( * ).J We give now the automata-based approach to deciding the criterion in Lemma 5. 
We start with providing the necessary preliminaries.An n-ary tree is the set T = {1, . . ., n} * .For a node ui ∈ T , we identify ui•−1 with u.For an alphabet Θ, a Θ-labeled tree is a pair (T, L) with T a tree and L : T → Θ a node labeling function.We also recall the notion of nondeterministic parity tree automata (NTA).An NTA over Nary trees is a tuple A = (Q, Θ, q 0 , ∆, Ω) where Q is a set of states, Θ is the input alphabet, q 0 ∈ Q is the initial state, ∆ ⊆ Q × Θ × Q N is the transition relation, and Ω : Q → N is the priority function.The semantics of NTAs is defined as usual via runs.A run of an NTA A = (Q, Θ, q 0 , ∆, Ω) over an N -ary input (T, L) is a Q-labeled tree (T, r) such that: • r(ε) = q 0 , and • for all w ∈ T , (r(w), L(w), r(w1), . . ., r(wN )) ∈ ∆. Let γ = i 0 i 1 • • • be an infinite path in (T, r) and denote, for all j ≥ 0, with q j the state such that r(i j ) = (x, q j ).The path γ is accepting if the largest number m such that Ω(q j ) = m for infinitely many j is even.A run (T, r) is accepting, if all infinite paths in T r are accepting.The language accepted by A, denoted L(A), is the set of all trees (T, L) for which there is an accepting run. A Θ-labeled tree (T, L) represents the interpretation I L = (T, • I L ) given by for every concept name A ∈ sub(T ) and role name r that occurs in T .Note that the interpretation I L is not necessarily connected; however, we usually identify I L with its sub-interpretation induced by all elements reachable from the root.It should be clear that conversely, for every treeshaped interpretation I of outdegree ≤ n, there is an n-ary labeled tree (T, L) such that I and I L are ismorphic. We also use NTA over the alphabet Θ 2 in which case an input tree (T, L) is treated as two trees (T, L 1 ), (T, L 2 ) and thus encodes two interpretations I L1 and I L2 .Finally, we treat I L as a concept if it is finite and has no multiedges. Lemma 12 Let N be the outdegree of Π n i=1 U K .1.There is an NTA A such that, for all N 2 -ary Θ-labeled trees (T, L), we have: 2. There is an NTA B 0 over N 2 -ary Θ 2 -labeled trees such that for every (T, L) ∈ L(B 0 ), there is a subconcept D of (Π n i=1 (U K , a i )) ↓ELI sf |Σ such that: (a) I L1 is the concept D, and (b) Moreover, A and B 0 can be constructed in time exponential in |K|. In order to prove this lemma, we need a concrete definition of (Π n i=1 (U K , a i )) ↓ELI sf .Let tp(T ) denote the set of all types for T , and rol(T ) be the set of roles that occur in T .For each r ∈ rol(T ), we define a relation → r on the set by taking (x 1 , . . ., x n , x) → r (y 1 , . . ., y n , y) iff • y = r and x = r − , and • for every i ∈ {1, . . ., n}, one of the following is satisfied: (i) x i , y i ∈ ind(A) and r(x i , y i ) ∈ A (ii) x i ∈ ind(A), y i ∈ tp(T ), and a i T ,A r y i ; (iii) x i , y i ∈ tp(T ) and x i T r y i ; Some element (x 1 , . . ., x n , x) ∈ U satisfies a concept name A if for every i ∈ {1, . . ., n}, either x i ∈ tp(T ) and A ∈ x i , or x i ∈ ind(A) and U K |= A(x i ).A path is a sequence u 0 r 0 u 1 r 1 • • • r n−1 u n such that u 0 = (a 1 , . . ., a n , ε), u i−1 → ri−1 u i , for all i ∈ {1, . . ., n}.We denote with PATHS the set of all paths, and with tail(p) the last element in the sequence p.It can be verified that the interpretation (U, (a 1 , . . 
., a n , ε)) with U defined below is isomorphic to (Π n i=1 (U K , a i )) ↓ELI sf : Construction of A Informally, the automaton A simulates the definition of U by keeping in its state only the tail of the current path.More formally, the set Q of states is the smallest set that contains q and (a 1 , . . ., a n , ε) and is closed under the relations → r defined above, that is, if u ∈ Q and u → r u for some r ∈ rol(T ), then u ∈ Q.The initial state is (a 1 , . . ., a n , ε), and the transition relation contains (q , θ, q N 2 ), for all θ ∈ Θ, and ((x 1 , . . ., x n , x), θ, q 1 , . . ., q N 2 ) whenever: • if (x 1 , . . ., x n , x) satisfies A then A ∈ θ, for all A ∈ Σ; • if x = ε, then x ∈ θ; • each (y 1 , . . ., y n , r) such that (x 1 , . . ., x n , x) → r (y 1 , . . ., y n , r) for some r ∈ rol(T ) occurs precisely once in q 1 , . . ., q N 2 ; all other q i are q . It is routine to verify that A is as required. Construction of B 0 The automaton B 0 is the intersection of three automata A 0 , A 1 , A 2 .Automaton A 0 verifies Condition (a) from the Lemma by ensuring that I L,1 is a Σ-concept, that is, finite and without multiedges, and in fact, a subconcept of Π n i=1 (U K , a) ↓ELI sf |Σ .Realizing this condition as an NTA is relatively straightforward; details are thus omitted. The automata A 1 , A 2 together verify Condition (b) from the Lemma, assuming that I L1 is some concept C. The first, A 1 , ensures that for all n ∈ ∆ I L 1 with L 1 (n) = ∅ and all D ∈ sub(T ), we have (2) r ∈ L 2 (n) iff r ∈ L 1 (n) (3) Thus, on the elements in I L1 , the interpretation I L2 is the universal model of I L1 and T .Based on this, A 2 just generates (in L 2 ) the trees induced in the universal model below the elements in ∆ I L 1 by simulating T r , which is again standard and omitted. It is rather tedious to specify the automaton A 1 ensuring (2) directly as an NTA.Instead, we specify A 1 as a twoway alternating tree automata (TWAPA) relying on the fact that every TWAPA can be transformed into an equivalent NTA under an exponential blowup (Vardi 1998). Two-way Alternating Tree Automata A two-way alternating parity tree automaton over k-ary trees (TWAPA) is a tuple A = (Q, Θ, q 0 , δ, Ω) where Q is a finite set of states, Θ is the input alphabet, q 0 ∈ Q is the initial state, δ is a transition function, and Ω : Q → N is a priority function (Vardi 1998).The transition function δ maps every state q and input letter θ ∈ Θ to a positive Boolean formula δ(q, θ) over the truth constants true and false and transition atoms of the form (i, q) ∈ [k]×Q, where [k] = {−1, 0, 1, . . ., k}.The semantics is given in terms of runs.More precisely, let (T, L) be a Θ-labeled tree and A = (Q, Θ, q 0 , δ, Ω) a TWAPA.A run of A over (T, L) is a (T × Q)-labeled tree (T r , r) such that: 1. r(ε) = (ε, q 0 ), and 2. 
for all y ∈ T r with r(y) = (x, q), there is a subset S ⊆ [k]×Q such that S |= δ(q, L(x)) and for every (i, q ) ∈ S, there is some successor y of y in T r with r(y) = (x•i, q ).Let γ = i 0 i 1 • • • be an infinite path in T r and denote, for all j ≥ 0, with q j the state such that r(i j ) = (x, q j ).The path γ is accepting if the largest number m such that Ω(q j ) = m for infinitely many j is even.A run (T r , r) is accepting, if all infinite paths in T r are accepting.A accepts a tree if A has an accepting run over it.Before we can give the TWAPA, we need some preliminary notions, in particular a syntactic characterization of whether T , A |= C(a), which can be easily be implemented in a TWAPA.Similar characterizations have been used before, e.g., in (Jung et al. 2017). Derivation Trees Fix an ELI knowledge base K = (T , A), a 0 ∈ ind(A), and C ∈ sub(T ).A derivation tree for an assertion C 0 (a 0 ) in A w.r.t.T is a finite ind(A)×sub(T )labeled tree (T, V ) such that: • V (ε) = (a 0 , C 0 ); a as an r-successor of a to I.This finishes the construction of I, and it can be verified that it indeed satisfies the requirements of the claim.This finishes the proof of the claim.Now suppose T , A |= C 0 (a 0 ).By the Claim, we have C 0 (a 0 ) ∈ A * .Exploiting that the two rules to construct A 0 , A 1 , . . .are in one-to-one correspondence with Conditions (i) and (ii) from the definition of derivation trees, we can inductively construct a derivation tree for C 0 (a 0 ) in A w.r.t.T .J We are now in the position to construct the automaton A 1 .It ensures that when a Θ 2 -labeled tree (T, L) is accepted, then for all n ∈ ∆ I L 1 with L 1 (n) = ∅, Condition (3) is satisfied, and for all concepts D ∈ sub(T ): ( * ) D ∈ L 2 (n) iff there is a derivation tree for D(n) in I L1 (viewed as ABox); By Lemma 13, this condition ensure that Equation (2) above is satisfied.We take A 1 = (Q, Θ, q 0 , δ, Ω) where Q = {q 0 , q 0 } ∪ {q D , q D | D ∈ sub(T )} ∪ {q r , q r , q r,D , q r,D | r ∈ rol(T ), D ∈ sub(T )} and Ω assigns zero to all states, except for states of the form q D , to which it assigns one.For Condition ( * ), we use states q D for the "⇒" part, and states q D for the "⇐" part.Intuitively, a state q D assigned to some node n is an obligation to verify the existence of a derivation tree for D(n).Conversely, q A is the obligation that there is no such derivation tree.The automaton starts with the following transitions for every θ = (θ 1 , θ 2 ): For states q D , we implement Conditions (i) and (ii) of derivation trees as transitions.Finiteness of the derivation tree is ensured by the priority assigned to these states.More For any DL L, an L TBox is a finite set of concept inclusions (CIs) C D, where C and D are L concepts.Let N I be a countably infinite set of individual names.An ABox A is a finite set of concept assertions A(a) and role assertions r(a, b) where A ∈ N C , r ∈ N R , and a, b ∈ N I .We often use r(a, b) to denote r − (b, a) if r is an inverse role.We use ind(A) to denote the set of all individual names that occur in A. An L knowledge base (KB) (T , A) consists of an L TBox T and an ABox A. The semantics of DLs is defined in terms of interpretations I = (∆ I , • I ), where ∆ I is a non-empty set and • I maps each concept name A ∈ N C to a subset A I of ∆ I and each role name r ∈ N R to a binary relation r I on ∆ I .We refer to (Baader et al. 
2017) for details on how to extend • I to compound concepts.An interpretation I satisfies a CI C D if C I ⊆ D I , a concept assertion A(a) if a ∈ A I , and a role assertion r(a, b) if (a, b) ∈ r I .I is a model of a TBox, an ABox, or a knowledge base if it satisfies all inclusions and assertions in it.The CI C D is a consequence of the TBox T , in symbols T |= C D, if C I ⊆ D I for all models I of T .For a KB K = (T , A), a concept C, and an individual a ∈ ind(A), we write K |= C(a) if a ∈ C I for all models I of K.For a DL L, L instance checking is the problem to decide, given an L KB K = (T , A), an a ∈ ind(A), and an L concept C, whether K |= C(a). n i=1 I i , start at those d i τ that represent θ 1 by concept names T i t,1 .In 3n i=n+1 I i , we start at d n+1 0 , . . ., d 3n 0 .This point in the product is labeled M 1 . a, C) and neither C(a) ∈ A nor T |= C, one of the following holds:(i) n has successors n 1 , . . ., n k , k ≥ 1 with V (n i ) = (a, C i ), for 1 ≤ i ≤ k and T |= C 1 . . .C k C; (ii) n has a single successors n with V (n ) = (b, C ) such that r(a, b) ∈ A and T |= ∃r.C C.Lemma 13 T , A |= C 0 (a 0 ) iff there is a derivation tree for C 0 (a 0 ) in A w.r.t.T .Proof.(⇐) is clear.For (⇒), we construct a sequence of ABoxes A = A 0 , A 1 , . . ., by obtaining A i+1 from A i by applying one of the following two rules:1.if C 1 (a), . . ., C k (a) ∈ A and T |= C 1 . . .C k C for some C ∈ sub(T ), then add C(a) to A i ; 2. if r(a, b), C (b) ∈ A i and T |= ∃r.CC for some C ∈ sub(T ), then add C(a).Note that the sequence is finite, and denote with A * the final ABox.Claim.There is a model I of A * and T such that a ∈ C I implies C(a) ∈ A * , for all a ∈ ind(A) and C ∈ sub(T ).Proof of the Claim.Start with an interpretation I 0 defined by:∆ I0 = ind(A) A I0 = {a | A(a) ∈ A * } r I0 = {(a, b) | r(a, b) ∈ A} Denote with A * a the set of all C(a) in A * .Now extend I 0 as follows: For every a ∈ ind(A) and every C ∃r.C ∈ T such that T , A * a |= C(a), add the r-successor of a satisfying C in U T ,A * Thus, the ELI-LCS of D 1 , D 2 does not exist by Theorem 4. The next theorem summarizes our results regarding ELI.Theorem 7 In ELI, LCS and MSC existence and verifica- tion w.r.t. the empty TBox are PSPACE-hard and in EXP-TIME.The lower bounds apply when the signature is full. Theorem 14 In EL, single example MSC existence and verification are in PTIME.Proof.(sketch)This is a consequence of the proof of Theorem 13.Applying the constructions from that proof to an EL TBox instead of an ELI TBox has two effects: first, all involved automata can be constructed in polynomial time and are of polynomial size; and second Theorem 12 implies that if the ELI sf -MSC exists, it is actually an EL concept.JWe next show that the ELI case is dramatically different.In particular, the complexity is much higher and admitting nonfull signatures causes an exponential jump in complexity. Theorem 15 In ELI, single example MSC existence and verification are 2-EXPTIME-complete in general and EXPTIME-complete when the signature is full.Proof.(sketch) In the full signature case, the lower bound is by reduction from the subsumption of concept names w.r.t.ELI TBoxes.For unrestricted signatures, we reduce the complement of single example ELI concept separability, shown 2-EXPTIME-hard in (Gutiérrez-Basulto, Jung, and Sabellek 2018), similar to the proof of Theorem 8. Theorem 1, it suffices to give the CONP upper bound for EL(Σ)-MSC verification with empty TBox.Assume an ABox A, a 1 , . . 
., a n ∈ ind(A), a signature Σ, and an EL(Σ) concept C are given.It can be checked in PTIME where A |= C(a i ) for i = 1, . . ., n.Thus it suffices to show that n i=1 (A, a i ) EL,Σ A C , ρ C is in NP, where we regard A, n i=1 A, and A C as interpretations in the obvious way.But this follows directly since • by Lemma 6, if involves D is trivially simulated by D; second note that the the ssuccessor corresponding to Π i I is simulated by D by Point 2 above.Conversely, that is, from "3 ⇒ 2", Point 2 above follows from Condition 2 of Theorem 3. Lemma 7 In ELI, MSC existence w.r.t the empty TBox is decidable in EXPTIME. JTheorem 7 In ELI, LCS and MSC existence and verification w.r.t. the empty TBox are PSPACE-hard and in EXP-TIME.The lower bounds apply when the signature is full.The EXPTIME upper bound for MSC existence in ELI is established in the following. 23 , we attach a trap for each (t 1 , t 2 ) / ∈ V and each i ∈ {1, . . ., n}, labeled with M 2 , with T i t1,1 and T i t2,2 and θ 2 is represented by the T i t,2 at d i−1 ; • if d i satisfies M 5 , then the T i t,2 represent a row of τ , and the T i t,2 represent the same row at d i−1 ; • d i satisfies M jk such that d i satisfies M j and d i+1 satisfies M k or d i+1 satisfies M j and d i satisfies M k . The proof if similar to the proof of Theorem 3. By Lemmas 3 and 4, Condition 1 is equivalent to Condition 1 of the definition of MSCs.For Condition 2 observe that by Lemmas 3, 4, and 11, Condition 2 is equivalent to Condition 2 of the definition of MSCs.J Theorem 12 (ELI sf -MSC Existence w.r.t.ELI TBoxes) Let K = (T , A) be an ELI KB, a 1 , . . ., a n ∈ ind(A), and Σ a signature.The following are equivalent, for C k = (Π n i=1 (U K , a i )) ↓ELI sf ,k C k is the ELI sf (Σ)-MSC of a 1 , . . ., a n w.r.t.K, for a k ≥ 0 |Σ:1. the ELI sf (Σ)-MSC of a 1 , . . ., a n w.r.t.K exists; 2.
2019-11-26T02:53:58.881Z
2020-04-03T00:00:00.000
{ "year": 2020, "sha1": "5e2059b0b82db0cbd8ad366a3df9962145989f56", "oa_license": null, "oa_url": "https://ojs.aaai.org/index.php/AAAI/article/download/5675/5531", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "b90488ecd36a4ff6bad712a4e2b4e2c405436e2c", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
225529006
pes2o/s2orc
v3-fos-license
Reconstruction of signal amplitudes in the CMS electromagnetic calorimeter in the presence of overlapping proton-proton interactions
The CMS collaboration
E-mail: cms-publication-committee-chair@cern.ch
Abstract: A template fitting technique for reconstructing the amplitude of signals produced by the lead tungstate crystals of the CMS electromagnetic calorimeter is described. This novel approach is designed to suppress the contribution to the signal of the increased number of out-of-time interactions per beam crossing following the reduction of the accelerator bunch spacing from 50 to 25 ns at the start of Run 2 of the LHC. Execution of the algorithm is sufficiently fast for it to be employed in the CMS high-level trigger. It is also used in the offline event reconstruction. Results obtained from simulations and from Run 2 collision data (2015-2018) demonstrate a substantial improvement in the energy resolution of the calorimeter over a range of energies extending from a few GeV to several tens of GeV.
Keywords: Large detector-systems performance; Pattern recognition, cluster finding, calibration and fitting methods
ArXiv ePrint: 2006.14359
Introduction
The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter, providing a magnetic field of 3.8 T. Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate (PbWO4) crystal electromagnetic calorimeter (ECAL), which is the focus of this paper, and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. Forward calorimeters extend the pseudorapidity coverage provided by the barrel and endcap detectors. Muons are detected in gas-ionization chambers embedded in the steel flux-return yoke outside the solenoid. A more detailed description of the CMS detector is given in ref. [1]. The first set of MC samples is generated with the 8.226 [7] package and its CUETP8M1 [8] tune for parton showering, hadronization, and underlying event simulation. These events are used to study the performance of the algorithm when the showering of an electromagnetic particle spreads across more than a single crystal, which is typical of most energy deposits in the ECAL. The second set of MC samples is produced by a fast stand-alone simulation, where the single-crystal amplitudes are generated by pseudo-experiments using a parametric representation of the pulse shape and the measured covariance matrix. Energy deposits typical of the PU present in Run 2 are then added to these signals. Additional pp interactions in the same or adjacent BXs are added to each simulated event sample, with an average number of 40.
The electromagnetic calorimeter readout
The electrical signal from the photodetectors is amplified and shaped using a multigain preamplifier (MGPA), which provides three simultaneous analogue outputs that are shaped to have a rise time of approximately 50 ns and fall to 10% of the peak value in 400 ns [2]. The shaped signals are sampled at the LHC bunch-crossing frequency of 40 MHz and digitized by a system of three channels of floating-point Analog-to-Digital Converters (ADCs). The channel with the gain that gives the highest nonsaturated value is selected sample-by-sample, thus providing a dynamic range from 35 MeV to 1.7 TeV in the barrel. A time frame of 10 consecutive samples is read out every 25 ns, in synchronization with the triggered LHC BX [2]. The convention used throughout this report is to number samples starting from 0. The phase of the readout is adjusted such that the time of the in-time pulse maximum value coincides with the fifth digitized sample. The first three samples are read out before the signal pulse rises significantly from the pedestal baseline (presamples). The 50 ns rise time of the signal pulse after amplification results from the convolution of the 10 ns decay time of the crystal scintillation emission and the 40 ns shaping time of the MGPA [1,2,5].
The multifit method
4.1 The Run 1 amplitude reconstruction of ECAL signals
During LHC Run 1, a weighting algorithm [5] was used to estimate the ECAL signal amplitudes, both online in the HLT [9] and in the offline reconstruction. With that algorithm the amplitude is estimated as a linear combination of the 10 samples s_i:

\hat{A} = \sum_{i=0}^{9} w_i s_i, (4.1)

where the weights w_i are calculated by minimizing the variance of \hat{A}. This algorithm was developed to provide an optimal reduction of the electronics noise and a dynamic subtraction of the pedestal, which is estimated on an event-by-event basis by the average of the presamples. The LHC Run 2 conditions placed stringent requirements on the ECAL pulse reconstruction algorithm. Several methods were investigated to mitigate the effect of the increased OOT pileup, to achieve optimal noise performance. The methods that were studied included: using a single sample at the signal pulse maximum, a deconvolution method converting the discrete time signal into the frequency domain [10], and the multifit. The first one uses minimal information from the pulse shape and, although robust against OOT pileup, results in a degradation of energy resolution for most of the energy range below ≈100 GeV. The second was the subject of a pilot study and was never fully developed. The last one is the subject of this paper.
The multifit algorithm
The multifit method uses a template fit with N_BX parameters, comprising one in-time (IT) and up to nine OOT amplitudes, up to five occurring before, and up to four after the IT pulse: N_BX ∈ [1, 10]. The fit minimizes the χ² defined as:

\chi^2 = \Big(\sum_{j} A_j \vec{p}_j - \vec{S}\Big)^{\mathrm{T}} C^{-1} \Big(\sum_{j} A_j \vec{p}_j - \vec{S}\Big), (4.2)

where the vector \vec{S} comprises the 10 readout samples s_i, after having subtracted the pedestal value, the \vec{p}_j are the pulse templates for each BX, and the A_j, which are obtained by the fit, are the signal pulse amplitudes in ten consecutive BXs, with A_5 corresponding to the IT BX. The pulse templates \vec{p}_j for each BX have the same shape, but are shifted in time by j multiples of 25 ns. The pulse templates are described by binned distributions with 15 bins of width 25 ns.
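As a concrete illustration of the Run 1 estimator of eq. (4.1), the following minimal sketch applies a weights-style linear combination to ten digitized samples. The sample values, pedestal, and weight set are invented placeholders (they merely sum to zero so that a constant pedestal cancels); they are not the CMS-optimized weights.

```python
import numpy as np

# Ten digitized samples (ADC counts) for one crystal: three presamples near the
# pedestal (~200 counts), followed by a signal pulse peaking at sample 5.
samples = np.array([200.0, 201.0, 199.0, 230.0, 305.0, 340.0, 322.0, 285.0, 255.0, 235.0])

# Placeholder weights for eq. (4.1): the presample weights are negative and all
# weights sum to zero, so a constant pedestal contributes nothing to the estimate.
weights = np.array([-1/3, -1/3, -1/3, 0.0, 0.15, 0.35, 0.25, 0.15, 0.10, 0.0])

# Run 1 style amplitude estimate: linear combination of the 10 samples.
amplitude_weights = float(weights @ samples)
print(f"weights-method amplitude: {amplitude_weights:.1f} ADC counts")
```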
An extension of five additional time samples after the 10 th sample (the last digitized one) is used to obtain an accurate description of the contribution to the signal from early OOT pulses with tails that overlap the IT pulse. The total covariance matrix C used in the χ 2 minimization of eq. (4.2) includes the correlation of the noise and the signal between the different time samples. It is defined as the quadratic sum of two contributions: where C noise is the covariance matrix associated with the electronics noise and C j pulse is the one associated with the pulse shape template. Each channel of the ECAL, i.e., a crystal with its individual readout, is assigned its own covariance matrix. Quadratic summation of the two components is justified since the variance for the pulse templates is uncorrelated with the electronic noise. In fact, the uncertainty in the shape of the signal pulses for a given channel is dominated by event-by-event fluctuations of the beam spot position along the z-axis, of order several cms [11], which affect the arrival time of particles at the front face of ECAL. The C pulse matrix is calculated as: where thes i (n) are the pedestal-subtracted sample values, s i (n) − P, scaled for each event n, such thats 5 (n) = 1. The value of P equals the average of the three unscaled presamples over many events. Both the templates and their covariance matrices are estimated from collision data and may vary with time, for reasons described in section 5.1. The electronics noise dominates the uncertainty for low-energy pulses, whereas the uncertainty in the template shape dominates for higher energies. The determination of C noise , which is calculated analogously as C pulse , but with dedicated data, is described in section 5.2. The minimization of the χ 2 in eq. (4.2) has to be robust and fast to use both in the offline CMS reconstruction and at the HLT. In particular, the latter has tight computation time constraints, especially in the EB, where the number of channels that are read out (typically 1000 and as high as 4000) for every triggered BX, poses a severe limitation on the time allowed for each minimization. Therefore, the possibility of using minimization algorithms like [12] to perform the 10×10 matrix inversion is excluded. Instead, the technique of nonnegative least squares [13] is used, with the constraint that the fitted amplitudes A j are all positive. The χ 2 minimization is performed iteratively. First, all the amplitudes are set to zero, and one nonzero amplitude at a time is added. The evaluation of the inverse matrix C −1 , which is the computationally intensive operation, is iterated until the χ 2 value in eq. (4.2) converges (∆ χ 2 < 10 −3 ) [14]. Usually the convergence is reached with fewer than 10 nonzero fitted amplitudes, so the system is, in general, over-constrained. Examples of fitted pulses in single crystals of the EB and EE are shown in figure 1 (right) and (left), respectively. They are obtained from a full detector simulation of photons with transverse momentum p T = 10 GeV. Since the only unknown quantities are the fitted amplitudes, the minimization corresponds to the solution of a system of linear equations with respect to a maximum of 10 nonnegative A j values. The implementation uses a C++ template linear algebra library, [15], which is versatile and highly optimized. 
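The constrained minimization of eq. (4.2) can also be illustrated with an off-the-shelf non-negative least-squares solver once the problem is whitened with the total covariance matrix. The sketch below uses SciPy's nnls on toy templates and a placeholder (diagonal) covariance; the actual CMS code implements the iterative minimization described above rather than calling SciPy, and the template shape and noise values here are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

N_SAMPLES, N_BX = 10, 10          # 10 digitized samples, up to 10 BX amplitudes
PEAK_SAMPLE = 5                    # in-time pulse peaks at the fifth sample

def template(shift_bx):
    """Toy normalized pulse template shifted by an integer number of BXs."""
    t = np.arange(N_SAMPLES) - (PEAK_SAMPLE + shift_bx)
    shape = np.where(t >= -2, np.exp(-0.5 * ((t + 0.5) / 2.0) ** 2), 0.0)
    return shape / shape.max() if shape.max() > 0 else shape

# Design matrix: one column per BX hypothesis (BX -5 ... +4), rows are time samples.
P = np.column_stack([template(j) for j in range(-5, 5)])

# Pedestal-subtracted samples: an in-time pulse of amplitude 120 plus an out-of-time
# pulse two BXs earlier of amplitude 40, with white noise (placeholder covariance).
rng = np.random.default_rng(1)
true_A = np.zeros(N_BX); true_A[5] = 120.0; true_A[3] = 40.0
C = np.eye(N_SAMPLES) * 2.0 ** 2                 # stand-in for the total covariance
S = P @ true_A + rng.multivariate_normal(np.zeros(N_SAMPLES), C)

# Whiten with the Cholesky factor of C, then solve min ||L^-1 (P A - S)||^2 with A >= 0.
L = np.linalg.cholesky(C)
A_fit, _ = nnls(np.linalg.solve(L, P), np.linalg.solve(L, S))
print("fitted in-time amplitude:", round(A_fit[5], 1))
```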
The time required to compute the amplitude of all the channels in one event is approximately 10 ms for typical Run 2 events where the bunch spacing was 25 ns and there is an average of 40 PU interactions per BX. The timing has been measured on an Intel Xeon E5-2650v2 processor, which was used for the benchmark tests of the CMS HLT farm at the beginning of Run 2 -5 -in 2015 [16]. The CPU time needed is about 100 times less than that which was used to perform the equivalent minimization using , and for all events is much less than the maximum time of 100 ms/event allowed for the HLT. The algorithm implementation has also been adapted for execution on GPUs for the new processor farm, which will be used for LHC Run 3, which is planned to begin in 2022. Pulse shape templates The templates for the ì p j term in eq. (4.2) are histograms with 15 bins, and represent the expected time distribution of a signal from an energy deposit in the ECAL crystals. The first 10 bins correspond to the samples that are read out in a triggered event. Bins 10-14 describe the tail of the signal pulse shape. The pulse template differs from crystal to crystal, both because of intrinsic pulse shape differences and, more importantly, because of differences in the relative time position of the pulse maximum, T max , between channels. The pulse templates have also been found to vary with time, during Run 2, as a result of crystal irradiation. Both of these effects must be corrected for, and the time variation requires regular updates to the pulse shape templates during data taking. The pulse templates are constructed in situ from collision data acquired with a zero-bias trigger, i.e., a beam activity trigger [9], and events recorded with a dedicated high-rate calibration data stream [17]. In the calibration data stream, the ten digitized samples from all single-crystal energy deposits above a predefined noise threshold are recorded, while the rest of the event is dropped to limit the trigger bandwidth. The energy deposits in these events receive contributions from both IT and OOT interactions. In a fraction of the LHC fills, the circulating beams are configured so that a few of the bunch collisions are isolated, i.e., occur between bunches that are not surrounded by other bunches. In these collisions, the nominal single-bunch intensity is achieved without OOT pileup, so a special trigger requirement to record them was developed. This allows a clean measurement of the templates of IT pulses only. An amplitude-weighted average pulse template is obtained, and only hits with amplitudes larger than approximately five times the root-mean-square spread of the noise are used. During 2017, the pulse templates were recalibrated about 30 times. The LHC implemented collisions with isolated bunches only when the LHC was not completely filled with bunches, during the intensity ramp up, typically at the beginning of the yearly data taking and after each technical stop, i.e., a scheduled period of several days without collisions exploited by the LHC for accelerator developments. For all other updates, normal bunch collisions were used. For these, a minimum amplitude threshold was imposed at the level of 1 GeV, or 5σ noise when this was greater, and the amplitude-weighted average of the templates suppressed the relative contribution of OOT PU pulses. It was verified that the pulse templates derived from isolated bunches are consistent with those obtained from nonisolated bunches. 
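The in situ template construction described above essentially normalizes each selected pulse to its fifth sample and takes an amplitude-weighted average over many hits. The sketch below shows that averaging step on toy digitized pulses; the selection threshold, pedestal, and pulse values are placeholders, and the five extrapolated tail bins are not treated.

```python
import numpy as np

def build_template(pulses, pedestal, amplitudes, threshold):
    """Amplitude-weighted average of pedestal-subtracted pulses, each
    normalized so that its fifth sample (index 5) equals 1."""
    selected = amplitudes > threshold                # keep hits well above noise
    norm = []
    for pulse in pulses[selected]:
        s = pulse - pedestal                         # pedestal subtraction
        norm.append(s / s[5])                        # scale so sample 5 equals 1
    weights = amplitudes[selected]
    return np.average(np.array(norm), axis=0, weights=weights)

# Toy data: 500 hits, 10 samples each, a common underlying shape plus noise.
rng = np.random.default_rng(0)
shape = np.array([0, 0, 0, 0.08, 0.62, 1.0, 0.85, 0.62, 0.44, 0.31])
amps = rng.uniform(5, 200, size=500)
pulses = 200.0 + np.outer(amps, shape) + rng.normal(0, 1.0, size=(500, 10))

template = build_template(pulses, pedestal=200.0, amplitudes=amps, threshold=10.0)
print(np.round(template, 3))
```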
Anomalous signals in the APDs, which have a distorted pulse shape, are rejected on the basis of the single-crystal timing and the spatial distribution of the energy deposit among neighboring crystals [17,18]. The average pulse shape measured in the digitized time window of 10 BXs is extended by five additional time samples to model the falling tail of the pulse, which is used to fit for the -6 -2020 JINST 15 P10002 contribution of early OOT pileup. This is achieved by fitting the average template with a function of the form [19]: where A represents the hit amplitude, ∆t = t − T max the time position relative to the peak, and α, β are two shape parameters. Examples of two average pulse shapes, obtained using this method, are shown in figure 2. The extrapolation of the pulse templates outside of the readout window was checked by injecting laser light into the crystals, with a shifted readout phase. The tail of the pulse, measured in this way, agrees with the extrapolated templates. The covariance matrix associated with the pulse template, C pulse , is computed using eq. (4.4), with the same sample of digitized templates used to determine the average pulse template and with the same normalization and weighting strategy. The correlation matrix of the pulse template, ρ pulse , shown in figure 3, is defined as ρ i,k pulse = C i,k pulse /(σ i pulse σ k pulse ), where σ i,k pulse is the square root of the variance of the pulse shape for the i, k bin of the template. The values of σ i pulse are in the range 5×10 −4 -1×10 −3 , the largest values relative to samples in the tail of the pulse template. The elements of the covariance matrix outside the digitization window, C pulse i,k with i > 9 or k > 9, are estimated from simulations of single-photon events with the interaction time shifted by an integer number of BXs. It was checked that this simulation reproduces well the covariance matrix for the samples inside the readout window. The C pulse matrix shows a strong correlation between the time samples within either the rising edge or the falling tail of the pulse. An anti-correlation is also observed between the time samples of the rising edge and of the falling tail that is mostly due to the spread in the particle arrival time at the ECAL surface, which reflects the spatial and temporal distribution of the LHC beam spot in CMS [20]. For the measured samples, the correlations between S 9 and S 8 , S 7 , S 6 , are all close to 1, with values in the range (0.90-0.97). For the extrapolated samples, the correlations change from bin to bin: between S 14 and S 13 , S 12 , S 11 they are 0.69, 0.56, 0.45, respectively, in the case of the barrel. Pedestals and electronic noise The pedestal mean is used in the multifit method to compute the pedestal-subtracted template amplitudes A j in eq. (4.2). A bias in its measurement would reflect almost linearly in a bias of the fitted amplitude, as discussed in section 6. The covariance matrix associated with the electronic noise enters the total covariance matrix of eq. (4.3). It is constructed as C noise = σ 2 noise ρ noise , where σ noise is the measured single-sample noise of the channels, and ρ noise is the noise correlation matrix. The C noise is calculated with eq. (4.4), where i, k are the sample indices,s i ands j are the measured sample values, normalized tos 5 , and P is the expected value in the absence of any signal, calculated, as for eq. (4.4), by averaging the three unscaled presamples over many events. 
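A commonly used parameterization for the ECAL pulse, consistent with the quantities described above (an amplitude A, a time offset ∆t = t − T_max, and two shape parameters α and β), is the alpha-beta function sketched below. The functional form and the parameter values are assumptions used only for illustration and are not necessarily the exact expression of ref. [19].

```python
import numpy as np

def alpha_beta_pulse(t, A, t_max, alpha, beta):
    """Assumed alpha-beta pulse shape: A * (1 + dt/(alpha*beta))**alpha * exp(-dt/beta),
    with dt = t - t_max; the pulse is set to zero where the power-law base is not
    positive. Used here only to illustrate extending the template tail."""
    dt = np.asarray(t, dtype=float) - t_max
    base = 1.0 + dt / (alpha * beta)
    out = np.zeros_like(dt)
    ok = base > 0
    out[ok] = A * base[ok] ** alpha * np.exp(-dt[ok] / beta)
    return out

# Evaluate on the 15 template bins (25 ns spacing), with the peak in bin 5.
t_bins = 25.0 * np.arange(15)                      # ns
pulse = alpha_beta_pulse(t_bins, A=1.0, t_max=125.0, alpha=1.2, beta=40.0)
print(np.round(pulse, 3))
```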
Each element of the noise covariance matrix is the mean over a large number of events. The noise correlation matrix is defined as: The average pedestal value and the electronic noise are measured separately for the three MGPA gains. For the highest gain value, data from empty LHC bunches [3,4] are used. These are obtained -8 -by injecting laser light into the ECAL crystals in coincidence with the bunch crossings. This gain value is used for the vast majority of the reconstructed pulses (up to 150 GeV), and is very sensitive to the electronics noise. One measurement per channel is acquired approximately every 40 minutes. For the other two MGPA gains, the pedestal mean and its fluctuations are measured from dedicated runs without LHC beams present. The time evolution of the pedestal mean in the EB during Run 2 is shown for the highest MGPA gain in figure 4 (left). A long-term, monotonic drift upwards is visible. Short term (interfill) luminosity related effects are also visible. The short-term variations are smaller when the LHC luminosity is lower. The long-term drift depends on the integrated luminosity, while the shortterm effects depend on the instantaneous luminosity, and related to variations inside the readout electronics. The behavior of the variation of the pedestal value with time is similar at any |η| of the crystal, while the magnitude of it increases with the pseudorapidity, reflecting the higher irradiation. The evolution of the electronic noise in the barrel is shown in figure 4 (right). It shows a monotonic increase with time, related to the increase of the APD dark current due to the larger radiation dose; no short-term luminosity-related effects are visible. For the barrel, where 1 ADC count 40 MeV, this translates to an energy-equivalent noise of about 65 MeV at the beginning of 2017 and 80 MeV at the end of the proton-proton running in the same year. A small decrease in the noise induced by the APD dark current is visible after long periods without irradiation, i.e., after the year-end LHC stops. For the endcaps, the single-channel noise related to the VPT signal does not evolve with time, and is approximately 2 ADC counts. Nevertheless, the energyequivalent noise increases with time and with absolute pseudorapidity |η| of the crystal because of the strong dependence of the crystal transparency loss on |η| and time, due to higher irradiation level. Consequently, the average noise at the end of 2017 in the endcaps translates to roughly 150 MeV up to |η| ≈ 2, whereas it increases to as much as 500 MeV at the limit of the CMS tracker acceptance (|η| ≈ 2.5). Thus, the relative contribution of C noise in the total covariance matrix -9 -strongly depends on |η|. For hits with amplitude larger than ≈20 ADC counts, equivalent to an energy ≈1 GeV before applying the light transparency corrections, C pulse dominates the covariance matrix for the whole ECAL. The covariance matrix for the noise used in the multifit is obtained by multiplying the time independent correlation matrix in eq. (5.2) by the time dependent squared single sample noise, σ 2 noise . The time evolution is automatically accounted for by updating the values in the conditions database [21], with the measurements obtained in situ. Correlations between samples exist because of (1) the presence of low-frequency (less than 4 MHz) noise that has been observed during CMS operation [19], and (2) the effect of the feedback resistor in the MGPA circuit [22]. 
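As a rough sketch of how the noise covariance and the correlation matrix of eq. (5.2) can be estimated from signal-free readouts, the code below builds a sample covariance over many toy pedestal events and normalizes it to a correlation matrix; the injected common low-frequency component is an invented stand-in for the correlation sources listed above.

```python
import numpy as np

rng = np.random.default_rng(7)
n_events, n_samples = 20000, 10

# Toy pedestal events: white electronics noise plus a common component shared by
# all 10 samples of an event, which induces a positive sample-to-sample correlation.
white = rng.normal(0.0, 1.0, size=(n_events, n_samples))
common = rng.normal(0.0, 0.6, size=(n_events, 1))
pedestal_events = 200.0 + white + common           # ADC counts, pedestal near 200

# Sample covariance of the noise: mean over events of products of deviations.
dev = pedestal_events - pedestal_events.mean(axis=0)
C_noise = dev.T @ dev / n_events

# Correlation matrix in the style of eq. (5.2): rho_ik = C_ik / (sigma_i * sigma_k).
sigma = np.sqrt(np.diag(C_noise))
rho_noise = C_noise / np.outer(sigma, sigma)
print("single-sample noise (ADC):", np.round(sigma.mean(), 2))
print("neighbor-sample correlation:", np.round(rho_noise[0, 1], 2))
```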
The correlation matrix of the electronic noise was measured with dedicated pedestal runs; it is very similar for all channels within either the EB or the EE, and stable with time. Consequently, it has been averaged over all the channels within a single subsystem. The matrix for the highest gain of the MGPA is shown in figure 5. The MGPA component of the noise is such that the correlation depends almost solely on the time distance between the two samples, following an exponential relationship. For ∆t > 100 ns, it flattens to a plateau corresponding to the low frequency noise. Sensitivity of the amplitude reconstruction to pulse timing and pedestal drifts The multifit amplitude reconstruction utilizes as inputs pedestal baseline values and signal pulse templates that are determined from dedicated periodic measurements. Thus, it is sensitive to their possible changes with time. Figure 6 shows the absolute amplitude bias for pulses corresponding to a 50 GeV energy deposit (E) in one crystal in the barrel, as a function of the pedestal baseline shift. The dependence for -10 - JINST 15 P10002 deposits in the endcaps is the same. A shift of ±1 ADC count produces an amplitude bias up to 0.3 ADC counts in a single crystal, corresponding, in the barrel, to an energy-equivalent shift of about 300 MeV in a 5×5 crystal matrix. Since the drift of the pedestal baseline with time can be as much as 2 ADC counts in one year of data taking, as shown in figure 4 (left), and is coherent in all crystals, the induced bias is significant, in the range ≈(0.5-1)%, even in the typical energy range of decay products of the W, Z, and Higgs bosons. Therefore, it is important to monitor and periodically correct the pedestals in the reconstruction inputs. The IT amplitude resulting from the χ 2 minimization of eq. (4.2) is also more sensitive to a shift in the position of the maximum, T max of the signal pulse, compared to that obtained from the weights method [5]. This timing shift can be caused by variations of the pulse shapes over time, both independently from crystal to crystal and coherently, as discussed in section 5.1. A difference in the pulse maximum position between the measured signal pulse and the binned template will be absorbed into the χ 2 as nonzero OOT amplitudes, A j , with j 5. To estimate the sensitivity of the reconstructed amplitude to changes in the template timing ∆T max , the amplitude of a given pulse is reconstructed several times, with increasing values of ∆T max . The observed changes in the ratio of the reconstructed amplitude to the true amplitude, A /A true , as a function of ∆T max , for single-crystal pulses of 50 GeV in the EB and EE, are shown in figure 7 (left) and (right), respectively. The difference in shape for positive and negative time shifts is related to the asymmetry of the pulse shape with respect to the maximum: spurious OOT amplitudes can be fitted more accurately using the time samples preceding the rising edge, where pedestal-only samples are expected, compared to using those on the falling tail. For positive ∆T max , the net change is positive because the effect of an increase in the IT contribution is larger than the decrease in the signal amplitude caused by the misalignment of the template. The change in reconstructed amplitude at a given ∆T max is similar for the barrel and the endcaps. 
Small differences -11 -arise mostly from the slightly different rise time of the barrel and endcap pulses and the difference in energy distributions from PU interactions in a single crystal in the two regions. For the endcaps, the residual offset of ≈0.2% for ∆T max = 0 has two sources. First, the larger occupancy of OOT pileup amplitudes per channel contributes energy coherently to all of the samples within the readout window. Second, the higher electronics noise leads to a looser amplitude constraint in the χ 2 minimization of eq. (4.2), allowing a larger amplitude to be fitted. This offset is reabsorbed in the subsequent absolute energy calibration and it does not affect the energy resolution. The effects of small channel-dependent differences between actual pulse shapes and the assumed templates are absorbed by the crystal-to-crystal energy intercalibrations. However, any changes with time in the relative position of the template will affect the reconstructed amplitudes, worsening the energy resolution. This implies the need to monitor T max and periodically correct the templates for any observed drifts. The average correlated drift of T max was constantly monitored throughout Run 2, measured with the algorithm of ref. [23]. Its evolution during 2017 is shown in figure 8. The coherent variation can be up to 1 ns. The repeated sharp changes in T max occur when data taking is resumed after a technical stop of the LHC. They are caused by a partial recovery in crystal transparency while the beam is off, followed by a rapid return to the previous value when irradiation resumes. A similar trend was measured in the other years of data-taking during Run 2. The measured time variation is crystal dependent, since the integrated radiation dose depends on the crystal position, and since there are small differences in the effect between crystals at the same η. For this reason the pulse templates are measured in situ multiple times during periods with collision data, and a specific pulse template is used for each channel. The measurement described in section 5.1 is repeated after every LHC technical stop, when a change of the templates is expected because of partial recovery of the crystal transparency, or when the |∆T max | was larger than 250 ps. [23]. For each point, the average of the hits reconstructed in all barrel and endcaps channels is used. The sharp changes in T max correspond to restarts of data taking following LHC technical stops, as discussed in the text. At the beginning of the yearly data taking, the timing is calibrated so that the average T max = 0. Performance with simulations and collision data In this section, the performance of the ECAL local reconstruction with the multifit algorithm is compared with the weights method [5]. Simulated events with a PU typical of Run 2 (a Poisson distribution with a mean of 40) and collision data collected in 2016-2018 are used. The data comparisons are performed for low-energy photons from π 0 → γγ decays, and for high-energy electrons from Z → e + e − decays. Suppression of out-of-time pileup signals The motivation for implementing the multifit reconstruction is to suppress the OOT pileup energy contribution, while reconstructing IT amplitudes as accurately as possible. To show how well the multifit reconstruction performs, the resolution of the estimated IT energy is compared for single crystals, as a function of the average number of PU interactions. 
This study was performed using simple pseudo-experiments, where the pulse shape is generated according to the measured template for a barrel crystal at |η| ≈ 0. The appropriate electronics noise, equal to the average value measured in Run 2, together with its covariance matrix, is included. The effect of the PU is simulated assuming that the number of additional interactions has a Poisson distribution about the mean expected value and that these interactions have an energy distribution corresponding to that expected for minimum bias events at the particular value of η of the crystal. The pseudo-experiments are performed for two fixed single-crystal energies: 2 and 50 GeV. For a single crystal, the amplitude is related directly to the energy only through a constant calibration factor, thus the resolution of the uncalibrated amplitude equals the energy resolution. The resolution of a cluster receives other contributions that may degrade the intrinsic single-crystal energy measurement precision, such as a nonuniform -13 - JINST 15 P10002 response across several crystals, within the calibration uncertainties. These considerations are outside the scope of this paper. The amplitude resolution is estimated as the effective standard deviation σ eff , calculated as half of the smallest symmetrical interval around the peak position containing 68.3% of the events. The PU energy from IT interactions constitutes an irreducible background for both energy reconstruction methods. It is expected that event-by-event fluctuations of this component degrade the energy resolution in both cases as the PU increases. On the other hand, the fluctuations in the energy from all the OOT interactions are suppressed significantly by the multifit algorithm, in contrast to the situation for the weights reconstruction, where they contribute further to the energy resolution deterioration at large average PU. This is shown in figure 9, for the two energies considered in this study. The reconstructed energy is compared with either the true generated energy (corrected for both IT and OOT PU) or the sum of the energy from the IT pileup and the true energy (corrected only for the effect of OOT PU). In the latter case, the amplitude resolution for the multifit reconstruction does not depend on the number of interactions, showing that this algorithm effectively suppresses the contributions of the OOT PU. The offset in resolution in the case of no PU between the two methods, in this ideal case, is due to the improved suppression of the electronic noise resulting from the use of a fixed pedestal rather than the event-by-event estimate used in the weights method. In the data, additional sources of miscalibration may further worsen the energy resolution. Such effects are considered in the full detector simulation used for physics analyses, described below, but are not included in this stand-alone simulation. Simulations performed for an upgraded EB, planned for the high-luminosity phase of the LHC [24], have shown that the multifit algorithm can subtract OOT PU for energies down to the level of the electronic noise, for σ noise > 10 MeV, for PU values up to 200 with 25 ns bunch spacing. This future reconstruction method will benefit from a more frequent sampling of the pulse -14 -shape, at 160 MHz, and from a narrower signal pulse to be achieved with the upgraded front-end electronics [25]. 
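The effective standard deviation σ_eff used for these comparisons (half of the smallest symmetric interval around the peak that contains 68.3% of the events) can be computed with a simple scan, as in the sketch below; the histogram-based peak finding is one possible convention, assumed here for illustration rather than taken from the CMS code.

```python
import numpy as np

def sigma_eff(values, frac=0.683, n_scan=2000):
    """Half-width of the smallest symmetric interval around the peak that
    contains a fraction `frac` of the entries; the peak is estimated from the
    mode of a histogram (a simple stand-in for the paper's peak finding)."""
    values = np.sort(np.asarray(values, dtype=float))
    counts, edges = np.histogram(values, bins=100)
    peak = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
    target = frac * len(values)
    for half in np.linspace(0.0, values[-1] - values[0], n_scan):
        if np.count_nonzero(np.abs(values - peak) <= half) >= target:
            return half
    return values[-1] - values[0]

# Example: a Gaussian core with a low-side tail, similar to an E/Etrue distribution.
rng = np.random.default_rng(3)
ratios = np.concatenate([rng.normal(1.0, 0.02, 9000),
                         1.0 - rng.exponential(0.05, 1000)])
print(f"sigma_eff = {sigma_eff(ratios):.4f}")
```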
Energy reconstruction with simulated data The ability of the multifit algorithm to estimate the OOT amplitudes and, consequently, to estimate the IT amplitude is demonstrated in figure 10 (left). Simulated events are generated with an average of 40 PU interactions, with an energy spectrum per EB crystal as shown in figure 10 (right). The reconstructed energy assigned by the multifit algorithm to each BX from −5 to +4 is compared with the generated value. The IT contribution corresponds to BX = 0. Amplitudes are included with energy larger than 50 MeV, a value corresponding approximatively to one standard deviation of the electronic noise [26]. The mode of the distribution of the ratio between the reconstructed and true energies of OOT PU pulses and true energies, A PU BX /A true BX , with BX in the range [−5, . . . , +4], is equal to unity within ±2.5% for all the BXs. The OOT interactions simulated in these events cover a range from 12 BXs before to 3 BXs after the IT interaction, as is done in the full simulation used in CMS. The distribution of the measured to true energy becomes asymmetric at the boundaries of the pulse readout window (BX = −5, −4, and −3), because the contributions of earlier interactions cannot be resolved with the information provided by the 10 digitized samples. However, this does not introduce a bias in the IT amplitude since the energy contribution from very early BXs below the maximum of the IT pulse is negligible. The remaining offset of ≈0.2% in the median of A PU BX /A true BX for BXs close to zero is due to the requirement that all the A j values are nonnegative, i.e., any spuriously fitted OOT pulse can only subtract part of the in-time amplitude. This offset is absorbed in the absolute energy scale calibration and does not affect the energy resolution. The energy from an electromagnetic shower for a high-momentum electron or photon is deposited in several adjacent ECAL crystals. A clustering algorithm is required to sum together the -15 - JINST 15 P10002 deposits of adjacent channels that are associated with a single electromagnetic shower. Corrections are applied to rectify the cluster partial containment effects. In the present work, we use a simple clustering algorithm that sums the energy in a 5×5 crystal matrix centered on the crystal with the maximum energy deposit. This approach is adequate for comparing the performance of the two reconstruction algorithms, especially in regions with low tracker material (e.g., |η| < 0.8), where the fraction of energy lost by electrons by bremsstrahlung (and subsequent photon conversions) is small. Here, more than 95% of the energy is contained in a 5×5 matrix. To reduce the fraction of events with partial cluster containment caused by early bremsstrahlung and photon conversion, a selection is applied to the electrons and photons. In the simulation, events with photon conversions are rejected using Monte Carlo information, whereas in data a variable that uses only information from the tracker is adopted, as described later. The relative performance of the two reconstruction algorithms is evaluated on a simulated sample of single-photon events generated by G 4 with a uniform distribution in η and a flat transverse momentum p T spectrum extending from 1 to 100 GeV. The photons not undergoing a conversion before the ECAL surface are selected by excluding those that match geometrically electron-positron pair tracks from conversions in the simulation. 
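The simple clustering used for these comparisons can be sketched as a sum over a 5×5 window centered on the most energetic crystal of a toy η-φ grid of single-crystal energies, as below; the grid, deposit values, and edge handling are illustrative only (the real geometry, φ wrap-around, and containment corrections are ignored).

```python
import numpy as np

def cluster_5x5(energy_grid):
    """Sum the energies in a 5x5 matrix centered on the most energetic crystal
    of a 2D array of single-crystal energies (toy eta-phi grid)."""
    ieta, iphi = np.unravel_index(np.argmax(energy_grid), energy_grid.shape)
    lo_e, hi_e = max(ieta - 2, 0), min(ieta + 3, energy_grid.shape[0])
    lo_p, hi_p = max(iphi - 2, 0), min(iphi + 3, energy_grid.shape[1])
    return energy_grid[lo_e:hi_e, lo_p:hi_p].sum(), (ieta, iphi)

# Toy event: a photon shower spread over a few crystals plus small pileup deposits.
rng = np.random.default_rng(5)
grid = rng.exponential(0.02, size=(20, 20))          # GeV, pileup-like noise floor
shower = np.array([[0.2, 1.0, 0.3], [1.5, 20.0, 1.8], [0.3, 1.2, 0.4]])
grid[9:12, 9:12] += shower                            # deposit centered at (10, 10)

e5x5, seed = cluster_5x5(grid)
print(f"seed crystal {seed}, E5x5 = {e5x5:.2f} GeV")
```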
For the retained photons, the energy is mostly contained in a 5×5 matrix of crystals, and no additional corrections are applied. The ratio between the reconstructed energy in the 5×5 crystal matrix and the generated photon energy, E 5×5 /E true , for nonconverted photons with a uniform distribution in the range 1 < p true T < 100 GeV is histogramed. For both reconstruction algorithms, the distributions show a non-Gaussian tail towards lower values, caused by the energy leakage out of the 5×5 crystal matrix, which is not corrected for. To account for this, σ eff , as defined in section 7.1, is used to quantify the energy resolution. The average energy scale of the reconstructed clusters is shifted downwards for the multifit method, whereas it is approximately unity for the weights reconstruction. As stated earlier, this is because the amplitudes for the OOT pulses (A j with j 5) are constrained to be positive. In the reconstruction of photons used by CMS such a shift is corrected for, a posteriori, by a dedicated multivariate regression, which simultaneously corrects the residual dependence of the energy scale on the cluster containment and IT pileup. This correction is applied in the HLT and, with a more refined algorithm, in the offline event reconstruction. This type of cluster containment correction was developed in Run 1 [26,27] and has been used subsequently. In this approach, the shift of the E 5×5 /E true distribution is corrected by rescaling the resolution estimator, σ eff , by m, estimated as the mean of a Gaussian function fitting the bulk of the distribution, and expressed in percent. The variation of σ eff as a function of the true p T of the photon, is shown in figure 11. The improvement in the precision of the energy measurement is significant for the full range of p T considered. Expressed as a quadratic contribution to the total, it varies from 10 (15)% in the barrel (endcaps) for photons with p T < 5 GeV, to 0.5 (1.0)% at p T = 100 GeV. The improvement is larger at low p T , since the relative contribution of the energy deposits from PU interactions, which have the characteristic momentum spectrum shown in figure 10 (right), is relatively larger. This is particularly relevant for suppressing the PU contribution to low-p T particles that enter the reconstruction of jets and missing transverse momentum with the particle-flow algorithm used in CMS [28], thus preserving the resolution achieved during Run 1 [29-31]. The improvement grows with |η| both within the EB and within the EE, because of the increasing probability of overlapping pulses from PU. The improvement is larger in the barrel, even though the PU contribution is smaller than in the endcaps, because the lower electronic noise allows a more stringent constraint of the amplitudes in the multifit. For photons, the improvement extends above p T ≈ 50 GeV, because of the higher number of digitized samples of the pulse shape used, and the suppression of the residual OOT PU contribution. The energy resolution becomes constant at very high energies, above a few hundred GeV, where it is dominated by sources other than the relatively tiny contribution of OOT pileup energy, such as nonuniformities in the energy response of different crystals belonging to the same cluster. The improvement in energy resolution is also expected to be valid for electrons with p T > 20 (10) GeV in the barrel (endcaps), since the electron momentum resolution is dominated by the ECAL cluster measurement above these p T values [27]. 
Effect on low energy deposits using π 0 → γγ The improvement in the energy resolution for low-energy clusters is quantified in data using π 0 mesons decaying into two photons. The p T spectrum of the photons, selected by a dedicated calibration trigger [17], falls very fast and most of the photons have a p T in the range of 1-2 GeV. The photon energy in this case is reconstructed summing the energy of the crystals in a 3×3 matrix. Figure 12 shows the diphoton invariant masses when both clusters are in the EB (left) and when both are in EE (right). The invariant mass distributions obtained with the weights and the multifit methods are compared, using a subset of the π 0 calibration data collected during 2018. The position of the peak, M, is affected by OOT PU differently in the multifit method and in the weights algorithm. Since the π 0 → γγ process is only used to calibrate the relative response of a crystal with respect to others, the absolute energy scale is not important here. The energy scale is determined separately by comparing the position of the Z → e + e − mass peak in data and simulation. On the other hand, the improvement in mass resolution, σ/M, is significant, 4.5% (8.8%) in quadrature in the barrel (endcaps). At the end of 2017, the LHC operated for a period of about 1 month with a filling scheme with trains of 8 bunches alternated with 4 empty BXs. The resilience of the multifit method to OOT pileup had a particularly positive effect in this period, since the bunch-to-bunch variations in OOT PU are larger than with the standard LHC filling schemes used in Run 2. All the bunches of a given train provide approximately the same luminosity, about 5.5×10 27 cm −2 s −1 , so the average number of PU interactions is the typical one of Run 2 (about 34, with peaks up to 80). Data from this period is used to assess the sensitivity of the algorithms to OOT interactions by estimating the invariant mass peak position of the π 0 mesons as a function of BX within each LHC bunch train. The measured invariant mass, normalized to that measured in the first BX of the train, is shown in figure 13 (left). The peak position, estimated with the weights algorithm, increases for BXs towards the middle of the bunch train, where the contribution from OOT collisions is larger, and then decreases again towards the end of the train. In contrast, for the multifit reconstruction, the peak position remains stable within ±0.4% with respect to the value observed in the first BX of the train. The overall resolution in the diphoton invariant mass improves significantly using the multifit algorithm, and, within the precision of the measurement, is insensitive to the variations of OOT PU for different BX within the train. This is shown in figure 13 (right). Effect on high energy deposits using Z → e + e − The performance of the two algorithms for high-energy electromagnetic deposits is estimated using electrons from Z → e + e − decays. Electrons with p T > 25 GeV are identified with tight electron -18 - identification criteria, using a discriminant based on a multivariate approach [27]. To decouple the effects of cluster containment corrections from the single-crystal resolution, 5×5 crystal matrices are used to form clusters. The sample is enriched in low-bremsstrahlung electrons by selecting with an observable using only tracker information, f brem , which represents the fraction of momentum, estimated from the track, lost before reaching the ECAL. 
It is defined as f brem = (p in − p out )/p in , where p in and p out are the momenta of the track extrapolated to the point of closest approach to the beam spot and estimated from the track at the last sensitive layer of the tracker, respectively. The variable f brem is required to be smaller than 20%. In the range 0.8 < |η| < 2.5 [27], the resolution is dominated by the incomplete containment of the 5×5 crystal matrix caused by the larger amount of tracker material in this region. Therefore, detailed performance comparisons are restricted to events with electromagnetic showers occurring in the central region of the EB. Figure 14 shows the invariant mass of 5×5 cluster pairs, for a portion of the 2016 data, selecting pairs of electrons, e 1 and e 2 , that lie within a representative central region of the barrel (0.200 < max(|η 1 |, |η 2 |) < 0.435). The outcome is similar in other regions with low tracker material. The shift in the absolute energy scale for the simplified 5×5 clustering, caused by the multifit A j being nonnegative for each BX, is not corrected for. The improvement is still significant for the p T range characteristic of Z → e + e − decays, matching the expectation from the simulation, shown in figure 11, namely an improvement in resolution of ≈1% in quadrature, after unfolding the natural width of the Z boson, for electrons and photons with 30 < p T < 100 GeV. A full comparison of the performance of the multifit algorithm in Run 2 with that of the weights algorithm in Run 1 would require a reanalysis of the Run 1 data, applying the more sophisticated clustering techniques used in Run 2. Nevertheless, it is instructive to make a straightforward comparison. For Run 1, where the crystal energy was reconstructed with the default weights method, the electron energy was estimated with the simple 5×5 crystal cluster, and using the optimal calibrations of the 2012 data set ( √ s = 8 TeV and 50 ns LHC bunch spacing) [27]. The effective resolution of the dielectron invariant mass distribution, normalized to its peak, is σ eff /m = 4.59%. This is consistent with the value of 4.56% obtained in Run 2 with the multifit algorithm, shown in figure 14. This indicates that the multifit method can maintain the ECAL performance obtained during Run 1, in the p T range ≈(5-100) GeV, relevant for most data analyses performed with CMS, despite the substantially larger PU present in Run 2. Effect on jets The contribution to the average offset of the jet energy scale, from the reconstructed electromagnetic component of each additional PU interaction, was estimated in a simulated sample of pure noise in the CMS detector by considering the energy contained in cones randomly chosen within the detector acceptance. This shows that the contribution to the offset from ECAL signals is reduced to a value of less than 10%, similar to that obtained in Run 1. Further details are given in ref. [30]. Reconstruction of cluster shape variables The relative contribution of the PU energy within a cluster for electrons from Z boson decays is less than for clusters from π 0 meson decays, and the sample of events is smaller. For these reasons, it is difficult to estimate the variation of the energy scale within one LHC fill arising from this contribution. The effect on the cluster shapes is still significant, since they are computed using -20 -all the hits in a cluster, including the low-energy ones. 
One example is provided by the evolution, within an LHC fill, of the variable R 9 , defined as the ratio of the energy in a 3×3 crystal matrix centered on the seed hit of the cluster, divided by the total energy of the cluster. This variable is an important measure of cluster shape, since it is often used to distinguish between showering or converted photons, and those not undergoing a bremsstrahlung process or conversion within the tracker. For example, in studies of Higgs boson physics, it is used to separate H → γγ events into categories with different m γγ effective mass resolutions. Thus it is important that the R 9 variable remains stable over time. Figure 15 shows the median of the R 9 distribution for clusters from electron pairs in the barrel having a mass consistent with that of the Z boson, during an LHC fill in 2016 with an average PU decreasing from a value of 42 at the beginning of the fill to a value of 13 at the end. The stability of the cluster shape as a function of instantaneous luminosity, obtained with the multifit algorithm, is clearly better than the one obtained with the weights reconstruction. The main reason the median R 9 values drift up during a fill is that the denominator of the R 9 ratio, which includes contributions from low-energy hits located outside of the 3×3 matrix, decreases in the weights algorithm when the instantaneous luminosity (and the PU) decreases. Another effect that has been checked in data is the rejection power for anomalous signals ascribed to direct energy deposition in the APDs [18] by traversing particles. Unlike the hits in an electromagnetic shower, the anomalous signals generally occur in single channels of the calorimeter. They are rejected by a combination of a topological selection and a requirement on the hit timing. The topological selection rejects hits for which the value of the quantity (1 − E 4 /E 1 ) is close to 1, where E 1 is the energy of the crystal and E 4 is the energy sum of the four nearest neighboring -21 -2020 JINST 15 P10002 crystals. A simulation of anomalous signals in the APDs is used, and the efficiency is defined as the fraction of the reconstructed hits in crystals with anomalous signals identified as such by the offline reconstruction. The rejection efficiency obtained when using the multifit reconstruction is improved by as much as 15% compared to the weights method for hits with E < 15 GeV. The probability of rejecting hits from genuine energy deposits has been checked on data with hits within clusters of Z → e + e − and is lower than 10 −3 over the entire p T spectrum of electrons from Z boson decays for both methods. Summary A multifit algorithm that uses a template fitting technique to reconstruct the energy of single hits in the CMS electromagnetic calorimeter has been presented. This algorithm was implemented before the start of the Run 2 data taking period of the LHC, replacing the weights method used in Run 1. The change was motivated by the reduction of the LHC bunch spacing from 50 to 25 ns, and by the higher instantaneous luminosity of Run 2, which led to a substantial increase in both the in-time and out-of-time pileup. Procedures have been developed to provide regular updates of input parameters to ensure the stability of energy reconstruction over time. Studies based on π 0 → γγ and Z → e + e − control samples in data show that the energy resolution for deposits ranging from a few to several tens of GeV is improved. 
The gain is more significant for lower energy electromagnetic deposits, for which the relative contribution of pileup is larger. This enhances the reconstruction of jets and missing transverse momentum with the particle-flow algorithm used in CMS. These results have been reproduced with simulation studies, which show that an improvement relative to the weights method is obtained at all energies, including those relevant for photons from Higgs boson decays. Simulation studies show that the new algorithm will perform successfully at the high-luminosity LHC, where a peak pileup of about 200 interactions per bunch crossing, with 25 ns bunch spacing, is expected.
Acknowledgments
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies.
[28] CMS collaboration, Particle-flow reconstruction and global event description with the CMS detector, 2017 JINST 12 P10003 [arXiv:1706.04965].
2020-10-30T08:06:03.281Z
2020-07-15T00:00:00.000
{ "year": 2020, "sha1": "3042653413ee08398e5f378da60696ce8ba68d6f", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1748-0221/15/10/P10002/pdf", "oa_status": "HYBRID", "pdf_src": "IOP", "pdf_hash": "2b5a558044424489f8e08f932ebf516ccfe544d0", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
32211766
pes2o/s2orc
v3-fos-license
Kinetic Parameters of Thermal Decomposition Process Analyzed using a Mathematical Model
The purpose of this study was to show a mathematical analysis model for understanding the kinetic parameters of a thermal decomposition process. The mathematical model was derived based on the phenomena that occur during the thermal-related reaction. To obtain the kinetic parameters (i.e. reaction order, activation energy, and Arrhenius constant), the model was combined with the thermal characteristics of the material gained from the thermal gravity (TG) and differential thermal analysis (DTA) curves. As an example, the model was used for analyzing the kinetic properties of trinitrotoluene. Interestingly, the results gained from the present model were identical to those in the current literature; this is because the present model was derived directly from the stoichiometric and thermal analysis of the ideal chemical reaction. Since the present model was confirmed to be in good agreement with current theories, further derivations from the present mathematical model can be useful for further development.
Introduction
Analysis of kinetic parameters has been reported as one of the important factors for understanding the type of reaction. [1,2] These parameters are required for optimizing the process conditions to get the best product. [3-5] Many studies have reported how to identify the kinetic parameters, as shown by Kissinger [6], Huang et al. [7], Huang and Wu [8], and Lou [9]. Although their models have been cited by many reports, their methods still have limitations, especially for recognizing in detail the values of the reaction order, the activation energy, and the Arrhenius constant. Based on our previous studies on material properties [10-13], the purpose of this study was to show a mathematical analysis model for understanding the kinetic parameters based on thermal gravity (TG) and differential thermal analysis (DTA) curves. To confirm that the model is effective, the results gained from the present mathematical approach were compared with the current literature. As an example, the kinetic parameters of the thermal decomposition of trinitrotoluene were analyzed. Trinitrotoluene is well known as a basic material that is used in a wide range of applications, especially for mining uses. [14] Since the present model is in good agreement with the current theories and literature on the thermal decomposition process, further studies gained from this model can be useful for further development.
Experimental method
The mathematical model was derived based on the specific conditions of the thermal decomposition process. The model was then applied for analyzing the thermal-related reaction parameters of trinitrotoluene and compared with the literature, such as Kissinger [6], Huang et al. [7], Huang and Wu [8], and Lou [9]. In short, the calculation was obtained by adopting the thermal characteristics (i.e. the ending temperature (Tend), the inflection temperature (Ti1), and the maximum temperature (Tm)) and the heating condition (i.e. flow rate).
Derivation model for kinetic parameters gained from TG-DTA curves
The expression of the kinetic reaction induced by thermal decomposition is described as

dx/dt = k(T) f(x), (1)

where x, t, k(T), and f(x) are the fraction of reactive material, the reaction time, the reaction rate constant, and the type of reaction model, respectively. The rate constant k(T) was assumed to follow the Arrhenius expression, k(T) = A exp(-E/(R T)), where A and E are the Arrhenius constant and the activation energy, respectively, and f(x) was assumed to follow an nth-order reaction model. R is the universal gas constant (8.314 J/(mol K)).
T is the process temperature (in K); it depends on the heating rate (in K/s) and is approximated as a linear function of time from the initial temperature To, i.e., T equals To plus the heating rate multiplied by the elapsed time t. In this model, we also assumed that the temperature deviation, ΔT, is proportional to the decomposition rate of the material, with β as the proportionality constant. To solve equation (1), three boundary conditions were used: (1) Boundary 1: at the end of the process (t = tend, T = Tend), the temperature deviation ΔT returns to zero. (2) Boundary 2: when the maximum temperature is reached (t = tm, T = Tm), the first derivative of ΔT with respect to time is zero. (3) Boundary 3: when the process reaches the inflection time (t = ti), the second derivative of ΔT with respect to time is zero. Finally, by integrating equation (1) with these boundary conditions, the reaction order n is expressed as a function of the characteristic temperatures (equation (2)). Further, by substituting the value of n, E and A can be obtained from equations (3) and (4), respectively. Detailed information on the derivation of the above mathematical equations is reported in our previous report [15].
Simplification of theoretical model from TG and DTA curves
The reaction order n is a function of the measurable characteristic process temperatures, i.e., Tend, Ti1, and Tm. However, the above equations are inconvenient, since solving them needs a trial-and-error approach. To simplify the approximation, Figure 1 shows the curve of n versus the characteristic temperatures based on equation (2). The regression result in equation (5) can be used for approximating the value of n; however, the correlation is effective only for n of less than 1.
Table 1 shows the kinetic parameters of trinitrotoluene obtained from the above analysis, compared with previous studies. Since the present model is effective only under this specific condition, the calculation was limited to materials with n of less than 1, and the values of E and A were calculated based on equations (3) and (4), respectively. The results show that the present model successfully predicted the kinetic parameters in detail, including n, E, and A, while the other reports [6-9] have some limitations. For instance, one method can predict the value of n, while another cannot estimate E or A. In addition, the present approximation of the n, E, and A values is better than our previous study [15], in that more detailed values can be obtained. Note: the present model used R = 8.314 J/(mol K) and 1 J = 0.000239006 kcal.
Conclusion
The present study has successfully derived a mathematical analysis model for understanding the kinetic parameters based on TG and DTA curves. The accuracy of the present model was confirmed by the agreement of its results with the current literature. The analysis with the present model was also carried out to calculate the kinetic parameters of trinitrotoluene. Since the mathematical approximation confirmed that the TG and DTA analysis can be used for analyzing the kinetic parameters (i.e. reaction order, activation energy, and Arrhenius constant), further derivations from the present mathematical model can be useful for further development.
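To illustrate the kinetic model of equation (1), the short script below numerically integrates dx/dt = A exp(-E/(R T)) (1 - x)^n along a linear heating ramp and reports the conversion at a few temperatures. The nth-order form of f(x), the kinetic constants, and the heating rate are assumptions chosen only for illustration; the script does not reproduce equations (2)-(5).

```python
import numpy as np

# Assumed kinetic constants (illustrative only, not the trinitrotoluene values).
A = 1.0e12          # Arrhenius constant, 1/s
E = 150.0e3         # activation energy, J/mol
n = 0.8             # reaction order (the model targets n < 1)
R = 8.314           # universal gas constant, J/(mol K)

T0 = 300.0          # initial temperature, K
rate = 10.0 / 60.0  # heating rate, K/s (10 K/min)
dt = 0.05           # integration step, s

# Explicit Euler integration of dx/dt = A exp(-E/(R T)) (1 - x)^n with T = T0 + rate*t.
x, t = 0.0, 0.0
report_at = [500.0, 550.0, 600.0, 650.0]
t_end = (700.0 - T0) / rate          # heat up to 700 K
while t < t_end:
    T = T0 + rate * t
    x = min(1.0, x + dt * A * np.exp(-E / (R * T)) * (1.0 - x) ** n)
    t += dt
    for Tq in list(report_at):
        if T >= Tq:
            print(f"T = {Tq:.0f} K: conversion x = {x:.3f}")
            report_at.remove(Tq)
```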
Analysis of changes in sodium and chloride ion transport in the skin The measurement of electric potential and resistance reflect the transport of sodium and chloride ions which take place in keratinocytes and is associated with skin response to stimuli arising from external and internal environment. The aim of the study was to assess changes in electrical resistance and the transport of chloride and sodium ions, under iso-osmotic conditions and following the use of inhibitors affecting these ions’ transport, namely amiloride (A) and bumetanide (B). The experiment was performed on 104 fragments of rabbit skin, divided into three groups: control (n = 35), A—inhibited sodium transport (n = 33) and B—inhibited chloride transport (n = 36). Measurement of electrical resistance (R) and electrical potential (PD) confirmed tissue viability during the experiment, no statistically significant differences in relation to control conditions were noted. The minimal and maximal PD measured during stimulation confirmed the repeatability of the recorded reactions to the mechanical and mechanical–chemical stimulus for all examined groups. Measurement of PD during stimulation showed differences in the transport of sodium and chloride ions in each of the analyzed groups relative to the control. The statistical analysis of the PD measured in stationary conditions and during mechanical and/or mechanical–chemical stimulation proved that changes in sodium and chloride ion transport constitute the physiological response of keratinocytes to changes in environmental conditions for all applied experimental conditions. Assessment of transdermal ion transport changes may be a useful tool for assessing the skin condition with tendency to pain hyperactivity and hypersensitivity to xenobiotics. www.nature.com/scientificreports/ an ENaC results in the efflux of water from the cell to balance osmolality. Substances altering the functioning of ENaCs lead to changes in the hydration of keratinocytes and the surrounding environment [3][4][5] . Secretion of chloride ions by the keratinocyte membrane occurs through CFTR (cystic fibrosis transmembrane regulator) channels 7 and other chloride channels, e.g., CLCA (chloride channel accessory) 8 . Moreover, the presence of CFTR channels has been demonstrated on cells lining sweat ducts 9 . A change in the functioning of chloride channels on keratinocytes and/or sweat duct cells may cause water flow and dehydration or overhydration of cells constituting the appropriate skin layer or the surrounding environment 6,9,10 . Moreover, CFTR channels act as cell regulators that affect, e.g., the functioning of ENaCs [7][8][9] . Changes in the functioning of sodium and/or chloride channels can underlie problems with regeneration and healing 10 , onset of hypersensitivity and/or allergies 3,5 , atopic dermatitis 8 and hypersensitivity to pain 11,12 . Few studies on this subject have been published to date 3,4,[7][8][9][10] . The aim of the study was to assess changes in electrical resistance and the transport of chloride and sodium ions, measured as transepithelial electrical potential in fragments of the skin of experimental animals, under iso-osmotic conditions and following the use of inhibitors of these ions' transport. Results Transmembrane PD measured in stationary conditions for tissues incubated in RH was − 0.25 mV, while the median measurements for solutions A and B were − 0.31 and − 0.23 mV, respectively. 
No statistically significant differences were demonstrated between the measured parameters ( Table 1, see Supplementary Figs. 1S and 2S online). This result indicates that incubation of the tissues in conditions inhibiting the transport of one of the ion types did not cause a significant change in the generation of electric field by the tissues. Mechanical (RH) and mechanical-chemical (A, B) stimulation resulted in repeated changes in ion transport measured as PDmin and PDmax during the 15-s stimulation (Fig. 1). As expected in the case of electrophysiological studies of living skin fragments, different patterns of response to the stimuli were observed. However, for each stimulation (RH, A, B), the statistical analysis indicated the hyperpolarization reaction as significantly predominant (Table 1). Depolarization was found in 24% of the response to stimulation with RH, 20% of the response to A and 10% of the response to B. Incubation in RH induced a minimum potential of − 0.15 mV. Specimens stimulated with solution A had a PDmin of − 0.45 mV, while those stimulated with solution B had a PDmin of − 1.16 mV. PDmax was 0.15 mV for the control specimens and 0 mV for those incubated with either inhibitor. All measured PDmax and PDmin values were different in a statistically significant manner from the potential values in stationary conditions for each investigated group ( www.nature.com/scientificreports/ measured in the solutions inhibiting ion transport were statistically different from each other (p = 0.007), but did not demonstrate differences in relation to the control ( Table 1). The described electrophysiological parameters were measured twice. There were no statistically significant differences between the PD and R values recorded in the same conditions. Discussion The Ussing chamber was originally used to analyze epithelium-lined organs, such as the airways 13,14 or the digestive tract 15 , and to elucidate the pathomechanisms of diseases associated with disrupted function of ion transporters and/or channels 7,[13][14][15] . The modification of the Ussing chamber involved positioning of the analyzed specimen horizontally and its mechanical stimulation with a nozzle located 10 mm from the surface of the analyzed tissue 14,15 . The proposed experimental model for the assessment of electrophysiological parameters in the modified Ussing chamber is based on an analysis of full-thickness skin fragments with preserved layered structure and nerve endings 1,2 . It also allows assessing the functioning of transporters and channels maintaining a constant flow of ions in conditions both similar to physiological or altered 1,2,16 . Tissue samples taken from experimental animals have preserved nerve endings, layered structure and the ability of cells to respond to incubation with the selected substances 2 . Current studies put emphasis on the assessment of changes in the microenvironment of cells, as well as those occurring intracellularly 8,[10][11][12]17,18 . Any disruption of homeostasis may result in a response of the body, including changes in the composition of the extracellular matrix and in the quantity or differentiation of cells 4,11,17 . The proposed model reflects changes in the transport of ions in cells arranged in layers, as well as in intercellular spaces. Electrical resistance was calculated based on the change in potential after passing electric current of a constant intensity through the tissue 2 . 
Therefore, changes in resistance are due to changes in ion transport, including the functioning of channels and transporters, as well as due to the structure and compactness of the analyzed tissues, including the degree of cell adhesion, tissue damage and deformation 1,2,16 . The values of resistance obtained for tissues incubated in an iso-osmotic environment after adding inhibitors of sodium and chloride ion transport indicate that full vitality and integrity along with tight junctions between cells were preserved 1,16 . Obtained resistance values proved that all specimens were alive during experiment and tested substances did not affect their vitality and tightness. Resistance values obtained for tissue fragments treated with sodium and chloride ion transport inhibitors were different from each other in a statistically significant manner, which may indicate a change in the permeability of cells to each ion type (see Supplementary Fig. 1S online). However, they were not different from those obtained for the control specimens, as the use of iso-osmotic Ringer's solution and ion transport inhibitors did not cause changes in cell-to-cell and cell-to-matrix adhesion and/or in the permeability of the tested specimens to ions 2,9 . Similar research was described by Barker et al. 19 , who studied electrophysiological parameters of glabrous and gland-free skin parts of guinea pig. Interestingly, they found a resistance values significantly higher than the ones reported in the present study. It should be noted that the authors used different www.nature.com/scientificreports/ measurement techniques. First of all, they made measurements in skin incisions of live animals, both in hairless and hairy skin parts. In the case of hairy skin, they obtained significantly lower resistance values compared to the hairless parts, which results are similar to those detected in the present study. In our experiment, a model of hairy, undamaged skin was used and, what is even more important, the electrophysiological parameters were measured through the entire skin fragments, not across the epidermis, as in the Barker et al. study 19 . The lack of differences in potential measured in stationary conditions constitutes evidence of the preserved functioning of ion pumps, channels and co-transporters in all studied specimen groups (Table 1, see Supplementary Fig. 1S online). Transmembrane potential was stable and depended on the flow of sodium and chloride ions in tissues incubated in RH, increased efflux of chloride ions in tissues treated with amiloride solution and increased influx of sodium ions in tissues treated with bumetanide solution. Stable PD proved that no morfological changes occured on specimen surface and the activity of the cells were preserved. It can be presumed that the tissues managed to adapt to the changed environmental conditions, similarly as has been observed for the airways 13,14 and the digestive tract 15 . The electric potential on the surface of hairless mice skin in organ culture was the subject of the study by Denda et al. 6 . In their experiment, the potential values were lower (approximately − 3 mV) than those measured in the present study (− 0.31 to − 0.23 mV). Also in the study by Barker et al. 19 , the values of the potential measured in the epidermis of the guinea pig's abdominal skin were significantly lower, amounting to about − 6 mV. The differences between the methods used in the experiments may explain such substantial discrepancies in potential values measured. 
In the Barker et al. 19 experiment, live animals with skin incisions were examined and transepithelial potentials were measured using appropriate electrodes in the Ringer solution at pH 5.8. In Denda et al. 6 study, skin samples were placed in Dulbecco's modified Eagle medium (DMEM), used for cell culture, and the experiment was conducted at 37 °C. Thus, in both studies the cellular metabolism rate in the tested skin parts was high. It was proven in the Denda et al. 6 study that the value of the potential is greatly influenced by the intact production of ATP. Due to the disruption of mitochondrial function, the transepithelial potential in their study increased to about − 0.8 mV. In the present experiment, the examined skin samples were well-preserved and alive, but their metabolic rate was reduced due to the lack of an external source of energy substrates and the use of a temperature lower than optimal for metabolic processes. Thus, a limited amount of ATP was avalaible for ion transport. Additionally, the rabbit skin, used in our experiment, has a different thickness, as well as nervous, hormonal and immunological regulation compared to mouse or guinea pig skin 1 , which could also affect the results. In the present study the use of a 15-s stimulus caused repeatable changes in the transport of ions measured as minimum and maximum potential (Fig. 1, Table 1, see Supplementary Fig. 2S online). The applied mechanical and mechanical-chemical stimulations caused different directions of the electrophysiological response in the examined skin fragments, which could be explained by diverse reactions in the transport of sodium, chloride and potassium ions. The most frequently observed reaction pattern was hyperpolarization, which could be associated with the predominance of the sodium ion absorption from the tissue surface and/or chloride ion secretion. However, a depolarization reaction was observed in some skin samples, most likely due to the inhibition of sodium ion absorption or chloride ion secretion and the initiation of potassium ion uptake from the cell surface. Such differences in responses may result from specific characteristics of the tested tissue samples, such as the level of epithelial hydration, the presence of minimal scarring and/or local dermal inflammation, as well as the chemical composition of skin extracellular matrix, including the presence of specific proteins, lipid barrier components or inflammatory cytokines 3,4,10 . However, despite the diversity of the observed responses, hyperpolarization was estimated as a statistically significant direction of changes in the transepithelial potential for each type of stimulation. Both PDmin and PDmax measured with inhibited transport of chloride and sodium ions were different in a statistically significant manner from the control. Hence, the flow of fluid resulted in an increased efflux of chloride ions with inhibited transport of sodium ions and an increased influx of sodium ions with inhibited transport of chloride ions. However, these responses were more intense from those observed for mechanical stimulation only. The transport of sodium ions during incubation and stimulation with bumetanide was intensified. This phenomenon can be explained by an increased sensitivity of cells to factors modulating the transport of sodium ions compared to chloride ions 9 . 
ENaCs are present in large numbers on keratinocytes and take part in the regulation of cell hydration 3 , efflux of small-molecule substances 11 and proinflammatory factors 3 , as well as initiation of cell differentiation and migration 5 . It has been shown that even minimal changes in the transport of sodium ions in the skin may be associated with the onset of inflammation and migration of immunocompetent cells 3 , onset of hypersensitivity reaction and/or allergies 17 , hypersensitivity to pain 11,12 , slowed regeneration process 5 and exacerbations of skin lesions in many diseases 17 . The presence of substances altering sodium transport may be of importance in changing the approach to treating difficult-to-heal wounds and ulcerations and in an increased incidence of hypersensitivity reactions and allergies, in particular to drugs 17 . In the experiment, the use of amiloride, an inhibitor of sodium transport, caused an increased efflux of chloride ions, and the response to the flushing of the external surface of the skin was the most intense. Median PDmin and PDmax were significantly higher than those measured during RH stimulation (Table 1, Mann-Whitney test, see Supplementary Fig. 2S online). Preserved physiological efflux of chloride ions is important for the proper flow of water and dissolved substances between the layers of the skin 10 , as well as for the processes of perspiration, cooling of the skin surface and excretion of metabolites or xenobiotics 7,8 . Intensification of chloride transport can also contribute to facilitated cell migration and, consequently, can initiate the processes of healing and regeneration 10 . However, alteration of the functioning of the CFTR channel may contribute to the observed changes in drug metabolism, as well as occurrence of adverse reactions or hypersensitivity to drugs or light, which is of particular importance for the treatment of patients with cystic fibrosis and difficult-to-heal wounds 7,18 . The effect of amiloride on the skin electric potential was investigated by Barker et al. 19 , who observed significant decreases in the potential across the glabrous and gland-free guinea pig epidermis when exposed to a Scientific Reports | (2020) 10:18094 | https://doi.org/10.1038/s41598-020-75275-3 www.nature.com/scientificreports/ 5 mM amiloride solution for 2 min. It should be noted that the skin studied in their experiment was sweatless. It could therefore be assumed that, contrary to our study, the modifications in the chloride ion transport were not involved in the skin response to the amiloride exposure. This is why the results obtained by Baker et al. 19 were analogous to the electrophysiological properties of amphibian skin. Denda et al. 6 , in their experiment on electrophysiological parameters of hairless mice skin, also investigated the effect of sodium transport inhibition on the skin electric potential. Tetrodotoxin, an inhibitor of voltage-gated sodium channels (NaV channels) characteristic of excitable cells 20 , was used in their study. ENaC channels, which are amiloride-sensitive, are not inhibited by tetrodotoxin. The significant decreases in the transcutaneous potential after 1 h of incubation in a 50 µM tetrodotoxin solution resulted from the inhibition of NaV channels, which were not examined in the present study. It is worth adding that Denga et al. 6 used a much longer incubation time in the inhibitor solution. What is also important, the skin of a rabbit and a rodent has different characteristics 21 . 
All of these factors may contribute to the discrepancies between the results of both studies. Conclusions The study demonstrated that the use of inhibitors of sodium and chloride ion transport causes changes in electrical resistance and transmembrane potential which can be measured using a modified Ussing chamber. It has been shown that measuring the electrophysiological parameters of the skin in mammals can be a valuable tool to assess homeostasis in the skin tissue. Use of simple inhibitors of sodium or chloride ion transport may be the basis for establishing which ion transport system has been altered by the analyzed factor. It seems that determination, which of the medicines, xenobiotics or toxins used disturb this delicate mechanisms and how it happens, may be the background for drawing conclusions regarding the observed symptoms, proposed treatment or therapeutic approach in many diseases occurring with skin disruption. Materials and methods Study design. Isolated skin specimens (n = 104) were derived from five adult, albino, New Zealand rabbits, 2-3 months old, of both sexes, with a body weight of 3.5-4.0 kg. The animals were housed in disposable cages and allocated two rabbits per cage, in the 12/12 light/dark cycle, with water and food available ad libitum. The rabbits were killed by asphyxiation using CO 2 (approx. 60% in the inhaled air). After animal death, skin samples from the abdomen were taken, with hair shaved mechanically. Subsequently, the skin was severed, and the membranous part, muscle, fat and vessels were discarded. The skin specimens were collected from dead animals. The presented experiments did not include living animals and according to the European Union law did not require bioethical committee agreement. The animals were kept and killed in accordance with the guidelines and regula- Subsequently, the specimens were horizontally mounted in a modified Ussing chamber. The modification allowed mechanical stimulation of the stratum corneum of the skin with fluid using a peristaltic pump with a flow of 0.06 ml/s (1 ml/15 s). The stimulation nozzle was placed at a distance of 4-6 mm from the tested tissue surface. Below the level of the stimulation nozzle, on the other side of the chamber, there were vent holes allowing the excess liquid to flow freely, which eliminated the pressure difference. The fluid administration in a manner imitating drops falling on the skin surface, was considered a mechanical stimulation of the tested sample. Constant current electrodes and the measuring electrode were placed at a distance of 10 mm from the tested skin surface. Scheme of the measuring system is avalaible in Supplementary Information (see Supplementary Fig. 3S online). After the electrophysiological parameters were stabilized for all fragments, series of mechanical (RH) and mechanical-chemical (A, B) stimulation were applied (Fig. 2). The experiments consisted of measuring twice the following parameters: -transepithelial potential difference-changes in transepithelial electrical potential in stationary conditions (PD, mV), -minimum and maximum transepithelial electrical potential difference during 15-s stimulation (PDmin, PDmax, mV), -transepithelial electrical resistance measured in stationary conditions (R, Ω*cm 2 ). PD was recorded continuously, while R was determined by stimulating the tissue with a current intensity of ± 10 µA. Subsequently, the corresponding voltage change was measured, and resistance was counted according to Ohm's law. Chemicals and solutions. 
The following chemicals and solutions were used in the experiment: Ethical approval. No experiments involving human participants were performed in the study. The present experiment did not include living animals and according to the Polish and European Union law, the bioethical committee agreement was not required. Animal care was in accordance with the guidelines and regulations as stipulated by the Polish Animal Protection Act and the European Directive on the Protection of Animals Used for Scientific Purposes (2010/63/EU). All applicable institutional and national guidelines for the care and use of animals were followed.
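The transepithelial resistance described in the Materials and Methods follows directly from Ohm's law applied to the voltage deflection caused by the ±10 µA current pulse. The sketch below shows the arithmetic; the exposed tissue area is an assumed value, since the chamber aperture is not stated here.

```python
def transepithelial_resistance(delta_v_mV, delta_i_uA, area_cm2):
    """Resistance in ohm*cm^2 from the voltage deflection caused by a current pulse.

    delta_v_mV : change in transepithelial potential (mV) during the current step
    delta_i_uA : amplitude of the injected current step (uA)
    area_cm2   : exposed tissue area in the Ussing chamber (cm^2) -- assumed value
    """
    r_ohm = (delta_v_mV / 1000.0) / (delta_i_uA / 1e6)  # Ohm's law in SI units
    return r_ohm * area_cm2                              # normalise per unit area

# Example: a 20 mV deflection for a 20 uA step over an assumed 1 cm^2 aperture
print(transepithelial_resistance(delta_v_mV=20.0, delta_i_uA=20.0, area_cm2=1.0))  # -> 1000 ohm*cm^2
```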
Distributed Energy Storage Control for Dynamic Load Impact Mitigation The future uptake of Electric Vehicles (EV) in low-voltage distribution networks can cause increased voltage violations and thermal overloading of network assets, especially in networks with limited headroom at times of minimum or peak demand. To address the problem, this paper proposes a distributed battery energy storage solution, controlled using an Additive Increase Multiplicative Decrease (AIMD) algorithm. The proposed AIMD+ algorithm uses local voltage measurements and a reference voltage threshold to determine the Additive Increase parameter and to control the charging and discharging of the battery. The voltage threshold used is dependent on the network topology and is calculated using power flow analysis, with peak demand equally allocated between loads. Simulations were performed on the IEEE European test case and a number of real UK suburban networks, using European demand data and a realistic electric vehicle travel model. The performance of the standard AIMD algorithm with fixed voltage threshold and the proposed AIMD+ algorithm with reference voltage profile are compared. Results show that, compared to the standard AIMD case, the proposed AIMD+ algorithm improves the voltage profile, reduces thermal overloads and ensures fairer battery utilisation. Introduction The adoption of electric vehicles (EV) is seen as a potential solution to the decarbonisation of future transport networks, offsetting emissions from conventional internal combustion engine vehicles. The current rate of EV uptake is anticipated to increase with improved driving range, reduced cost of purchase and greater emphasis on leading an environmentally-friendly lifestyle [1]. It is predicted that by 2030, there will be three million plug-in hybrid electric vehicles (PHEV) and EVs sold in Great Britain and Northern Ireland [2], and it is expected that by 2020, every tenth car in the United Kingdom will be electrically powered [3]. It is anticipated that the majority of PHEV/EV will be charged at home, putting additional stress on the existing local low voltage distribution network, which must then cater for the increased demand in energy [4,5]. Uncontrolled charging of multiple PHEV/EV can raise the daily peak power demand, which leads to increased transmission line losses, higher voltage drops, equipment overload, damage and failure [6][7][8][9]. Accommodating the increased demand and mitigating such failures is a major area of research interest, with the focus mainly placed on the coordination and support of home charging. One previously proposed storage control approach relies on bus voltage and network load measurements to prevent system overloads. Yet, these kinds of storage control systems do require communication infrastructures to relay the network information and control instructions. This requirement has also been addressed in the comprehensive review on storage allocation and application methods by Hatziargyriou et al. [23]. In the presented work, a control algorithm is proposed that removes the need for such inter-BESS communication, since it only uses local voltage measurements to infer the network operation. Yet, to prevent conflicting device behaviour, the underlying coordination mechanism is of particular importance. Assuring convergence, the AIMD algorithm is perfectly suited for such coordinated control.
Originally, AIMD algorithms were applied to congestion management in communications networks using the TCP protocol [24], to maximise utilisation while ensuring a fair allocation of data throughput amongst a number of competing users [25]. AIMD-type algorithms have previously been applied to power sharing scenarios in low voltage distribution networks, where the limited resource is the availability of power from the substation's transformer. For instance, such an algorithm was first proposed for EV charging by Stüdli et al. [26], requiring a one-way communications infrastructure to broadcast a "capacity event" [27,28]. Later, their work was further developed to include vehicle-to-grid applications with reactive power support [29]. The battery control algorithm proposed in this paper builds upon the algorithm used by Mareels et al. [30], where EV charging was organised by including bidirectional power flow and the use of a reference voltage profile derived from network models. Similar to the work by Xia et al. [31], who utilised local voltage measurements to adjust the charging rate, only voltage measurements at the batteries' connection sites were used in this work to control the batteries' operations. Previous research is therefore extended by the work presented here, as previous work has only utilised common set-point thresholds for controlling each of the DERs. The approach proposed in this paper ensures that unavoidable voltage drops along the feeder do not skew the control decisions, and voltage oscillations caused by demand variation are taken into control considerations. In contrast to previous work, where substation monitoring was used to inform control units of the transformer's present operational capacity, the proposed AIMD+ algorithm does not require this information and, hence, does not require such an extensive communications infrastructure. System Modelling In this section, the underlying assumptions to validate the research are addressed. Next, a model to describe EV charging behaviour is explained. This is followed by a model of the BESS. Finally, the network models used to simulate the power distribution networks are explained. Assumptions For this work, several underlying assumption were made to obtain the models: 1. The uptake of EVs is assumed to increase and, hence, to have a significant impact on the normal operation of the low voltage distribution network. This assumption is based on a well-established prediction that the majority of EV charging will take place at home [32]. 2. The transition from internal combustion engine-powered vehicles to EVs is assumed to not impact the users' driving behaviour. Similar to [33], this assumption allows the utilisation of recent vehicle mobility data [34] to generate leaving, driving and arriving probabilities, from which the EV charging demand can be determined. 3. The transition to low carbon technologies will increase the variability of electricity demand, and therefore, grid-supporting devices, such as BESS, are anticipated to play a more important role [35]. Hence, alongside a high uptake of EVs, an increased adoption of distributed BESS devices is assumed. 4. It is assumed that BESS solutions, or more specifically battery energy storage solutions, start the simulations at 50% SOC and are not 100% efficient at storing and releasing electrical energy, as in [36]. Additionally, its utilisation will degrade the energy storage capability and performance over time, as shown in [37]. 
Therefore, the requirements for equal and fair storage usage is of high importance. 5. It is assumed that the load profiles provided by the IEEE Power and Energy Society (PES) are sufficient as base load profiles for all simulations. Electric Vehicle Charging Behaviour From publicly-available car mobility data [33,34] an empirical model was developed to capture the underlying driving behaviour. The raw data, n r (t), represents the probabilities of starting a trip during a 15-min period of a weekday. Three continuous normal distribution functions, each defined as: were used to represent vehicles leaving in the morning,n m (t), lunch time,n l (t), and in the evening,n e (t). The aggregate probability of these three functions was optimised using a Generalised Reduced Gradient (GRG) algorithm to fit the original data. In order to represent a symmetric commuting behaviour, i.e., vehicles departing in the morning and returning during the evening, an equality amongst the three probabilities was defined as follows: The resulting parameters from the GRG fitting of the three distribution functions are tabulated in Table 1. Additionally, the resulting departure probabilities, as well as the reference data n r (t) are shown in Figure 1. Statistical data capturing the probability distribution of a trip being of a certain distance were also extracted from the dataset. This was done for both the weekdays w wd (d) and weekends w we (d). The Weibull function was chosen to be fitted against the extracted probability distributions and is defined as: Performing the curve fitting using the GRG optimisation algorithm, a weekday trip distance distribution,ŵ wd (d), and a weekend trip distribution,ŵ we (d), could be estimated. The computed function parameters for these two estimated distribution functions are tabulated in Table 2. Their resulting probability distributions are plotted for comparison against the real data, w wd (d) and w we (d), in Figure 2. 15.462 0.6182 w we (t) 38.406 0.4653 Figure 2. The probability of a trip being of a particular distance during a weekday, extrapolated into a Weibull distribution (RMS error: 3.791%). In addition to these probabilities, an average driving speed of 56 kmh (35 mph) and an average driving energy efficiency of 0.1305 kWh/kmh (0.21 kWh/mph) are taken from [38]. Using the predicted driving distance and average driving speed with the driving energy efficiency, it is possible to estimate an EV's energy demand upon arrival. Starting to charge from this arrival time until the energy demand has been met allows the generation of an estimated charging profile of a single EV. To do this, a maximum charging power of the U.K.'s average household circuit rating (i.e., 7.4 kW) and an immediate disconnection of the EV upon charge completion were assumed [39]. Generating several of those charging profiles and aggregating them produces an estimated charging demand for an entire fleet of EVs. To provide an example, charge demand profiles for 50 EVs were generated, aggregated and plotted in Figure 3. This plot shows the expected magnitude and variability in energy demand that is required to charge several EVs at consumers' homes based on the vehicles' daily usage. This model's EV charging behaviour has been implemented to reflect EV demand if applied today without widespread smart charging infrastructure. It does therefore reflect the worst case scenario. 
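As a minimal sketch of how the fitted trip-distance distribution translates into charging demand, the snippet below samples weekday distances from a Weibull distribution and converts them to energy and charging time using the stated consumption (read here as 0.1305 kWh per km) and the 7.4 kW charger rating. The assignment of the Table 2 parameters to scale (in km) and shape is an assumption, as is the simplification of one round trip per vehicle per day.

```python
import numpy as np

rng = np.random.default_rng(42)

# Fitted weekday Weibull parameters from Table 2 -- mapping the first value to the
# scale (km) and the second to the shape is an assumption made for this sketch.
SCALE_KM, SHAPE = 15.462, 0.6182
KWH_PER_KM = 0.1305   # average driving energy consumption (0.21 kWh per mile)
CHARGER_KW = 7.4      # UK household circuit rating used in the paper

def daily_ev_demand(n_vehicles):
    """Sample one weekday trip per vehicle; return per-EV charge energy and duration."""
    distance_km = SCALE_KM * rng.weibull(SHAPE, size=n_vehicles)  # trip distance samples
    energy_kwh = distance_km * KWH_PER_KM                         # energy to replace on arrival
    charge_hours = energy_kwh / CHARGER_KW                        # time at full charger power
    return energy_kwh, charge_hours

energy, hours = daily_ev_demand(50)
print(f"mean demand {energy.mean():.1f} kWh, mean charge time {hours.mean():.1f} h")
```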
Future smart-charging schemes would mitigate the currently present collective EV charging spike, yet the implementation and validation of available smart-charging schemes lies beyond the scope of this paper. This model's data were used to feed additional demand into the power network models, which are outlined in the next section. Battery Modelling For this work, a well-established model that has been used in previous publications by this research group was used [36,40,41]. This model consists of a battery with a self-discharge loss that is dependent on the current battery's State Of Charge (SOC) and an energy conversion loss to represent the energy lost when charging or discharging this battery. A complete list of all notations that are used for this battery model is included in Table 3. When an ideal battery charges or discharges, the change in SOC is related by the battery power, P bat . When sampling battery operation at a regular period, τ, then the energy transferred into the battery can be described as P bat (t)τ. The change in SOC for this ideal battery, δ SOC , is therefore defined as: The self-discharge loss is added to this ideal battery model to represent the continual loss of energy in the battery typical of chemical energy storage. This self-discharge loss, δ SOC,sel f -discharge , is proportional to the current SOC and is determined using the self-discharge loss factor, µ: Additionally, to represent the losses in the power electronics and energy conversion process, an energy conversion loss, δ SOC,conversion , is defined. This loss is proportional to the rate at which the battery's SOC changes, by using the energy conversion efficiency,η as follows: Here, the conversion losses in the power electronics are reflected as an asymmetric efficiency, which depends on the direction of the flow of energy. This is done by charging the battery at a lower power when consuming energy and discharging it more quickly when releasing energy. Mathematically, this can be represented as: When substituting the self-discharge loss and conversion losses, respectively δ SOC,sel f -discharge and δ SOC,conversion , into the SOC evolution equation, the full battery model can be summarised as follows: In addition, both the SOC and the P bat are constrained due to the device's maximum and minimum energy storage capabilities, respectively SOC max and SOC min , and maximum charge and discharge rate, P max . These limitations are captured in Equations (9) and (10), respectively. Network Models To simulate the low-voltage energy distribution networks, the Open Distribution System Simulator (OpenDSS) developed by the Electronic Power Research Institute (EPRI) was used. It requires element-based network models, including line, load and transformer information, and generates realistic power flow results. Simulations were conducted using the IEEE's European Low Voltage Test Feeder [42] and six detailed U.K. feeder models, that are based on real power distribution networks and provided by Scottish and Southern Energy Power Distribution (SSE-PD). The SSE-PD circuit models were provided as Common Information Models (CIM) during the collaboration on the New Thames Valley Vision Project Project (NTVV) [43]. An example of the IEEE EU LV Test feeder and a U.K. feeder provided by SSE-PD are shown in Figure 4a,b, respectively. A summary of these model's parameters is given in the Table 4. 1 These networks are shown in Figure 4. 
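A compact sketch of the battery model described above is given below. The 7 kWh capacity and 2 kW power rating match the device used in the simulations; the efficiency and self-discharge values, and the exact way the asymmetric conversion loss and self-discharge enter the state-of-charge update, are assumptions consistent with the description rather than a reproduction of the paper's Equations (4)-(10).

```python
def soc_step(soc, p_bat_kw, cap_kwh=7.0, tau_h=1 / 60, eta=0.95, mu=1e-5,
             soc_min=0.0, soc_max=1.0, p_max_kw=2.0):
    """One sampling-period update of the battery state of charge (SOC).

    p_bat_kw > 0 means charging, < 0 means discharging.
    eta : conversion efficiency, applied asymmetrically (assumed form)
    mu  : per-step self-discharge factor proportional to the current SOC (assumed value)
    """
    p = max(-p_max_kw, min(p_max_kw, p_bat_kw))   # enforce the device power rating
    if p >= 0:
        delta = eta * p * tau_h / cap_kwh         # charging stores less than drawn from the grid
    else:
        delta = p * tau_h / (eta * cap_kwh)       # discharging drains more than it delivers
    soc = soc + delta - mu * soc                  # add the SOC-proportional self-discharge loss
    return max(soc_min, min(soc_max, soc))

soc = 0.5                                         # simulations start at 50% SOC
for _ in range(60):                               # one hour of charging at 2 kW, 1-min steps
    soc = soc_step(soc, 2.0)
print(f"SOC after 1 h: {soc:.3f}")
```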
Throughout this paper, all excerpt and time series results were extracted from experiments with the IEEE EU LV Test feeder (i.e., Network No. 1). All concluding results are based on an aggregation of all networks to include network diversity in the analysis. The model-derived EV data and IEEE EU LV Test feeder consumer demand profiles were used in all simulations. The resultant demand profiles represent the total daily electricity demand of households with EVs. These profiles were sampled at τ = 1 min. The OpenDSS simulation environment was controlled using MATLAB, achieved through OpenDSS's Common Object Model (COM) interface and accessible using Microsoft's ActiveX server bridge. Storage Control In this section, the control of the energy storage system is explained. Firstly, the additive increase multiplicative decrease algorithm is presented, and its decision mechanism is explained in full. Then, the voltage referencing, used for AIMD+, is outlined. Additive Increase Multiplicative Decrease The proposed distributed battery storage control is shown in Algorithm 1. The parameter α denotes the size of the power's additive increase step, and β denotes the size of the multiplicative decrease step. It is worth mentioning that α linearly increases and β exponentially decreases, both charging and discharging powers, where discharging power is represented as a negative power flow, i.e., energy released by the battery. The constants V max and V thr are the maximum historic voltage value and the set-point threshold used to regulate the total demand. In the case when the total demand is too high, the local voltages will fall below V thr , and the batteries reduce their charging power and start discharging. This behaviour reduces total demand on the feeder. At simulation start, V max is set to the nominal voltage of the substation transformer, i.e., 240 V, and V thr is set to a fraction of V max , which was found by solving a balanced power flow analysis. The variable V(t) is the battery's local bus voltage, and P max denotes the maximum charging/discharging power of the battery. The charging and discharging power of the batteries is increased in proportion to the available headroom on the network, which is inferred from the local voltage measurement V(t), to avoid any sudden overloading of the substation transformer. Algorithm 1 Compute battery power. 1: Defines the rate for the current voltage reading 2: if V(t) ≥ V thr then Given the voltage levels are nominal... 3: if SOC < SOC max then ...and the battery is not fully charged... 4: ...increase the charging power 5: else If the battery has fully charged... 6: end if 8: if P(t) < 0 then If the battery has been discharging... 9: ...reduce the discharging power by β 10: end if 11: else If voltage levels are not nominal... 12: if SOC > SOC min then ...and battery is charged sufficiently... 13: ...increase discharge power 14: else If the battery is not sufficiently charged... 15: ..shut off 16: end if 17: if P(t) > 0 then If the battery has been charging... 18: ...reduce the charging power by β 19: end if 20: end if 21: P(t) = signum(P(t)) × min{|P(t)|, P max } Limit the power to battery specifications The algorithm itself, as shown in Algorithm 1, contains two decision levels. The first determines whether the network is over-or under-loaded by comparing the local bus voltage, V(t), to the battery's set-point threshold, V thr . 
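A condensed sketch of the control update in Algorithm 1 is given below; the walk-through of its two decision levels continues after the sketch. The threshold-proximity ratio R(t) and the additive step α·R(t) are written in assumed forms, since only their qualitative behaviour is stated (R(t) tends to zero as V(t) approaches Vthr, and power is increased in proportion to the inferred headroom).

```python
def aimd_plus_step(p_kw, v, soc, v_thr, v_max=240.0,
                   alpha=0.1, beta=0.5, p_max_kw=2.0,
                   soc_min=0.0, soc_max=1.0):
    """One control update of battery power following the structure of Algorithm 1.

    p_kw > 0 is charging, < 0 is discharging.  v is the local bus voltage and
    v_thr the per-battery set-point threshold.  R(t) and the step alpha*R(t)
    are assumed forms consistent with the description in the text.
    """
    r = abs(v - v_thr) / max(v_max - v_thr, 1e-9)  # threshold-proximity ratio, -> 0 near v_thr
    if v >= v_thr:                                 # network has headroom
        if soc < soc_max:
            p_kw += alpha * r                      # additive increase of charging power
        if p_kw < 0:
            p_kw *= beta                           # multiplicative decrease of discharging
    else:                                          # network is heavily loaded
        if soc > soc_min:
            p_kw -= alpha * r                      # additive increase of discharging power
        else:
            p_kw = 0.0                             # nothing left to release: shut off
        if p_kw > 0:
            p_kw *= beta                           # multiplicative decrease of charging
    return max(-p_max_kw, min(p_max_kw, p_kw))     # limit to the device rating (Line 21)

print(aimd_plus_step(p_kw=0.5, v=236.0, soc=0.6, v_thr=233.0))
```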
In the event that the network is not under high load, the battery's SOC is compared to its operation limit to check whether the battery can charge, i.e., SOC < SOC max . If there is enough charging capacity left, then the battery's charging power is linearly increased following Line 4. If the battery was previously discharging, the related discharging power is exponentially reduced (Line 9) to reflect the multiplicative decrease. The second decision level is entered when the network is under load. Here, the discharging power is linearly increased if the battery has enough energy stored, i.e., SOC > SOC min (Line 13). Additionally, if the battery was previously charging, then its charging power is multiplicatively reduced (Line 18). The direction of the charging/discharging power adjustment is determined by the first decision level, as well as the threshold proximity ratio R(t). As the battery's bus voltage, V(t), approaches the threshold voltage, V thr , this ratio tends to zero and, hence, stops the battery operation. Therefore, oscillatory hunting is effectively mitigated. The last step of the algorithm (Line 21) assures that the battery charge/discharge power is within its device rating. Reference Voltage Profile When using a fixed voltage threshold, the difference in the location and load of each customer results in the over-utilisation of batteries located at the feeder end. Similar to Papaioannou et al. [44], yet for the control of BESS instead of EV charging, a reference voltage profile is proposed, which is produced by performing a power flow analysis of the network under maximum demand. An example of a fixed threshold and reference voltage profile is shown in Figure 5. In the AIMD+, consumers located at the head of the feeder are allocated a higher voltage threshold, while those towards the end of the feeder have similar voltage thresholds to that of the fixed threshold. This replicates the expected voltage drop along the length of the feeder, hence resulting in a more equal utilisation of battery storage units that are located at those distances. The voltage threshold is set in such a way as to limit the maximum voltage drop to 3% at the end of the feeder. Scenarios and Comparison Metrics In this section, several scenarios are explained that were used to test the performance of the battery control algorithm. Following that is the definition of three comparison metrics. These metrics quantify the improvements caused by the different algorithms in comparison to the worst case scenario. Test Cases and Scenarios In all simulations, the EVs plug-in on arrival and charge at their nominal charging rate until fully charged. The BESS devices were chosen to have a capacity of 7 kWh with a maximum power rating of 2 kW (battery specifications are based on the Tesla Powerwall [45]). Four excerpt cases were defined with different levels of EV and storage uptakes, these are as follows: A A baseline scenario, where only household demand is used. B A worst case scenario, in which EV uptake is 100% and no BESS is used. C An AIMD scenario, in which EV uptake is 100% and each household has a battery energy storage device. Here, each battery was controlled using the AIMD algorithm using a fixed voltage threshold. D An AIMD+ scenario, in which EV uptake is 100%, and each household has a battery energy storage device. Here, each battery was controlled using the AIMD+ algorithm using the optimised reference voltage profile. A storage uptake of 100% was adopted to represent the worst case scenario. 
In addition to the four defined scenarios, a full set of simulations was performed with EV and storage uptake combinations of 0% to 100% in steps of 10%. Performance Metric Definition To obtain comparable performance metrics, three parameters are defined. These parameters capture the improvements in voltage violation mitigation, line overload reduction and the equality of battery usage. All excerpt performance metrics were calculated based on simulations from the IEEE EU LV Test feeder for reproducibility. Parameter for Voltage Improvement The first parameters are ζ * C and ζ * D for, respectively, Cases C and D, and calculate the magnitude of the voltage level improvement by comparing two voltage frequency distributions. More specifically, they find the difference between these probability distributions and compute a weighted sum. Here, the weighting, δ * (v), emphasises the voltage level improvements that deviate further from the nominal substation voltage V ss . If the resulting weighted sum is negative, then the obtained voltage frequency distribution was improved in comparison to the associated worst case scenario. In contrast, a positive number would indicate a worse outcome. The performance metric ζ * C is defined as follows. Here, V min is the lowest recorded voltage, and V max is the highest recorded voltage. P B (v) is the voltage probability distribution of the worst case scenario (Case B), and P C (v) is the voltage probability distributions of Case C (i.e., the case with maximum EV and AIMD storage uptake). Similarly, the parameter ζ * D therefore compares Case D, i.e., the AIMD+ case, with Case B. The aforementioned factor, δ * (v), scales down the summation in Equation (11) for voltages within the nominal operating band, where no voltage violations take place. Voltage violations on the other hand are scaled up to increase their impact on the summation. This scaling was produced using a linear function, with its minimum at V ss , that is defined as: V low and V high are defined as the lower and upper limits of the nominal operation voltage band, respectively. In general, the proposed voltage comparison parameter, ζ * , shows an improvement in voltage distribution when it is negative, whereas a positive value implies a voltage distribution with more voltage violations. Parameter for Line Overload Reduction Similar to measuring the voltage level improvements, all line utilisation probability distributions between the storage and worst case scenarios were compared. This follows a similar equation to before, but uses a different scaling factor, as described in Equation (11): Here, C max is the highest line utilisation. P B (c) and P C (c) present the line utilisation probability distributions for Cases B and C, respectively, and δ * * (c) is the associated scaling factor. Since the relationship between line current and ohmic losses is quadratic, this scaling factor is defined as an exponential function that amplifies the impact of line currents beyond the line's nominal rating. The capacity scale modifier, C min , defines from where the scaling should start and has been set to 0.5 for this work as only line utilisation above 0.5 p.u. was considered. Therefore, a reduction in line overloads would give a negative ζ * * , whereas a positive value implies a higher line utilisation, i.e., worse results. 
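The weighted distribution comparison of Equation (11) can be sketched as below. The linear weighting δ*(v), the nominal band limits and the example histograms are assumed for illustration; only the sign convention (a negative value indicates an improved voltage distribution) is carried over from the definition above.

```python
import numpy as np

def zeta_star(p_case, p_base, v_bins, v_ss=240.0, v_low=216.0, v_high=253.0):
    """Weighted comparison of two bus-voltage frequency distributions (cf. Eq. 11).

    p_case / p_base : normalised voltage histograms for the storage case and the
                      worst case, evaluated on the same bin centres v_bins.
    The weighting grows linearly with distance from v_ss; its slope and the band
    limits v_low / v_high are assumed values for this sketch.
    """
    weight = np.abs(np.asarray(v_bins, dtype=float) - v_ss)
    weight = weight / max(v_high - v_ss, v_ss - v_low)  # ~1 at the edge of the nominal band
    return float(np.sum(weight * (np.asarray(p_case) - np.asarray(p_base))))

v = np.linspace(200, 250, 11)
worst = np.array([0.05, 0.10, 0.20, 0.20, 0.15, 0.10, 0.08, 0.05, 0.04, 0.02, 0.01])
storage = np.array([0.00, 0.02, 0.08, 0.15, 0.20, 0.20, 0.15, 0.10, 0.06, 0.03, 0.01])
print(zeta_star(storage, worst, v))  # negative -> fewer severe undervoltages
```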
Parameter for the Improvement of Battery Cycling The final metric, ζ***, gives an indication of the inequality of battery cycling (one battery cycle is defined as a full discharge and charge of the battery at maximum operating power, i.e., Pmax) across all battery units. It does this by computing the ratio between the peak and mean battery cycling. This Peak-to-Average Ratio (PAR) of the batteries' cycling is defined in the following equation. Here, B is the number of batteries, and c_b,C is the total cycling of battery b during Scenario C. C_C is a vector of B non-negative real values that contains all batteries' cycling values, i.e., c_b,C ∈ C_C. Equally, the battery cycling for Scenario D is captured by ζ***_D. In the unlikely event of an equal cycling of all batteries, ζ*** will have a value of one. Yet, as batteries are operated differently, the value of ζ*** is likely to be greater than one. Therefore, a resulting PAR closer to one implies a more equal and therefore fairer utilisation of the deployed batteries. Results and Discussion In this section, the results generated from all simulations are outlined. In each of the three subsections, the performances of the AIMD and AIMD+ algorithms are compared against each other. To do so, the performance metrics outlined in Section 5.2 were used. In the following subsections, results from the four test cases defined as A, B, C and D in Section 5.1 are explained first, then the results from the full analysis over the large range of EV and battery storage uptake are presented. In the end, these results are summarised and discussed. Voltage Violation Analysis For the comparison of voltage improvements, results compared the algorithms' performances at reducing bus voltage variation, particularly by increasing the lowest recorded bus voltage. Each load's bus voltage was recorded, from which a sample voltage profile, Figure 6, was extracted, where the bus voltage fluctuation over time becomes apparent. It can be seen that the introduction of EVs has significantly lowered the line-to-neutral voltage. Adding BESS devices did raise the voltage levels during times of peak demand, as can be seen between 17:00 and 21:00, where the AIMD+ algorithm has elevated voltages further than the AIMD scenario. To obtain a better understanding of the level of improvement, the voltage frequency distribution of all buses along the feeder was generated and plotted in a histogram in Figure 7. Figure 6 caption: recorded voltage profile at the bus of the customer closest to the substation over the period of one day with a certain uptake in EV and battery storage devices, using a moving average over a window of 5 min; Case A is blue, Case B is red, Case C is yellow and Case D is violet.
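A direct implementation of the cycling Peak-to-Average Ratio defined above is shown below; the cycling counts in the example are hypothetical and only illustrate how the metric separates even from uneven fleet usage.

```python
import numpy as np

def cycling_par(cycles):
    """Peak-to-average ratio of per-battery cycling (zeta***): max over mean.

    `cycles` holds each battery's total number of equivalent full cycles for a
    scenario; a value close to 1 means the fleet is used evenly.
    """
    cycles = np.asarray(cycles, dtype=float)
    return cycles.max() / cycles.mean()

# Hypothetical cycling counts for a six-battery feeder under two control schemes
print(cycling_par([0.1, 0.1, 0.9, 1.0, 1.0, 1.0]))  # uneven usage -> PAR ~ 1.46
print(cycling_par([0.6, 0.7, 0.7, 0.8, 0.7, 0.6]))  # even usage   -> PAR ~ 1.17
```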
These figures show that the AIMD+ control algorithm reduces voltage deviation more effectively as the uptake in storage and EVs increases. For low storage uptake, the AIMD algorithm does not perform as strongly since more ζ * C values are positive and larger than their corresponding ζ * D value. This becomes more apparent when averaging all ζ * C and ζ * D values for their common storage uptake and across all EV uptakes. The resulting averaged metrics are plotted in Figure 9. In this last figure, it can be seen how the sole impact of BESS uptake reflects in a continuing improvement of voltage levels. In fact, both compared algorithms improved the bus voltage, which coincides with the findings in the case studies. On average, this is the case for all BESS uptakes, as ζ * C ≈ ζ * D . Nonetheless, it should be noted that the AIMD+ algorithm had reduced the frequency of severe voltage deviations in comparison to the AIMD algorithm and is more effective during scenarios with lower BESS uptake. Line Overload Analysis Similar to the voltage improvement analysis, a frequency distribution of the line utilisation was generated. Figure 10 shows a probability distribution of the per unit (1 p.u. represents a 100% line usage, i.e., a line current of the same value as the line's nominal current rating) current in all lines, for each of the four scenarios. The corresponding ζ * * C and ζ * * D values for the AIMD and AIMD+ storage deployment have also been included in the figure's caption. In this figure, the observed high probability of line over-utilisation confirms that the used test network is of insufficient capacity to cater for the chosen EV uptake. Here, the AIMD+ controlled storage devices yielded a noticeable reduction in line overloads. This improvement is apparent through the compressed width of the probability distribution and the negative ζ * * D value. In contrast, the AIMD controlled storage devices do not fully utilise the line capacity as effectively, which leads to a positive value of ζ * * C . To evaluate the line utilisation improvement across all simulations, the full range of EV and storage uptake was evaluated. The resulting plots are shown in Figure 11. In these figures, it can be seen how the performance metrics change as EV uptake and storage uptake increase. For the AIMD-controlled BESS, the resulting ζ * * C values are distributed around zero, whereas the AIMD+ algorithm achieved mostly negative values of ζ * * D . These negative values confirm the better usage of available line capacity. This becomes particularly noticeable for scenarios where very low EV uptake is combined with larger BESS uptake. Here, AIMD-controlled storage devices commence their initial charge simultaneously. As they are located closer to the substation, they do not measure a sufficient bus voltage offset to regulate down their charging power. This behaviour causes a number of line overloads at the very beginning of the simulated days. The AIMD+ algorithm on the other hand, with its adjusted thresholds, is more responsive to non-optimal network operation and, therefore, increases the charging rate gradually. This gradual adjustment is based on the fact that the bus voltages in the AIMD+ algorithm are closer to their nominal voltages (i.e., bus voltages found by simulating the feeder with its equally-distributed nominal load) than they are in the conventional AIMD case. A greater voltage disparity, which is the case in AIMD, causes a prolonged additive adjustment to the battery's power. 
This prolonged adjustment is particularly apparent for batteries situated at the bottom of the feeder, as their voltage measurements deviate the furthest from the substation voltage level. AIMD+ on the other hand prevents this behaviour by setting the voltage threshold based on the network's nominal voltage drop, which is dependent on the distance between the BESS and its feeding substation. As a result, the set-point voltage thresholds at the bottom of the feeder are lower than those closer to the substation. Hence, the additive power adjustment is equalised along the entire feeder. Therefore, by applying these individualised control thresholds, the sensitivity of the algorithm is corrected, whilst successfully mitigating the severity of line overloads. Averaging the ζ * * C and ζ * * D values over all EV uptakes gives a clearer indication of performance, as this is now the only variable in the performance analysis. The result is plotted in Figure 12. Here, the hypothesis that AIMD-controlled energy storage devices do not improve line utilisation is confirmed. In contrast, the AIMD+-controlled devices succeed at effectively reducing line overloads. This is also demonstrated by the values of ζ * * C , which remain positive yet close to zero, whereas ζ * * D decreases with increasing uptake of battery storage devices. Whilst the deployment of energy storage has often been seen as a possible solution to defer network reinforcements, the presented results show that this is not always the case. In fact, the importance of choosing an appropriate control algorithm outweighs the availability of the energy storage itself. This becomes particularly apparent when energy storage devices need to recharge their injected energy for times of peak demand. For the AIMD case, this recharging was not controlled sufficiently, which led to higher line currents. The proposed AIMD+ algorithm was not as susceptible to this kind of behaviour, as it has been designed to take battery location into account. This immunity and well-controlled power flow caused little to no additional strain on the network's equipment, allowing the deployed storage devices to also provide voltage support. Battery Utilisation Analysis In this part of the analysis, the batteries' fairness of usage was evaluated. The battery power profiles were recorded; excerpts are plotted in Figure 13 and are arranged by distance from the substation. In this figure, it can be seen that only half of the deployed storage devices were active in Case C (AIMD control), whereas all devices are utilised in Case D (AIMD+ control). From the recorded battery SOC profiles, the net cycling of each battery was computed and divided by the duration of the simulation, giving an average daily cycling value. This value is plotted for each load in Figure 14a. The corresponding statistical analysis is presented in Figure 14b. These two plots show the under-usage of AIMD controlled batteries, as well as the variance in battery usage under AIMD and AIMD+ control. In fact, under AIMD control, 20 out of 55 batteries experienced a cycling of less than 10% per day, whereas the remaining devices were utilised fully. This discrepancy causes the ζ * * * C value to be noticeably larger than ζ * * * D . A more detailed comparison is given when plotting the Peak-to-Average Ratios (PAR) of the batteries' daily cycling over the full range of EV and storage uptake scenarios; these plots are shown in Figure 15. Section 5.2.3 gives the detail on the PAR, ζ * * * . 
The figure shows that for any EV uptake scenario, AIMD-controlled energy storage units were cycled less equally than the AIMD+ controlled devices. Results also show that with a low EV uptake, both the AIMD and AIMD+ algorithm performed worse; yet improved as EV uptake increased. Averaging the PARs for all batteries' SOC profiles over all EV uptake percentages yields a clear performance difference between AIMD and AIMD+. These resulting PARs, i.e., the ζ * * * C and ζ * * * D values for their corresponding storage uptake percentages, are presented in Figure 16. Although the AIMD controlled batteries were, on average, cycled less than the batteries controlled by the proposed AIMD+ algorithm, looking at the average produces a distorted understanding of the performance. In fact, as more than half of the assigned AIMD BESS devices never partook in the network control, a lower average cycling was expected to begin with. The variation in cycling across all batteries, or the cycling PAR, reveals the difference between usage and effective usage. A lower ratio indicates a better usage of the deployed batteries. Conclusions In this paper, an algorithm is proposed for distributed battery energy storage, in order to mitigate the negative impact of highly variable uncontrolled loads, such as the charging of EVs. The improved AIMD algorithm uses local bus voltage measurements and implements a reference voltage profile, derived from power flow analysis of the distribution network, for its set-point control. Taking the distance to the feeding substation into account allowed optimising the algorithm's parameters for each BESS. Simulations were performed on the IEEE EU LV Test feeder and a set of real U.K. suburban network models. Comparisons were made of the standard AIMD algorithm with a fixed voltage threshold against the proposed AIMD+ algorithm using a reference voltage threshold. A set of European demand profiles and a realistic EV travel model were used to feed load data into the simulations. For all conducted simulations, the performance of the energy storage units was improved by using the proposed AIMD+ algorithm instead of traditional AIMD control. The improved algorithm resulted in a reduction of voltage variation and an increased utilisation of available line capacity, which also reduced the frequency of line overloads. Additionally, the same algorithm equalised the cycling and utilisation of battery energy storage, making better use of the deployed battery assets. To take this work further, future work will also consider distributed generation, such as photovoltaic panels, smart-charging EV uptake, as well as decentralised methods for determining voltage reference values, so no prior network knowledge is required.
2016-08-24T23:09:51.855Z
2016-08-17T00:00:00.000
{ "year": 2016, "sha1": "02081d2aaf6c56a33e89743aa88faafa64171819", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1073/9/8/647/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1fd1dd2ef737d413da91880be64e29ec7bd10b72", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
1518398
pes2o/s2orc
v3-fos-license
Computer Simulation and Emergent Reliability in Science While the popular image of scientists portrays them as objective, dispassionate observers of nature, actual scientists rarely are. It is not really known to what extent these individual departures from the scientific ideal affect the reliability of the scientific community. This paper suggests a number of concrete projects which help to determine this relationship. Introduction 1.1 Science aims at developing true, or at least very useful, theories about how the world works. It does this through the efforts of many individuals, each of whom objectively and dispassionately develops, tests, and refines theories about the world. Each scientist employs a method which ensures that neither personal bias nor political pressure will color her results. So as to ensure the process of science is most efficient, scientists communicate their results honestly to one another. They fairly evaluate one another's achievements, rewarding those who are particularly efficient and quickly removing those who do not adhere to the scientific ideal. Through their efforts, large and small, we slowly accumulate better and better theories and learn more and more about the world. 1.2 This description of science should seem familiar. No doubt we have all heard it, and many of us have reproduced parts of this characterization in unreflective moments. Perhaps we believe it to be at least approximately true of the scientific culture (if not accurate in its full generality). A version of this hopeful view of science was recorded in Robert Merton's seminal work The Sociology of Science (1973). Merton discusses the norms of scientific practice, but does little to describe the scientific method. This latter task has been taken up by a number of philosophers and statisticians who seek both to describe the scientific method as it is now practiced and to refine the practice to make individual scientists better guides to the truth. 1.3 Of course, scientists are not so perfect as this image suggests. They are not always objective, dispassionate, and fair. Bad science escapes punishment and good science goes unnoticed. Scientists are human and are subject to many of the same cognitive biases that plague all human reasoning. Because the Pollyanna view of science is so pervasive, many view these departures from it as a symptom of pathology in science. Should a scientist fail to approach a problem dispassionately, we take this as a sign of science gone wrong. This diagnostic procedure has led a number of science commentators to conclude that science has no special claim to validity over any other knowledge-generating cultural practice, a conclusion disputed vehemently by a number of scientists and philosophers. Often these science defenders seek to point out how science in general comes very close to the Pollyanna view presented above. 1.4 There is a third way, which retains the hopeful image of science but acknowledges that many individuals depart from the traditional model of dispassionate seekers of the truth. This third way views the objectivity and reliability of science as an emergent property. It comes about not because every individual in the community is objective and reliable, but because the community is structured in such a way as to ensure that, in the long run at least, only the best theories survive.
[1] 1.5 Taking this third path, one might view departures from the traditional model of perfect scientists in any number of ways. One might view them as unfortunate, but counteracted by a social system designed to minimize the impact of their imperfection. Alternatively, one might view these apparent imperfections as integral parts of the system of science. David Hull, perhaps the strongest advocate of this view, says, ... some of the behavior that appears to be the most improper actually facilitates the manifest goals of science. Mitroff ... remarks that the "problem is how objective knowledge results in science not despite bias and commitment but because of them." Although objective knowledge through bias and commitment sounds as paradoxical as bombs for peace, I agree that the existence and ultimate rationality of science can be explained in terms of bias, jealousy, and irrationality (1988, 32). 1.7 Hull's work, and that of others, has been primarily historical, focusing on specific cases in the history of science and generalizing from them. Unfortunately, historical observation is subject to the vicissitudes of a variety of cultural factors which constrain which alternative situations arise and how groups of scientists perform in their tasks. As an alternative, I suggest this set of questions is best tackled using mathematical and simulation modeling, because this methodology allows for the exact comparison of a number of different possible individual behaviors. 1.8 Those who pursue the third way, between the traditional image of science and science skepticism, must grapple with one central question. What is the relationship between individual scientific behavior and the reliability of a scientific community? In order to convincingly argue that science is objective as a community even though its members are not objective, one must develop a theory of how this objectivity arises. Developing a detailed theory of this relationship serves two purposes. First, it allows us to determine the degree to which one ought to trust the results of the scientific enterprise. Second, it provides direction for science policy makers to improve the structure of science so as to maximize its ability to seek the truth despite the "imperfections" of its practitioners. 1.9 This very general question is likely far too broad to be of any real use. In the paragraphs that follow, I will suggest a number of particular questions that have arisen already in the philosophical literature but need further investigation. Question 1: How can we make the best out of the limited abilities of individual scientists? 2.1 Sophisticated theories of proper scientific method are now common in philosophy and statistics. They often feature complex mathematical operations which require detailed background assumptions and significant computational power. While many scientific journals require rigorous treatment of the data presented in individual articles, a number of important scientific decisions are made more informally, utilizing the far less sophisticated decision-making and inference tools of everyday life. While a scientist must do statistical analysis on an individual dataset, the choice of experiment is often made without any significant calculation, contrary to the recommendation of some theories of scientific method.
2.2 That scientists sometimes make decisions on the basis of simple heuristics rather than complex calculations might have a significant impact on what sorts of social arrangements are best for science. For example, it is usually the case that if individuals are choosing their experiments optimally, more information is always better. However, if scientists are choosing their experiments according to a simpler (and non-optimal) heuristic, significantly limiting information can be productive (Bala and Goyal 1998, Ellison and Fudenberg 1993, Zollman 2007, 2010). 2.3 Scientists are often required to make assessments about what theory a body of evidence supports. This task can be quite difficult because often not all relevant evidence is published. Results which are not statistically significant, but are nonetheless relevant, are usually left in a scientist's "file-drawer." A significant amount of work in statistics has been done on this problem, known as the file-drawer problem (Rosenthal 1979), but it is unclear whether scientists always employ proper statistical reasoning. What would be the impact if scientists ignore the file-drawer problem and treat the published evidence as if it were the only evidence? Does this affect the reliability of individual scientists, and should it impact the way we select papers to publish (Zollman 2009)? Question 2: In what context and to what extent is heterogeneity in science beneficial? 3.1 Most philosophical accounts of scientific method found in both the philosophical and statistics literatures are individualistic: they focus on the proper behavior of an individual scientist. Often these theories leave little room for heterogeneity. Some heterogeneity is undoubtedly good in science; we wouldn't want everyone to work on the same problem. Beyond the division of labor among different disciplines, we might also prefer that individuals pursue different avenues for solving the same problem. In this way the community hedges its bets. Thomas Kuhn (1977) suggests that the only way this can be achieved is by allowing individual scientists to have different "scientific methods." Before the group accepts [a scientific theory], a new theory has been tested over time by research of a number of [people], some working within it, others within its more traditional rival. Such a mode of development, however, requires a decision process which permits rational men to disagree, and such disagreement would be barred by the shared algorithm which philosophers have generally sought. If it were at hand, all conforming scientists would make the same decision at the same time (332). 3.2 Kuhn is perhaps too quick in assuming that only heterogeneous scientific standards can produce heterogeneity in a group, however. The study of symmetry breaking in complex systems has shown how groups of homogeneous individuals might nonetheless differentiate themselves. Closer to our topic, Kitcher (1993) and Strevens (2003a, 2003b) have shown how the drive to be the first discoverer of a scientific result can cause identical individuals to choose diverse scientific projects. Similarly, I have shown that in certain types of problems, limiting access to information or encouraging stubbornness can help maintain diversity even when individuals share a single scientific method (Zollman 2007, 2010). There remain many questions, however. Is Kuhn right, that the best way to maintain diversity is with individuals who have diverse standards? What standards? How diverse?
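The kind of agent-based model gestured at in 2.2 and 3.2, in which limiting communication among boundedly rational learners can change group outcomes, can be sketched in a few lines. The sketch below is a toy two-armed-bandit model loosely in the spirit of the work cited above, not a reproduction of any of those models: agents repeatedly use whichever of two methods they currently believe is better and update naive success-rate estimates from their own results and those of their network neighbours. The network structures and parameter values are illustrative assumptions.

```python
# Toy model, loosely in the spirit of the network-epistemology literature
# cited above (not a reproduction of any specific published model): agents
# choose between two methods with unknown success probabilities, share
# results with network neighbours, and update naive success-rate estimates.
import random

def run_community(neighbours, p_true=(0.5, 0.55), rounds=300, seed=0):
    rng = random.Random(seed)
    n = len(neighbours)
    succ = [[1.0, 1.0] for _ in range(n)]    # pseudo-counts of successes
    trials = [[2.0, 2.0] for _ in range(n)]  # pseudo-counts of trials
    for _ in range(rounds):
        results = []
        for i in range(n):
            # each agent myopically uses the method it currently rates higher
            arm = 0 if succ[i][0] / trials[i][0] > succ[i][1] / trials[i][1] else 1
            results.append((arm, 1 if rng.random() < p_true[arm] else 0))
        for i in range(n):                   # learn from self and neighbours
            for j in [i] + neighbours[i]:
                arm, outcome = results[j]
                succ[i][arm] += outcome
                trials[i][arm] += 1
    # fraction of agents who end up rating the objectively better method (arm 1) higher
    return sum(succ[i][1] / trials[i][1] > succ[i][0] / trials[i][0]
               for i in range(n)) / n

n = 8
complete = [[j for j in range(n) if j != i] for i in range(n)]  # everyone sees everyone
ring = [[(i - 1) % n, (i + 1) % n] for i in range(n)]           # sparse communication
print("complete network:", run_community(complete))
print("ring network:    ", run_community(ring))
```

Comparing such runs across many seeds and network structures is the sort of exercise that lets one ask whether, and when, restricting the flow of information helps the group converge on the better method.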
3.3 Beyond the issue of diversity in standards, one can consider diversity in educational background or conceptual "schemes." Hong and Page (2001, 2004) have argued that groups which are made up of worse, but more diverse, individuals are better at solving certain types of problems than groups that are made up of the best individuals. As applied to science, this suggests that one might not always want to choose the "best and brightest" without concern for diversity in the group. 3.4 All of the above results are limited to particular sets of problems with particular assumptions about the background information available to scientists. While many of us might see the benefit of pursuing several different fundamental physical theories (like string theory, loop quantum gravity, etc.), we would not see a similar benefit in entertaining the flat earth hypothesis. Even if diversity is beneficial, it will not be beneficial in all contexts (Zollman 2011). More investigation is needed to determine what types of diversity might be beneficial and under what conditions it should be sought out. 3.5 Heterogeneity in science raises an additional important question: how scientists should respond to diversity of opinion. There is an extensive literature which tackles this from the perspective of a single individual in economics (see Aumann 1976, and the resulting literature) and philosophy (see Feldman 2004, and the resulting literature). More recently there has been interest in using simulation to evaluate this question from the perspective of a scientific group (Hegselmann and Krause 2006; Douven and Riegler 2010). Question 3: How robust is the scientific community to intentional misconduct like selective reporting, misreporting, and falsification? 4.1 With some regularity, a case of serious misconduct occurs which attracts national attention: a scientist seriously fudges or outright fabricates data. In the most serious cases, these scientists have been influential. The social system of science is supposed to contain policing mechanisms to prevent this sort of behavior. Experiments are supposed to be repeatable and data is (at least occasionally) provided on request. But, nonetheless, one might ask whether the extant mechanisms are sufficient to catch fudging and outright fabrication. If not, one might question the widespread assumption that these cases are anomalous. Even if the current system is sufficient, knowing what features of the system are most effective can ensure that future science policy changes do not act to (unintentionally) disrupt these parts of science. 4.2 Beyond serious misconduct, there are a number of small-scale deviations from perfect behavior that are undoubtedly widespread. Scientists will choose not to publish data which does not conform to their preferred theory. Individuals might make small changes in the design of an experiment to influence the results. Rarely are failures to replicate a result published, because negative results are regarded as less valuable than positive results. One might wonder how much these small-scale instances of misconduct influence the collective behavior of science. Is science designed to counteract these small fudges, and if so, how? Question 4: What is the effect of social biases like conformist bias, social power, etc. on the outcome of scientific practice?
5.1 Volumes of psychological and sociological studies have demonstrated that individual humans are subject to all sorts of individual biases. We tend to pay attention to outliers, to data that confirms our beliefs, and to those in positions of power. These biases are seen, from the individual perspective, as erroneous. We do not perform appropriate statistical tests when thinking informally about random events, we don't give equally reliable data equal weight, and we don't question the reliability of those we trust. 5.2 David Hull, in the quote above, seems to suggest that these sorts of biases contribute to the effectiveness of the scientific society. Karl Popper, for instance, argued that refusing to abandon a theory in light of its refutation was inconsistent with his normative standard for individual scientific behavior, but was nonetheless useful from a community perspective (1975). 5.3 Different biases might influence individuals in different directions, and as a result it might be that biases "cancel out." Alternatively, social biases might help scientists make proper decisions because the bias points in the right direction (Brock and Durlauf 2002). Of course this need not be the case, and understanding what sorts of social circumstances lead to these nonharmful effects would help us determine when to trust science and what sorts of science policy ought to be adopted. Question 5: How does the system of scientific reward (publication, tenure, grants, etc.) influence scientists' choices? Is it helpful or harmful in producing socially desirable outcomes? 6.1 Along with the professionalization of science in the 19th century came a new set of incentives for scientists. A scientist is not solely concerned with discovering truth, but must also worry about securing tenure, promotion, grants, and awards. The metric of scientific success is often the peer-reviewed scientific paper, which correlates imperfectly with the discovery of important truths. More recently, the ascent of citation metrics as a method for scientific evaluation has evoked calls for reconsidering how good science is evaluated and rewarded. 6.2 Of primary concern has been the degree to which rewards in the sciences discourage "high-risk/high-reward" scientific research. The fact that many great scientific successes of the past were of this type leads many to suggest that our current system for rewarding scientific success is broken. Much of this discussion has been informal and the examples tend to be anecdotal. There is, however, a significant theoretical apparatus developed in economics which can tackle these sorts of problems. 6.3 The work of Kitcher and Strevens provides one illuminating example. They suggest that the "priority rule" reward system, which gives sole credit to the first discoverer of a particular result, is socially optimal because it encourages an appropriate allocation of scientists among different research programs, each of which might achieve a desired result. Their model relies on a number of strong assumptions about the degree of information available to the scientists, and it has been shown via computer simulation that their results depend critically on these assumptions (Muldoon and Weisberg 2009). Regardless of how one views this particular model, these results illustrate how further studies of this research program might be carried out.
Conclusion 7.1 Developing a full understanding of the relationship between individual scientific behaviors and the properties of scientific groups will undoubtedly take a significant amount of research by a number of individuals. Ultimately, however, undertaking such a project will help us to understand the importance (or lack thereof) of individual virtues like objectivity and neutrality. Knowing this relationship will guide both the study of science in philosophy and other social sciences, and it will help to guide science policy in productive directions by focusing attention on those pathological behaviors which threaten science.
2015-07-13T20:20:30.000Z
2011-10-31T00:00:00.000
{ "year": 2011, "sha1": "529b964d281729e49ff3337ac75866b4a23a39f7", "oa_license": "CCBY", "oa_url": "https://www.jasss.org/admin/get_pdf.php?source=https://jasss.org/14/4/15.html", "oa_status": "GOLD", "pdf_src": "Grobid", "pdf_hash": "fc52dbd187bc2e64df5446a11abc7611a1bd65ed", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
28058744
pes2o/s2orc
v3-fos-license
Asymmetric Type F Botulism with Cranial Nerve Demyelination We report a case of type F botulism in a patient with bilateral but asymmetric neurologic deficits. Cranial nerve demyelination was found during autopsy. Bilateral, asymmetric clinical signs, although rare, do not rule out botulism. Demyelination of cranial nerves might be underrecognized during autopsy of botulism patients. The Patient While traveling in Connecticut, a man from North Carolina in his mid-60s was admitted to a community hospital for new onset of diplopia, vertigo, truncal ataxia, and vomiting. Approximately 10 days before admission, the patient had been prescribed doxycycline for sinusitis. The physical examination at admission was notable for dilated, asymmetric (5 mm on the right and 4 mm on the left), sluggishly reactive pupils; cranial nerve IV palsy; bilateral proptosis (right more than left); bilateral peripheral facial weakness; and proximal left upper extremity weakness. Routine blood test results and imaging studies of head, brain, and orbits were unremarkable. During hospital day 1, the patient experienced hypophonia, complete ophthalmoplegia, bilateral ptosis (right more than left), pupils unresponsive to light, dysphagia, and bilateral limb-girdle muscular weakness. His cognition remained intact, and no sensory deficits were documented. Cerebrospinal fluid contained 4 leukocytes/mm3, 70 mg glucose/dL, and 43 mg protein/dL; bacterial cultures were negative. Serum was negative for IgG and IgM against Borrelia burgdorferi, and rapid plasma reagin test results were negative. Test results for antibody levels against Campylobacter spp. and gangliosides (anti-GQ1b IgG) were negative. On hospital day 2, the patient had difficulty breathing and bulbar signs progressed, followed by descending extremity weakness (left more than right) and areflexia. A diagnosis of Miller Fisher syndrome (MFS), a variant of Guillain-Barré syndrome, was considered, and treatment with intravenous immunoglobulin was begun. The Connecticut Department of Public Health and the Centers for Disease Control and Prevention (CDC) were contacted for a botulism consultation. Botulinum antitoxin was not administered at that time because asymmetric neurologic deficits and lack of exposure to injection-drug use or home-preserved foods made botulism unlikely. Respiratory paralysis progressed, and on hospital day 3 the patient required mechanical ventilation. A diagnosis of botulism was reconsidered; however, antitoxin was not administered because an alternative diagnosis (MFS) was still likely. Serum was collected on hospital day 5 and sent to CDC for botulism testing; a stool sample was collected on hospital day 14 after resolution of ileus. Neurologic improvement was first noted on hospital day 7, consisting of improved upper-extremity strength. On hospital day 14, weaning from mechanical ventilation was complete. Botulinum toxin type F was confirmed in the serum sample on hospital day 16. Treating physicians and CDC agreed that administration of antitoxin might still be beneficial because of potential clostridial intestinal colonization. On hospital day 17, investigational heptavalent (A-G) botulinum antitoxin was administered (5). Stool sample was negative for botulinum toxin and botulinum toxin-producing Clostridium spp. On hospital day 28, the patient experienced self-limited serum sickness. He was discharged to a rehabilitation center later the same day.
At the time of admission to the rehabilitation center, the patient was able to stand with assistance. At 6 weeks after symptom onset, he was able to keep his eyes open and walk with assistance, but dysphagia persisted. Subsequently, he continued to improve. Approximately 9 weeks after the illness had begun, the patient was found unresponsive; cardiopulmonary resuscitation was unsuccessful. Autopsy was limited to the brain and demonstrated inflammatory demyelination of cranial nerve tissue (Figure). An epidemiologic investigation was conducted by state and local health departments in Connecticut and North Carolina. No contacts experiencing similar paralytic illness were identified. Two food items consumed by the patient were submitted for CDC analysis; both were negative for botulinum toxin and botulinum toxin-producing Clostridium spp. Conclusions This case illustrates the challenge of diagnosing a rare form of botulism in a patient with atypical clinical features. At initial examination, the patient had bilateral but asymmetric cranial nerve deficits and extremity weakness. Asymmetric clinical signs are unusual for botulism but have been documented previously with non-type F botulism (6). Additionally, truncal ataxia, an uncommon finding, was present. The time from symptom onset to intubation for this patient was 3 days, which is longer than previously recorded for type F patients (most are intubated within <24 hours) (1). Some clinical characteristics were typical for type F botulism, such as time to initial motor improvement and duration of ventilatory support (1). Also similar to reports of other type F cases, the mechanism of botulism intoxication in this patient was unclear (1). Intestinal colonization was suspected on the basis of recent antimicrobial drug use and absence of known risk factors for foodborne or wound botulism but was not thoroughly investigated because of the limited availability of stool samples. A diagnosis of MFS was considered early in the clinical presentation but was eventually ruled out in favor of botulism; botulism sometimes is misdiagnosed as MFS (7). The triad of ophthalmoplegia, areflexia, and ataxia in this patient supported a diagnosis of MFS (8), although the former 2 findings also can be observed with botulism (3). Progression to descending paralysis was typical of botulism (2,3). The cerebrospinal fluid protein level and Campylobacter spp. and anti-GQ1b IgG ganglioside antibody test results did not support a diagnosis of MFS. Anti-GQ1b ganglioside antibodies are present among >90% of MFS patients (8). Given the descending pattern of paralysis, positive mouse bioassay for type F botulinum neurotoxin, and lack of supporting laboratory evidence for an MFS diagnosis, we believe that the patient's neurologic illness was caused by botulism alone. The patient's cause of death is unclear. Death occurred after 2 months of sustained neurologic recovery; botulism relapse was not clinically apparent. Brain autopsy did not elucidate a cause of death; however, the cranial nerve demyelination is noteworthy. According to rare reports, neuropathologic features of botulism include normal histopathologic appearance of peripheral nerves and nonspecific, microscopic hemorrhage and vascular engorgement in the central nervous system (9,10). However, cranial nerve demyelination was reportedly found in 1 type A botulism patient who received type E antitoxin (11).
The mechanisms that account for termination of botulinum toxin action and elimination of toxin from cranial nerves remain unidentified, and the possibility of toxin-induced demyelination cannot be excluded in the patient reported here. Alternatively, the abundant inflammatory cells in areas of demyelination might reflect the allergic response to investigational heptavalent botulinum antitoxin manifested by serum sickness reaction. Although we are unable to conclude which hypothesis is more likely, the fact that the patient's baseline neurologic function was within normal limits weighs against causes preceding his episode of botulism. We conclude that a bilateral but asymmetric presentation of neurologic signs, although rare, does not rule out the possibility of botulism. In addition, demyelination of cranial nerves might be an underrecognized finding during autopsy of botulism patients, possibly resulting from either the effects of botulism or its treatment.
2014-10-01T00:00:00.000Z
2012-01-01T00:00:00.000
{ "year": 2012, "sha1": "0ba582bac7a3408f202fa588fd144422e7b18822", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3201/eid1801.110471", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0df92e3898ebd2d471e2081da35512e87cc987d2", "s2fieldsofstudy": [ "Medicine", "Psychology", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
266311541
pes2o/s2orc
v3-fos-license
Spatial–temporal distribution and key factors of urban land use ecological efficiency in the Loess Plateau of China Urban land use ecological efficiency is crucial to the urbanization process and urban ecosystem sustainability. Cities in ecologically sensitive zones with frequent natural disasters need more complex land use patterns and plans. Achieving the goal of harmonizing economy and ecosystem is key for sustainable development policy makers in these cities. Aiming to explore the urban land use ecological efficiency (LUEE) of ecologically sensitive areas, urban land use ecological efficiency index system of the Loess Plateau was constructed, the SBM-Tobit model was adopted to measure the LUEE and influencing factors from 2009 to 2018, and the characteristics of spatial–temporal evolution was discussed. The results indicated that there were significant spatial differences of LUEE in ecologically sensitive zone. The high-level cities of LUEE were located in the southwest areas, while low-level cities of LUEE were mostly situated in the northeast zones, and the temporal variation trend showed the characteristic of “W” curve. Additionally, the results of key factors identification demonstrated that science and technology expenditure and public transport development had positive effects on urban LUEE, while the land expansion, GDP growth, the second industry and real estate development will limit the improvement of urban LUEE. This study used the scientific evaluation index system and key factors identification method to explore the land use ecological efficiency in ecologically sensitive zones, aiming to provide a case study reference for urban land management and optimization in ecologically fragile areas. Spatial-temporal distribution and key factors of urban land use ecological efficiency in the Loess Plateau of China Lanyue Zhang 1 , Yi Xiao 2* , Yimeng Guo 3 & Xinmeng Qian 4 Urban land use ecological efficiency is crucial to the urbanization process and urban ecosystem sustainability.Cities in ecologically sensitive zones with frequent natural disasters need more complex land use patterns and plans.Achieving the goal of harmonizing economy and ecosystem is key for sustainable development policy makers in these cities.Aiming to explore the urban land use ecological efficiency (LUEE) of ecologically sensitive areas, urban land use ecological efficiency index system of the Loess Plateau was constructed, the SBM-Tobit model was adopted to measure the LUEE and influencing factors from 2009 to 2018, and the characteristics of spatial-temporal evolution was discussed.The results indicated that there were significant spatial differences of LUEE in ecologically sensitive zone.The high-level cities of LUEE were located in the southwest areas, while low-level cities of LUEE were mostly situated in the northeast zones, and the temporal variation trend showed the characteristic of "W" curve.Additionally, the results of key factors identification demonstrated that science and technology expenditure and public transport development had positive effects on urban LUEE, while the land expansion, GDP growth, the second industry and real estate development will limit the improvement of urban LUEE.This study used the scientific evaluation index system and key factors identification method to explore the land use ecological efficiency in ecologically sensitive zones, aiming to provide a case study reference for urban land management and optimization in ecologically fragile areas. 
With the development of social productivity, the global urbanization process has accelerated since the industrial revolution 1,2 .The speed of land expansion is faster than population urbanization, which aggravates the contradiction between land system and ecosystem, and seriously hinders the harmonious relationship between human and nature 3,4 .The increase of population base and expansion of the scope of human production and living have led to a sharp rise in the social demand for land resources in urban development.The land is historical product of natural processes, the carrier for human survival and development, and indispensable resources for economic development as well as progress of human society.However, the land has gradually become a scarce resource with the deepening of urbanization process.As a complex system involved in the process of production and living, the current situation of urban LUEE reflects the differences of the basic form between layout planning and functional areas, and directly affects the urban socioeconomic system and ecological system 5 .Therefore, how to protect and use land resources sustainably in the process of production and living is the key issue, which is essential to the urban planning and land protection. Under the new development pattern of China, it is particularly important to coordinate the shortage of resource elements and allocate resource elements rationally.Soil erosion, landslide disaster, barren land and drought directly lead to the limitation of land resources for sustainable use on the Loess Plateau.Rapid urbanization and industrialization will also occupy ecological protection land and agricultural land.These problems restrict urban green space, sustainable agriculture development and livable city construction.The urban land use ecological efficiency refers to the coordinated improvement of economic, social and environmental benefits in the process of urban land use, taking ecological protection as the premise and efficient use of land resources as the goal.It embodies the harmonious symbiosis between human activities and the natural environment, and www.nature.com/scientificreports/ is an important indicator to measure the level of sustainable development of cities.The urban land use ecological efficiency emphasizes the balance and coordination in urban planning, land resource allocation, ecological protection and environmental quality, aiming at the maximization of land resource utilization, the minimization of environmental pollution, and the sustainable development of social economy 6,7 .Exploring urban LUEE in the study area is conducive to urban planning and ecological protection in ecologically fragile areas.How to evaluate urban land use ecological efficiency scientifically and reveal the key factors is crucial to city sustainability and environmental protection, especially in underdeveloped areas with scarce land resources and frequent natural disasters.The differences between the geographical location, ecological environment and industrial structure of the cities show spatial non-stationarity over time, thus leading to spatial differences in LUEE at various stages of urban development. 
The existing researches have studied land use in different regions and industries 8,9 .In terms of land-use types, urban LUEE is related to the development degree of economic zones, including the development time and scale 10 .Urban land-use types are more complex than those in rural areas, urban land use is characterized by diversification, while rural land use is with fragmentation.In terms of economically developed areas, some studies have suggested that fragmentation and diversity of land use are the key factors that distinguish urban and rural areas.Generally, the land use of urban areas is the most diversified, while those in rural areas are the least diversified 11 .Rural LUEE and agricultural scale complement each other, the improvement of LUEE is conducive to the enhancement of agricultural planting scale and agricultural economic benefits 12 .The optimization level of industrial land is relatively high, showing obvious regional and industrial differences.Therefore, appropriate industrial transfer is beneficial for improving urban LUEE 13 .Existing studies have found that there is a close relationship between urban socioeconomic development and urban LUEE.The acceleration of urbanization will put pressure on the urban LUEE, balancing the relationship between urbanization and land use is beneficial for promoting efficient urban development 14 .The new development pattern in the new era prompts scholars to explore the harmonious relationship between land use and ecological environment protection, and the researches on the efficiency of green space utilization is gradually taken seriously 15,16 . Presently, the researches on urban LUEE focus on the differences between economic development and economic structure, it could be found that cities with high LUEE are generally located in regional economic growth center 17 .Transportation infrastructure construction is also an important factor affecting urban LUEE.Analyzing the relationship between traffic convenience and LUEE can optimize the urban traffic network, which is conducive to the circulation of resource elements 18 .Government regulatory measures directly affect the use and development of land resources, and the local government's implementation of relevant land use policies will also lead to changes in LUEE, which will be affected by land tax policies and prices 19 .Additionally, heterogeneous urban form and layout will also have an impact on urban LUEE, patch size and edge density have different effects on LUEE of large, medium, and small cities 20 . In the existing studies, the methods of evaluating urban LUEE focus on DEA 21,22 , the semi-parametric estimation method 23 , SBM model 24,25 , and SFP 26 .Existing studies have used these methods to construct the evaluation indexes of urban LUEE.The existing researches widely adopt the multi-index evaluation method to analyze the comprehensive level of urban LUEE from the aspects of ecology, industry, government and economy, and these factors are mainly divided into output factors and input factors 27,28 .From what has been discussed above, it could be concluded that existing researches mainly adopt these methods to measure urban LUEE and construct the evaluation index system.Presently, most of the researches are aimed at urban agglomerations in various countries and representative provinces with high economic development level, lacking investigations on urban LUEE and its key factors in ecologically sensitive areas.The DEA model and SBM model are used in most investigations. 
On this basis, SBM and Tobit models are used in this study, which make up for the shortcomings of traditional DEA models in dealing with relaxation variables, and can effectively evaluate the efficiency of unexpected output.By building an indicator system for evaluating the urban LUEE in the study area, the thirty-nine cities are selected for investigation from 2009 to 2018, and the corresponding policy suggestions to improve the LUEE of 39 cities are put forward based on the analysis of key factors. Study area The Loess Plateau belongs to the arid continental monsoon climate region.The precipitation is relatively small and the temporal and spatial distribution is uneven.In addition, the loess is of strong water sensitivity due to its large pores and rich carbonate content.Under the influence and threat of extreme weather, rainstorms occur frequently in the Loess Plateau, causing a variety of natural disasters and hindering the sustainable development of areas.With the gathering of urban residents, the continuous reclamation of wasteland by residents to meet the social development demand has led to the destruction of the original vegetation and intensified the soil erosion in this area.The continuous expansion of urban functional areas has significantly increased regional ecological risks, leading to the shortage of land resources and prominent ecological degradation.This paper excludes cities where data cannot be obtained, and considers the integrity of administrative divisions, selecting 39 cities to explore land use ecological efficiency and its key factors (Fig. 1). SBM model Compared with the stochastic frontier analysis method, which needs to set the production function as the premise, the DEA method is more convenient.However, the traditional data envelopment analysis method cannot effectively solve the problem of input-output relaxation.In the actual production and urban construction, output factors include expected factors and non-expected factors.Therefore, the improved SBM model containing undesired outputs is proposed.The specific formula is as follows. where x , y b and y g represent input, expected output, and undesired output respectively, s − , s b , and s g represent relaxation variables of input, expected output, and undesired output respectively, is the weight vector and ≥ 0 . m , s 1 , and s 2 represent the number of input factors, expected output and undesired output indicators respectively.The subscript " o " in the lower right corner of the variable indicates the evaluated decision-making unit.The attribute and interval of ρ * are the same as formula (1), but when the efficiency value of the model ρ * = 1 , which cannot distinguish the effective DMUs.The ρ * , x , y b , y g , and are the same as those in formula (1) and formula (3), the "" above the variable represents the projection value. Tobit model Due to the SBM model cannot research the influencing factors of urban LUEE in the Loess Plateau, Tobit model was adopted for discussion.The basic model is as follows: (1) where Y i refers to the explained variable, and the evaluation value of urban LUEE is the explained variable of this research.α i represents the intercepted item, β i is the parameters of the item, X it represents the explanatory variable.ε it represents the random perturbation term. 
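For concreteness, the non-oriented SBM objective with undesirable outputs and the censored Tobit regression are commonly written in the following standard forms. These are textbook statements consistent with the variable definitions given above, not transcriptions of the paper's own equations; in particular, λ is assumed to be the weight (intensity) vector mentioned in the text, and the superscripts g and b are taken to mark expected ("good") and undesirable ("bad") outputs respectively.

```latex
% Standard (textbook) form of the SBM objective with undesirable outputs,
% consistent with the variable definitions above; stated as an assumption,
% not copied from the original equation layout.
\begin{equation}
\rho^{*}=\min_{\lambda,\,s^{-},\,s^{g},\,s^{b}}
\frac{1-\dfrac{1}{m}\sum_{i=1}^{m}\dfrac{s_{i}^{-}}{x_{io}}}
     {1+\dfrac{1}{s_{1}+s_{2}}\left(\sum_{r=1}^{s_{1}}\dfrac{s_{r}^{g}}{y_{ro}^{g}}
      +\sum_{k=1}^{s_{2}}\dfrac{s_{k}^{b}}{y_{ko}^{b}}\right)}
\end{equation}
% subject to
\begin{equation}
x_{o}=X\lambda+s^{-},\qquad
y_{o}^{g}=Y^{g}\lambda-s^{g},\qquad
y_{o}^{b}=Y^{b}\lambda+s^{b},\qquad
\lambda,\,s^{-},\,s^{g},\,s^{b}\ge 0 .
\end{equation}
% Standard censored Tobit regression relating the efficiency scores to the
% candidate explanatory variables:
\begin{equation}
Y_{it}^{*}=\alpha_{i}+\beta_{i}X_{it}+\varepsilon_{it},\qquad
Y_{it}=\max\!\left(0,\,Y_{it}^{*}\right),\qquad
\varepsilon_{it}\sim N\!\left(0,\sigma^{2}\right).
\end{equation}
```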
Based on the discussion of the influential factors of urban LUEE in previous studies, it could be found that land use mode, population and economic growth, infrastructure construction, environmental protection and fixed asset investment are the key factors affecting land use ecological efficiency [29][30][31] .This study used the Pearson correlation coefficient to measure the linear correlation between influencing factors, and the value range of Pearson correlation coefficient is [− 1, 1].The greater the absolute value of Pearson correlation coefficient, the higher the linear correlation between the variables, and the coefficient value of 0 means that there is no linear relationship between variables.The variables tested by Pearson correlation coefficient are shown: natural population growth rate (PG), proportion of urban construction land in urban area (PCL), regional GDP growth rate (GG), proportion of secondary industry (PS), the number of buses per 10,000 people (NBP), centralized treatment rate of sewage treatment plant (CTR ), urban real estate investment (REI), the proportion of R&D expenditure in government (PRE).The specific calculation formula is as follows: From what has been discussed above, the LUEE of 39 cities in the study area was empirically analyzed and relevant policy recommendations were proposed.The specific analysis path is shown in Fig. 2. Evaluation index system By summarizing and refining the existing researches, it could be concluded that the index system is divided into two aspects: expected output elements, and undesired output elements [32][33][34][35][36] .The indicator "Urban construction land area" was selected to measure the input condition of land resources in urban construction."Total fixed asset investment" was adopted to represent the urban total capital factor input, the input capital indicator was calculated using the perpetual inventory method to get the urban capital stock based on 2009, and the depreciation rate is calculated at 9.6%.The indicator "Urban employed population" was chosen to evaluate the number of urban labor force.The expected outputs of urban construction include economic growth, social progress and welfare growth, and urban environment improvement 37,38 .The "Per capita GDP" indicator evaluates the current situation of urban development from the economic perspective, selecting "Afforestation coverage rate of built-up area" to measure the urban environmental governance capacity.Undesirable output refers to the adverse impact on the ecological environment and social progress caused by land use, including air pollution, soil pollution, water pollution, and other obstacles hindering the high-quality development of cities 39,40 .This paper selects some ecological environment pollution indicators caused in the process of land use for research.The evaluation index system of urban LUEE was shown in Table 1. (4) The analysis framework of this study.The existing studies on urban land use ecological efficiency and urban development efficiency have concluded that population factors, economic structure, urban construction and government decisions are closely related to urban LUEE 17,41 .Based on correlation tests and data availability of variables, eight variables were selected in this study, which is shown in Table 2. 
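The collinearity screening among the candidate factors described above relies on the Pearson coefficient; for reference, its standard sample form for two candidate factors X and Y over n observations is given below. This is the textbook definition stated for completeness, not a quotation of the original formula.

```latex
% Standard sample Pearson correlation coefficient, assumed to underlie the
% collinearity screening described above.
\begin{equation}
r_{XY}=\frac{\sum_{i=1}^{n}\left(X_{i}-\bar{X}\right)\left(Y_{i}-\bar{Y}\right)}
{\sqrt{\sum_{i=1}^{n}\left(X_{i}-\bar{X}\right)^{2}}\,
 \sqrt{\sum_{i=1}^{n}\left(Y_{i}-\bar{Y}\right)^{2}}},\qquad r_{XY}\in[-1,1].
\end{equation}
```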
Measurement results of urban LUEE Based on the MaxDEA8.0 software and SBM model, the comprehensive levels of urban LUEE of this area from 2009 to 2018 were calculated (Table 3).The results showed that from 2009 to 2018, the temporal evolution of urban LUEE on the study area presented a "W" upward trend curve (Fig. 3).The study found that the urban LUEE of the cities in this region was obviously different, Luoyang had the lowest average level of urban LUEE Guyuan had the highest average value of urban LUEE.Although the comprehensive level of urban LUEE in the study area showed a dynamic growth trend, the growth rate was not ideal.There was a downward trend from 2009 to 2010, 2012 to 2013, and 2015 to 2016, indicating that the urban and use ecological efficiency fluctuated obviously in the Loess Plateau, which was mainly influenced by socioeconomic activities and natural environment. In 2009, Guyuan had the highest evaluation value of urban LUEE (1.199), Xi'an had the lowest evaluation value of urban LUEE (0.062), the difference in LUEE value between these two cities was 1.137.In terms of 2018, the difference between Guyuan (1.356) with the highest LUEE evaluation score and Zhengzhou (0.084) with the lowest LUEE evaluation score was 0.084, which was higher than that in 2009.It could be found from the average results that thirty-one cities on the Loess Plateau have not achieved effective allocation of urban land use ecological efficiency, accounting for 79.49%.The possible reason is that the current ecological environment protection and ecological restoration strategies still need to be further strengthened, and the local government should implement more efficient land governance and management planning strategies. From the change trend of a single city, the LUEE of 19 cities were declining.Indicating that nearly half of cities in the study area were not properly allocated in land use, which resulted in insufficient utilization of land resources in urban planning and construction.There are four cities with significantly decreased trends in LUEE, including Ordos, Yan'an, Shizuishan, and Wuzhong.The comprehensive level of LUEE in Ordos decreased from Based on the evaluation results of urban LUEE, the ArcGIS software was used for spatial visualization drawing of urban LUEE in 2009, 2012, 2015, and 2018.It can be found that the cities with high evaluation scores are located in the southwest areas, cities with low assessment level are distributed in the north and east of the study area (Fig. 4).From 2009 to 2018, the LUEE level of most cities in the study area was not ideal, frequent natural disasters and ecological degradation directly restricted the LUEE of the Loess Plateau.How to promote LUEE has become the main content of promoting the efficient development of the research area.It could be found from Fig. 
5 that there are three types of LUEE evolution in the Loess Plateau from 2009 to 2018, involving growth type, decline type, and unchanged type.The growth type cities are concentrated in southwest region of the study area, including Wuhai, Shuozhou, Baiyin, Tianshui, Xianyang, Xi'an.The decline type cities are mainly concentrated in the middle, involving Ordos, Shizuishan, Wuzhong, Yan'an, Dingxi, Tongchuan, Sanmenxia.In addition, there are twenty-six cities showed the unchanged trend in urban land use ecological efficiency, accounting for 66.67%.Based on existing studies of the Loess Plateau, it could be found that the ecology of the central region is relatively www.nature.com/scientificreports/fragile, the possible reason is that these areas have long faced ecological threats of soil erosion, desertification, and landslides, the comprehensive level of ecological complex system shows a relatively slow growth trend, which makes the urban sustainable construction land resources relatively limited 42,43 . Analysis of key factors From what has been discussed above, the regression results of the urban LUEE in the Loess Plateau can be obtained (Table 4).It could be found that five variables passed the significance test, including PCL, GG, PS, NBP, and REI.Firstly, PCL passed the significant test of 1% with negative coefficient (− 0.1786), indicating the increase of construction land will inhibit the improvement of urban LUEE.Because the sustainable use of urban land resources in the Loess Plateau is limited, the unreasonable planning of urban buildings will occupy a lot of land resources.Secondly, GG passed the significant test of 5% with negative coefficient (− 0.0669), indicating the improvement of urban economic growth rate is not conducive to the optimization of urban LUEE.Excessive pursuit of economic growth will lead to neglect of ecological environment protection.Thirdly, PS passed the significant test of 10% with negative coefficient (− 0.1324), it shows that promoting the development of the secondary industry will hinder the growth of urban LUEE.Industrial activities will pose a threat to air quality, water environmental quality, and soil environmental quality, and the discharge of many industrial pollutants will reduce the comprehensive quality of urban ecological environment.Fourthly, NBP passed the significant test of 1% with www.nature.com/scientificreports/positive coefficient (0.1461), indicating the optimization of public transport contributes to the improvement of urban land use ecological efficiency.The development of public transport will reduce urban carbon emissions and improve urban public service capacity.Fifthly, REI passed the significant test of 1% with negative coefficient (− 0.1002), demonstrating the expansion of real estate development scale will inhibit the optimization of land resources.Due to the limitation of landform and the threat of natural disasters, the available land resources of most cities on the study area are insufficient.Real estate development will occupy scarce land resources in most cities of the study area, and how to implement sustainable urban expansion and construction according to local conditions to meet people's basic living conditions is the key content of urban LUEE.Finally, the PRE coefficient is greater than 0, indicating that the impact of technology innovation input on urban land use ecological efficiency is significant.The possible reason is that technological innovation elements can achieve carbon reduction and lower pollutant 
emissions by improving production efficiency and achieving cleaner production processes 44,45 .Additionally, three variables failed the significance test, including PG and CTR.For one thing, it can be found that the coefficient of PG is positive, but it fails the significance test, indicating that urban population growth on the study area has little impact on urban LUEE.The possible reason is that the current population growth in the study area has not brought significant improvement in population quality and increase in labor force, which makes the economic and environmental effects brought by population growth relatively limited 46,47 .For another, the coefficient of CTR is positive, but it fails the test, demonstrating that the effect of urban sewage treatment on urban LUEE is not significant.This is mainly due to the relatively low proportion of urban sewage discharge to urban ecological environment pollutant discharge. Urban construction and urban LUEE Through the analysis of the current situation of urban LUEE on the Loess Plateau and the study of key factors, it can be found that there is a complex relationship between urban LUEE and urban construction (Fig. 6).In the study of the relationship between urbanization process and urban construction land, scholars believe that rapid urbanization has changed the original nature of land resources 48,49 .Constrained by natural conditions, the study area is facing the threat of ecological degradation and natural disasters, coordinating the relationship between urban development and urban LUEE is the core element of land planning in these cities 50,51 .Additionally, the real estate development has hindered the optimization of urban LUEE in the study area from 2009 to 2018.Real estate development is the key to urban construction, the environmental practice and green environmental protection measures of real estate enterprises are the key measurements to avoid environmental pollution in the process of engineering construction 52 .The unreasonable expansion of real estate scale will hinder the development of urban LUEE, and the matching of urban real estate construction with residents' needs is one of the directions of building a livable city 53 .Considering the limited resource environmental carrying capacity of some cities in the study area, these cities with scarce land resources are not suitable for large-scale real estate construction and mega-large engineering construction. Industrial structure and urban LUEE The harmonious progress of social development and ecological environment is the focus of existing researches 54,55 .There are many resource-based cities in the study area, whose development and industrial optimization level are relatively backward.These cities may neglect the protection and governance of environment and ecology in the process of pursuing urban economic growth (Fig. 
7).However, the development power of resource-based cities comes from relatively single heavy industry, which makes the environmental pollution problems of these cities prominent 56 .The implementation of new urbanization and sustainable development planning makes the adjustment of urban industrial structure the key to the high-quality development of regional cities 57 .The impact of industrial structure, type and land scale on land use is greater than that of policy intervention, and the increase of the proportion of the secondary industry may be unfavorable to urban LUEE 58 .The organic combination of government intervention and urban industrial structure optimization is the key to realize urban LUEE improvement.www.nature.com/scientificreports/ Urban public transportation and urban LUEE It can be found that the urban public transportation development can significantly promote the urban LUEE (Fig. 8).Relevant researches show that the construction of public transport network is beneficial to urban LUEE.The improvement of land use efficiency can also optimize public transport network and improve the service capacity 59,60 .Due to the improvement of urban public transport network and service functions, low-carbon travel behavior of residents has been significantly affected, which is conducive to reducing the urban air pollution caused by traffic congestion 61 .Additionally, most cities of the study area belong to semi-arid regions, which have faced the problem of insufficient domestic and industrial water resources for a long time 62 .How to recycle river water and rainfall to solve the shortage of urban water resources is of great importance for these cities. Policy recommendations From what has been discussed above, three policy recommendations are proposed for improving the urban land use ecological efficiency and optimizing urban land use in ecologically sensitive areas, the specific countermeasures are as follows (Fig. 9): (1) Optimizing urban planning.Urban planning in the Loess Plateau is often limited by natural conditions such as terrain and climate, so more attention should be paid to the scientific and sustainable urban planning.Urban functional zoning should be rationally planned to avoid over-development and construction, while strengthening the construction of urban transportation, water conservancy, energy Conclusions This study constructed an index system containing input elements, output elements, and undesired output elements.And it used the super-SBM model to analyze the urban LUEE of 39 cities in the study area from 2009 to 2018.Based on the exploration of temporal-spatial evolution characteristics of urban LUEE and Tobit model, the key factors of urban LUEE were analyzed in ecologically sensitive areas.The study found that in 2009-2018, the time evolution characteristics of urban LUEE presented a "W" upward trend curve, and the spatial characteristics showed an unbalanced development trend with significant spatial differences, the cities with high evaluation scores were located in the western areas and southwestern areas of the study area, the cities with low evaluation scores were located in the north and east areas. 
Based on the exploration of urban LUEE key factors in the Loess Plateau from 2009 to 2018, the key factors were extracted by Tobit model and empirical analysis.There were five variables that have a significant effect on urban LUEE.This study provides an evaluation index system of urban LUEE, which can be used for land research in ecologically sensitive areas or other similar areas.Based on the discussion of key factors, the driving factors of urban ecological environment protection and urban social and economic sustainable development can be found.This research is an indicator system constructed through the analysis and summary of previous surveys, and there is still room for further improvement.How to ensure the scientific nature of the indicator system is the direction of future research. Figure 1 . Figure 1.Geographical location of the Loess Plateau, China (Mapping based on the ArcGIS10.8software can be obtained from the following link, https:// deskt op.arcgis.com). Figure 3 . Figure 3. Temporal evolution of urban LUEE in the study area from 2009 to 2018. Figure 4 . Figure 4. Spatial-temporal evolution of urban LUEE in the study area (Mapping based on the ArcGIS10.8software can be obtained from the following link, https:// deskt op.arcgis.com). Figure 5 . Figure 5.The three types of urban LUEE evolution in the study area (Mapping based on the ArcGIS10.8software can be obtained from the following link, https:// deskt op.arcgis.com). Figure 6 . Figure 6.Urban construction and land use/planning. Figure 7 . Figure 7. Industrial structure and land use. Figure 8 . Figure 8. Urban public transportation and land use. The data of this study area includes land use, ecological environment, and socioeconomic development.Urban land use ecological efficiency data refers to the variables related to urban construction and planning.The data sources include China Urban Statistical Yearbook (2010-2019),Provincial and Municipal Statistical Yearbooks (2010-2019),Provincial and Municipal Ecological Environment Bulletin (2010-2019), and some unavailable data were supplemented by interpolation. Table 1 . The evaluation index system of urban LUEE. Table 2 . Variable descriptive statistical results. Table 3 . Measurement results of urban LUEE in the Loess Plateau. Table 4 . Regression results based on Tobit model.
2023-12-17T06:16:15.409Z
2023-12-15T00:00:00.000
{ "year": 2023, "sha1": "50fdb54f21da4c43f82fc27532105a2a03290548", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "41250bdc9660bb181717c8c8a6e51bd806dfb653", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "extfieldsofstudy": [ "Medicine" ] }
235917443
pes2o/s2orc
v3-fos-license
DNA sonification for public engagement in bioinformatics Objective Visualisation methods, primarily color-coded representation of sequence data, have been a predominant means of representation of DNA data. Algorithmic conversion of DNA sequence data to sound—sonification—represents an alternative means of representation that uses a different range of human sensory perception. We propose that sonification has value for public engagement with DNA sequence information because it has potential to be entertaining as well as informative. We conduct preliminary work to explore the potential of DNA sequence sonification in public engagement with bioinformatics. We apply a simple sonification technique for DNA, in which each DNA base is represented by a specific note. Additionally, a beat may be added to indicate codon boundaries or for musical effect. We report a brief analysis from public engagement events we conducted that featured this method of sonification. Results We report on use of DNA sequence sonification at two public events. Sonification has potential in public engagement with bioinformatics, both as a means of data representation and as a means to attract audience to a drop-in stand. We also discuss further directions for research on integration of sonification into bioinformatics public engagement and education. Introduction As the field of genomics has matured, the need to provide novel tools for representing DNA sequence data has become pressing. This need ranges from comparative investigation of sequence variation across multiple sequences, searches for functional domains across extended sequences, searches for sequences within and between organisms that share homology or are otherwise similar, and capturing and comparing entire genomes. As genomics data and its potential for scientific investigation has grown, so too have methods for representing data. Visual sequence data has been a standard approach, but it has its limitations, especially when long sequences are involved. Algorithmic conversion of DNA sequences to sound-sonification-offers an alternative means of representing DNA sequences that in turn draws upon other human sensory mechanisms. Sonification may have very general value, though the field of bioinformatics sonification is not yet sufficiently mature to support (or reject) this as a broad conclusion. DNA sequence visualisation has a relatively long history in bioinformatics. Even the representation of the four bases as letters in the alphabet-A, C, G and T, with the sequence of one strand of the double helix represented as a string of such letters-is a simple visualisation, with a long tradition in databases and publications involving DNA sequence (e.g., [1,2]). Visualisation of genome annotation, pairwise sequence alignment diagrams and multiple alignments take visualisation further, by adding visualisation of properties of the sequence (e.g., predicted gene structures or similarity to other sequences). However, like any representation, visualisation has limits. In a research context, visualisation of large amounts of sequence is constrained by the size of a computer screen, the ability to convey diverse information through colours or symbols, and the attention of the researcher. In the context of public engagement with science, DNA sequence has a monotonous visual appearance, at odds with the "whizz-bang" often expected in such public displays or shows. 
Although the quantity and apparently random nature of large DNA sequences is engaging for a short time, more detailed examination of DNA sequence or annotation demands a level of focus that is atypically high for a public drop-in event or short activity. At public events we have had success with a wide range of ages using the Phylo game [3], where anyone can attempt to improve multiple alignments of regions of genes associated with human disease [4]. However, any one game or activity can only present a small aspect of bioinformatics. Although we have had success introducing bioinformatics to school pupils [5,6] and introducing bioinformatics and computational science to the public in a science centre [4], bioinformatics as a whole is rather lacking in public engagement activities. We aim to help fill this gap by means of sonification of DNA, where the representation of sequence is auditory rather than visual. Since music is widely understood and used for relaxation and entertainment, this may be more attractive to a public audience than a screen full of DNA sequence or associated data. By use of the Sonic Pi program [7] for sonification, we make the underlying program visible and customisable in real-time. This also provides a link between DNA and programming, which in is central to bioinformatics in general. Sonification of scientific data in is not new (e.g., [8]). More specifically, the use of DNA sequence to generate music has a history in music composition for artistic purposes, and there is a developing literature on DNA sonification for research (e.g., [9][10][11]). However, though DNA sonification examples for public engagement exist (e.g., [10,12,13]), they appear rare. Based on our preliminary activities, DNA sonification has potential for successful public engagement activities. Further efforts in this area are warranted. We report a brief analysis from public engagement events we have conducted that featured sonification. We also discuss some further directions for research on integration of sonification into public engagement. Methods At Picademy in Glasgow on 29 November 2016, an initial sonification project was carried out. The coding sequence of the l-gulonolactone oxidase gene from mouse (Gulo; as is used in our separate workshops for school classes [14,15]) was converted to a Sonic Pi program, using search-and-replace to convert bases A, C, G and T respectively to notes A4 or note 69 in the Musical Instrument Digital Interface (MIDI) standard, C4 (MIDI 60), G4 (MIDI 67) and G#0 (MIDI 20, chosen just because T is 20th letter in the alphabet), using the default synthesiser. A filtered bass drum was added at the start of each codon (i.e., every three bases). Subsequent events used Raspberry Pi computers running the 4273pi variant of GNU/Linux [16]. 4273pi comes with Sonic Pi installed. Sequence and script files-for example from our Additional files-can be transferred to the Pi via a Universal Serial Bus (USB) stick. Subsequently we incorporated sonification into a series of public events as follows. In preparation for Doors Open Day, a public drop-in event where we were based at the Ashworth Laboratories, King's Buildings, University of Edinburgh on 23 Sept, 2017 from 10 am to 4 pm [17], further sonifications were prepared, using a Perl script we wrote to convert DNA sequence in Fasta format to a Sonic Pi program followed by manual editing of the result. For script and example Sonic Pi programs and sound files, see Additional files. 
The event brought in a range of people, some potential new students and parents who were interested in biology, but also families with young children and other local people. For this event, the sonification of the base T was changed from MIDI 20, which Sonic Pi can play but is difficult to hear, to MIDI 64 (E4). These additional sonifications consisted of: (1) a codon-free version of Gulo; (2) a rave version of the Gulo sonification, using the "mod_saw" synthesizer, with a drum roll starting every 4th note for musical effect; and (3) a sonification of part of an intron of the CF transmembrane conductance regulator gene (CFTR) in human, which includes a dinucleotide microsatellite (AT), to highlight the potential research benefit of sonification (using no drums). Tempos were from 60 equal-length notes per minute up to 300 notes per minute, set using a time parameter to the "play_pattern_timed" function. Additionally, a brief tutorial to Sonic Pi including DNA sonification exercises was written and printed out in laminated form, along with the Gulo coding sequence and the start of the intron sequence from CFTR, with the microsatellites highlighted. Doors Open Day 2017 at the Ashworth Laboratories was attended by at least 258 people. During the event, we engaged ~ 30 visitors in a discussion about DNA, mainly by approaching them with the question "Have you ever wondered what DNA would sound like?". The event was evaluated by means of direct experience and diaries of H.P. and D.B. As part of Ada Lovelace Day, in a free but ticketed public event at the James Clerk Maxwell Building, King's Buildings, University of Edinburgh on 10 October 2017 [18], we gave an introductory presentation for ~ 50 participants, and offered three workshops later in the day, which were attended by a total of 13 people. The workshops lasted approximately 30-40 min each and consisted of a short DNA investigation using BLAST [19] with the Gulo gene (as in [15]), followed by a guided exploration of prepared DNA sonifications, and the opportunity to go through the same printed sonification tutorial. The event was evaluated by means of a short questionnaire and diaries of H.P. and D.B. Results and discussion Our preliminary study indicates that DNA sonification is highly suitable for public engagement activities, both for short drop-in events and for more focused workshops. Doors Open Day was a large public event, during which people engaged well with the sonification activity. The question "Have you ever wondered what DNA would sound like" was a successful way of starting a discussion about DNA. Everyone who was asked this question stayed to listen to a sample of our sonifications and engaged in a short discussion about DNA. Visitors were guided through prepared sonifications, and given the opportunity to go through the printed sonification tutorial. Visitors found the event creative and inspiring and were either happy to see it as a light distraction, or to discuss scientific applications. Relatively little use was made of the Sonic Pi tutorial. During the event, Minecraft proved a distraction, so we discreetly removed it from the main menu of each Raspberry Pi. Later, Scratch was also some distraction. For the future, we would remove these in advance because they are not part of our display. Restrictions on Web browser access based on Uniform Resource Locator (URL) may also have some value. 
The Phylo game ran slowly at times on the Raspberry Pi computers (a problem that would be reduced or removed with newer models of the hardware). The sonification workshops during Ada Lovelace Day were well-attended, and the audience appeared engaged. Six participants completed questionnaires. Others did not, due to leaving early for lunch (1 participant) and leaving at short notice to collect their bags from the main activity room (5 participants). When asked what the best thing about the workshop was, participants stated "The opportunity to engage with DNA or Biology from a different perspective. Hearing a sequence made me think of it as an object in a different way. "; "Sonic Pi (amazing!)"; and "Updating me on sequence investigation". When asked what the worst thing about the workshop was, participants did not have many comments, i.e., "None", "Nothing", "not enough time!". We had difficulty connecting to the University wireless network and used a 4G phone as a hotspot instead. This was resolved before participants arrived and was not noticeable to them. The event was a success. However, H.P. and D.B. thought a little more time to focus on sonification would be helpful for such events in the future. It appears that DNA sonification engages both researchers and the public in thinking about DNA from a different perspective. Further development of DNA sonification for public engagement activities is warranted. Limitations Although our preliminary study is sufficient to suggest DNA sonification has potential in public engagement activities, a larger study is required to discover the full limits of its successful application. Our audience for evaluation consisted of adults, mostly staff and students at Edinburgh University. We recommend a larger programme, using qualitative and quantitative analysis, with a diverse range of audiences. We regard our simple sonification technique-using one note per base-as more relevant to public engagement than to research. The limited range of notes makes for a catchy melody but it is difficult to distinguish sequence features, beyond simple examples such as a long dinucleotide repeat. However, for public engagement, it is important to avoid a strong reliance on biological knowledge. For research, future directions involve sonification of more complex bioinformatics data, for example multiple sequence alignments (Martin et al., in prep.). Multiple alignment depends on biological concepts such as homology and cross-sequence comparison, unlikely to be rapidly comprehensible to the general public. The Phylo multiple alignment game [3] is a counter-example. However, as well as an exercise in multiple alignment, Phylo may be understood as a logic puzzle, potentially bypassing biological concepts for many users while leaving them as useful discussion points for those with expertise in the area. For public engagement and education, one promising future direction may be to highlight the effect of frameshift mutations. We have developed and apply a workshop for biology students in secondary education [15] (open educational resources available at [14]), centred on the Gulo gene (e.g., [20]), which in humans is disrupted by frameshift mutations. Visually, a frameshift mutation is difficult to notice on-screen in BLAST alignments comparing the Gulo coding sequence and the human pseudogene. A frameshift is often only indicated by a single dash ("-") among DNA symbols. Sonification could help make the frameshift stand out. 
In conclusion, sonification has a demonstrated potential for public outreach and engagement. Our methodology was well-received. It is our aspiration to build on these preliminary efforts to use sonification to make DNA sequence information entertaining as well as informative and to increase the nuance and complexity that we convey in public engagement.
2021-07-16T14:02:32.266Z
2021-04-30T00:00:00.000
{ "year": 2021, "sha1": "4404b1259bdd8d18d3e336790524016b541a7d35", "oa_license": "CCBY", "oa_url": "https://bmcresnotes.biomedcentral.com/track/pdf/10.1186/s13104-021-05685-7", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c599e2def4f57c7f904364371f38157492e081cc", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
229029236
pes2o/s2orc
v3-fos-license
Effect of Constructivist Teaching Approach on Student’s Achievement in Science at Elementary Level 1. Associate Professor, Department of Education, Lahore College for Women University, Lahore Pakistan 2. Lecturer, Department of Education, Lahore College for Women University, Lahore Pakistan 3. MS Scholar, Department of Education, Lahore College for Women University, Lahore Pakistan PAPER INFO ABSTRACT Received: July 29, 2020 Accepted: September 05, 2020 Online: September 30, 2020 Main objective of this study is to determine the effect of this approach while comparing it with traditional method of teaching. The study was experimental in nature. The population of study was 7th class students school. Sixty students were selected randomly. After pretesting students were distributed into two groups. Treatment was assigned randomly to the groups. Experimental group was taught through constructive approach and control group was taught through traditional approach. Treatment time was 8-weeks. Selected topics of science were taught to both groups. An achievement test was constructed and validity and reliability of the test was ensured through expert opinion and pilot testing. Conquest 4 was used to determine psychometric properties of items based on IRT. Selected items were used in achievement test. After taking posttest independent t-test was used to calculate the difference between two groups. The main findings of the research indicated that there is a significant difference between achievements of two groups. Experimental group performance is better as compare to traditional method. So, it would be suggested that as constructivism renovates the student from a passive learner to an active participant in the teaching learning process, teacher’s role is to facilitate students in constructing knowledge rather just mechanically consuming knowledge from the teacher or the textbook. Introduction Some core ideas of constructivism are mainly influenced by Swiss psychologist and epistemologist Jean Piaget . Starting from his early writing to his last publication, he remained loyal to his stance on constructivist perspective. Educationist deeply focused on Piaget's stage theory and tried to implement the intellectual development stages in educational setting. This endeavor of matching remained influential in educational setting for a long period (Sjoberg, 2010). As pointed out in previous paragraph that constructivism developed from a Piagetian perspective, and later on some other theorist enters and tried to relate intellectual development with social and cultural conditions mandatory for learning. One of the Piaget's contemporary is Russian Lev Vygotsky (1896-1934 who laid emphasis on social and cultural aspects in the process of language learning. Because of more stress on the social and cooperative aspect of learning, he is called as a father of social constructivism, on the other side Piaget is considered as a father of cognitive constructivism. Applications of social constructivism are most commonly initiated in schools recently through the use of cooperative and collaborative teaching methodologies (Jones & Brader-Araje, 2000).The significant contribution of constructivism's perspectives is its emphasis on construction of meaning by the learner through his/her experience. Active participation of the learner in learning process is more appealing for educators. Through social cooperation learners continually check their postulates and create new information by précising the previous knowledge. 
Constructivism acknowledged the role of prior learning, distinguishing that learners are not blank slates or empty vessels waiting to be filled with knowledge. Rather students construct new knowledge on a rich array of previous experiences, knowledge, and beliefs. Students are dynamic information earpiece (Khan, 2019;Jones, Carter, & Rua, 1999). This prior knowledge is significant and called schema. All knowledge is passed through existing schema and when students are actively involved in learning rather than passive members learning become more efficient as stated by constructivist (Chowudry, 2016;Azeem & Khalid, 2012). "The central principles of this approach are that learners can only make sense of new situations in terms of their existing understanding. Learning involves in active process of learning in which learners construct meaning by linking new ideas with their existing knowledge" (Naylor & Keogh, 1999, p.93).That's why every student has its own interpretation of knowledge according to their mental capability (Learning Theories Knowledgebase, 2008).Constructivist said that human world is different from world of fantasy so they learn accordingly (Chowudry,2016).Learners curiosity about how things work in real word triggered by constructivism (Azeem & Khalid, 2012). Relatively, knowledge is constructed by learners, they construct knowledge through active and cognitive process of development; they are the maker and creators of important knowledge (Mir & Jain, 2015). Understanding of new knowledge depends upon assimilations and accommodations capacity of human beings. Learning occurs when student make use of previous knowledge and experience. Thus, manifold explanations of an incident are possible and these multiple explanations are source of creativity in learners. A constructivist believes that students require time to reflect on their experiences to assimilate and accommodate it with what they already know. After this become enable to understand new phenomenon (Khan, 2019;Thompson, 2018;Bada,2015). According to Mir and Jain (2015) "Constructivist teaching fosters critical thinking and creates active and motivated learning. A constructivist approach frees teachers to make decisions that will boost and improve learner's development it means constructivist classroom not only benefits learning of students, but it also helps to increase the various ability among learners, such as: problem-solving ability, scientific attitude, fostering creativity, decision power making ability, reflective ability, higher order thinking ability and many more"(pp 362). Process of learning is more significant than the product of learning as per constructivist approach. Role of learner as an active recipient is emphasized. It depends on learner how he/she learns new knowledge. This type of learning requires flexible classrooms and learner should have freedom to participate in different activities to construct new knowledge (Gomleksiz &Elaldi, 2011;Amineh&Asl, 2015;Jaleel & Verghis, 2015;Kamphorst, 2018). Teaching in constructive classroom is to guide students to grasp the concepts and formulate opportunities for learners to rethink and build new concepts by understanding misunderstandings; teachers ask questions from students to encourage and engage them and corroborate them in research to challenge current concepts. In constructive classroom, teacher play passive role just guide students and lead them by providing instructions and ideas to create new learning situations to active students of science. 
Constructivism has many models for learning experiences of students but the 5 E's model by Roger Bybee is best for implementation in science classroom because it formulate under the Biological science curriculum study (BSCS) project. The 5 'Es' stand for Engage, Explore, Explain, Elaborate and Evaluate. That's why constructive approach emphasizes learning by process of demonstration, group work, analyze problems, self-assessment and other methods such as taking review from peers rather than using learning in result of consequences (Ayaz,2015;Caliskan,2015;Singh,2015). Due to the above mention reason many countries understand the need and change their educational system from traditional to constructivist learning methodology. Pakistan education is still following traditional mode of teaching and learning. Keeping in view the importance of this approach for student's conceptual understanding, present study is designed to know empirically the effect this approach on student's achievement in Pakistani context. Material and Methods The main objective of this experimental research was to explore the effect of constructivist teaching approach on student's achievement in science at grade VII. For present study quantitative research approach under the umbrella of positivistic paradigm was used. Experimental method was to investigate phenomenon under study. Pretest and post-test design was used. The main foundation of pretestposttest design consists of obtaining the outcome of interest after conducting some treatment, followed by posttest on the same sample set after treatment. All the 7 th class students studying in a local school were the population of the study. There were 360 students distributed in 6 sections (60 students in each section). One of the sections is selected randomly. After pretest, students were distributed in to two groups (30 students in each group). Treatment was also randomly assigned. Group taught through constructivist approach is called experimental group and group taught through traditional method is called control group. Science achievement test was constructed by using the mathematical framework of National Assessment of Educational Progress (NAEP) and national curriculum of Pakistan for grade 7. Items were constructed and aligned to the science framework. Three aspects of Bloom's taxonomy were covered (knowledge, understanding and application). Test was content validated through expert opinion (8 experts). Instrument was pilot tested on 500 hundred students to ensure the reliability of the test. The data from pilot test was analyzed by using Conquest 4 software to determine psychometric properties of the test (IRT model fit indices). 40 items were selected that fulfilled the criteria. Lesson plans were developed for both method and both groups were taught for eight weeks. Results and Discussion Data were collected while administering achievement test and t test was used to test following hypotheses. Hypothesis 1: There is no significant difference between achievement score of experimental and control groups in pretest. Table 1 indicates that experimental group and control group are not significantly different and the value of t is 1.785 which is smaller than the critical value 1.96(df = 58) at 0.05 level of significance. Similarly, value of p is .079> 0.05.So, it indicates that null hypothesis showing there is no significant difference between pretest achievement score of experimental and control groupwas accepted. 
This revealed that both groups were equal before treatment. Hypothesis 2: There is no significant difference between experimental and control groups post test scores. Table 2 indicates that experimental group and control group are significantly different and the value of t is 6.018 which is greater than the critical value 1.96(df = 58) at 0.05 level of significance. Similarly, value of p is .000< 0.05. This show that null hypothesis stated there is no significant difference between post-test achievement score of experimental group and control group was rejected. Table concludes that students of experimental group are showing significantly better results than control group after treatment in post-test. Discussion Teachers who used constructivist activities in the classroom empower the learners to experience new things and enhance their understanding on the basis of prior knowledge. Piaget and Vygotsky's is earliest proponent of constructivism. Later on, in 1970's educators focused on this approach and it became one of the prominent students centered approach in teaching and learning environment. Some advance countries Like USA and UK revamp their curriculum as per need of this approach. In USA , a center was established and numerous researches were conducted and many projects were designed to establish the effectiveness of this approach. It was established that effectiveness of this approach depends upon teachers' understanding of constructivist theory, principles and pedagogy. Metaanalysis was conducted by Gunduz and Hursena (2015) to investigate the development and the inclination of researches in the field of constructivism in teaching. It was found during this investigation that 161 were articles published between 2002 and 2013 in Science Direct, Eric and EBSCO are examined. There are three publications in 2002 and become 43 till 2012 and 10 studies published in 2013.Analysis revealed that constructivist approach is a contemporary trend in teaching and learning and is gaining importance. This approach is student centered and the role of the student is active recipient of information. Like, wise role of the teacher is described as a facilitator in teaching learning process (Ozcan, Gunduz, Danju, 2013). Most of the study shows English language is the major area constructivist approach was used in research studies. Teaching of science is another area where this approach is extensively used. Present study is also focusing on teaching of science through this approach. Another mata-analysis was conducted by Ayaz (2015) in Turkish context. The aim of the study was to determine the effect of constructive approach on the academic achievement of the students. It was found that 53 studies were conducted to find the out effects of constructivist learning approach on students' academic achievement. Total number of participants were 3271 (control group and the experimental group). It is revealed during data analysis that 50 out of 53 study results indicate positive effect of constructivist approach on student's achievement. Only three studies showed negative effect. Findings of the present study are congruent with above studies. Finding of this study also showed positive effect of this approach on student's achievement. Another important finding reported in literature is that this approach is more effective for teaching of science. In Pakistan, constructive learning strategy in classroom is not very common and main focus of teaching is rote memorization. 
In Pakistan science is considered as a subject who mostly taught trough simple lecture method. It was a different experience for group of students, taught through that approach. Keeping in view, it's necessary to teach the subject according to its need by adopting new and modern methods of teaching. The novelty of this method makes students more attentive and motivated.
2020-11-05T09:11:05.465Z
2020-09-30T00:00:00.000
{ "year": 2020, "sha1": "443e54f6daaf00eba6f8c095b5e0ffd47888c300", "oa_license": "CCBYNC", "oa_url": "https://pssr.org.pk/issues/v4/3/effect-of-constructivist-teaching-approach-on-student-s-achievement-in-science-at-elementary-level.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d4a7d89b3e476123927ae45fdab79252c97066a2", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Psychology" ] }
159226530
pes2o/s2orc
v3-fos-license
Life Cycle Assessment of a Lithium Iron Phosphate ( LFP ) Electric Vehicle Battery in Second Life Application Scenarios This paper presents a life cycle assessment (LCA) study that examines a number of scenarios that complement the primary use phase of electric vehicle (EV) batteries with a secondary application in smart buildings in Spain, as a means of extending their useful life under less demanding conditions, when they no longer meet the requirements for automotive purposes. Specifically, it considers a lithium iron phosphate (LFP) battery to analyze four second life application scenarios by combining the following cases: (i) either reuse of the EV battery or manufacturing of a new battery as energy storage unit in the building; and (ii) either use of the Spanish electricity mix or energy supply by solar photovoltaic (PV) panels. Based on the Eco-indicator 99 and IPCC 2007 GWP 20a methods, the evaluation of the scenario results shows that there is significant environmental benefit from reusing the existing EV battery in the secondary application instead of manufacturing a new battery to be used for the same purpose and time frame. Moreover, the findings of this work exemplify the dependence of the results on the energy source in the smart building application, and thus highlight the importance of PVs on the reduction of the environmental impact. Introduction The transportation sector is known to be one of the main contributors to greenhouse gas (GHG) emissions and other hazardous pollutants worldwide, resulting in environmental degradation and climate change, which have both become more pronounced over the last decades.In this respect, the electrification of the transportation sector is typically viewed as a promising direction for reducing GHG, given that electric vehicles (EVs) produce no tailpipe emissions during their operation.However, significant environmental impact can be traced not only to the energy operational processes for generating electricity to charge the EV battery, but also to the life cycle of the battery itself [1].Hence, a life cycle assessment (LCA) approach is required to fully capture the environmental footprint of EVs [2], while the reuse of EV batteries in less demanding applications can extend the use phase of their life cycle, it is thus of particular interest for both academia and industry [3]. 
Nowadays, EVs are typically powered by lithium-ion (Li-ion) batteries due to their distinctive characteristics in terms of high energy and power density, long life, as well as little maintenance requirements, when compared to other battery technologies [4,5].However, current Li-ion batteries, with a specific energy in the range of 100-150 Wh kg −1 [4], cannot provide an average EV with a driving range comparable to that of conventional vehicles.Moreover, relevant LCA studies show that Li-ion battery technologies produce substantial environmental impacts during their life cycle, including the manufacturing phase.Specifically, the authors in [5] analyze the environmental burden caused by a lithium manganese oxide (LMO) battery and conclude that the main contributors include the copper and aluminum supply for the anode and cathode production, along with the required cables or the battery management system.Similarly, the authors in [6] study the environmental impact from the production of a Li-ion nickel-cobalt-manganese (NMC) EV battery, reporting that the production chains with the highest contribution are the manufacture of battery cells, positive electrode paste, and negative current collector. With respect to the end-of-life of EV batteries, the work in [7] discusses the economic and environmental benefits of NMC battery recycling in China.The study in [8] employs a modelling framework for the global lithium cycle based on dynamic material flow analysis to assess the potential for lithium recovery from EV battery recycling, while the challenges identified in this field include not only the cost-effectiveness of the recycling technologies, but also the efficiency of material recovery processes and the required infrastructure.In the same direction, the authors in [9] focus on the metallurgical and mechanical methods for recycling of lithium-ion battery pack for EVs, summarizing the two main basic aspects of recycling battery packs, namely mechanical procedure and chemical recycling. To this end, recent years have witnessed significant research efforts on the research and development of alternative Li-ion battery technologies, focusing on the use of novel materials to increase the energy density, e.g., silicon nanowires as anode material [10], yet enhancing their environmental performance remains an open research challenge.In this context, lithium-sulfur (Li-S) is a prominent example of the most promising battery technologies for future EV applications [11,12].Given that sulfur is characterized by a high theoretical capacity of 1672 mAh g −1 [13], a Li-S battery offers a theoretical energy density of ∼2600 Wh kg −1 [12].Despite the fact that the practically achievable gravimetric energy of Li-S batteries (reported to be ∼600 Wh kg −1 ) is significantly lower than the theoretical one, it is still significantly higher compared to that of state-of-the-art Li-ion batteries with a value of 280 Wh kg −1 [14].The authors in [15] perform an LCA study to evaluate the environmental impact of a Li-S battery pack in an EV application, reporting that the Li-S battery has a lower environmental impact by 9-90% in most impact categories compared to a conventional NMC-graphite battery. 
In addition, the lithium iron phosphate (LFP) battery technology has also attracted the interest of many researchers.The authors in [16] track the degradation in LFP batteries using differential thermal voltammetry, while the authors in [17] evaluate the power capability of LFP batteries based on a multi-parameter constraints dynamic estimation method, where the performance of the proposed approach is experimentally tested using dynamic loading profiles.The work in [18] focuses on the reliability assessment and failure analysis of LFP batteries, proposing a strategy to enhance their reliability based on statistical analysis and clustering analysis of experimental data from full life cycle testing.A series of experimental tests were performed in [19] to characterize and compare the performance of LFP and lithium polymer (LiPo) battery technologies for both stationary and automotive purposes.An overview of the current battery technologies for EVs, as well as advances in Li-ion batteries are given in [20]. Combining the above, it becomes apparent that assessing not only the performance characteristics, but also the environmental impact during the full life cycle of Li-ion batteries for EV applications is of particular interest.Using the ReCiPe method, the authors in [21] present a comprehensive LCA study on a potential next-generation Li-ion battery with molybdenum disulphide anode (MoS 2 ) and NMC oxide cathode, where the results of the comparison between an NMC-MoS 2 battery and a conventional NMC-graphite battery reveal that the environmental impact of the former is higher in most impact categories.The authors in [22] examine some critical issues regarding the LCA of Li-ion batteries for EVs, concluding that the use of water as a solvent instead of N-methyl-2-pyrrolidone (NMP) in the slurry for casting the cathode and anode of Li-ion batteries reduces the environmental impact.The work in [23] applies LCA to analyze and compare the environmental impact of lead acid (LA), LMO and LFP batteries, revealing that the LFP production has the lowest overall environmental impact.Moreover, the authors in [24] perform an LCA study on Li-ion and nickel metal hydride (NiMH) batteries for plug-in hybrid and battery EVs, showing that the NiMH technology has the highest environmental impact.In this context, the authors in [25] report that the assumptions and modelling approaches employed in an LCA have a significant impact on the outcome of the study that can be even greater than that of the particular cell chemistry, thus it is of paramount importance to establish a common base for conducting LCA studies to enhance the process of benchmarking the environmental performance of different battery chemistries. 
Additionally, a number of studies have focused on the LCA study of reusing EV batteries that no longer meet the requirements of automotive purposes in less demanding applications, in particular as stationary energy storage systems.In the light of these secondary applications as a means of extending the useful life of Li-ion batteries, the authors in [26] consider the case of an LFP battery (with a Li 4 Ti 5 O 12 anode and LiFePO 4 cathode) of an urban EV in Spain, and examine alternative end-of-life scenarios, including the reuse of the battery as an energy storage unit in a smart building with solar photovoltaic (PV) panels.Despite the fact that there are additional environmental burdens from manufacturing the PV panels, the results of this study confirm the environmental benefit of reusing the existing EV battery in the smart building application compared to manufacturing a new one for the same purpose.Similarly, the authors in [27] examine the second life application of a Li-ion battery with cell chemistry of LiFePO 4 cathode and graphite anode, assuming that the use, remanufacturing and reuse phases occur within the Province of Ontario, Canada.Furthermore, the authors in [28] analyze the environmental trade-offs of cascading reuse of EVs' Li-ion batteries in stationary energy storage at automotive end-of-life, reporting that the net cumulative energy demand and global warming potential can be reduced by 15% under conservative estimates and by as much as 70% in ideal refurbishment and reuse conditions. In this context, the present paper builds upon the work in [26] to expand and further analyze the second life application scenarios of an LFP EV battery by combining the following cases for the second use phase: (i) either reuse of the EV battery or manufacturing of a new battery as energy storage unit in the building; and (ii) either use of the Spanish electricity mix or energy supply by the PVs.A limitation of this work is that battery recycling options are not included in the analysis due to the lack of relevant data for this kind of processes in the life cycle inventory employed.The rest of the paper is organized as follows: Section 2 describes the characteristics of the LCA study, the materials of the battery, and the LCA tool employed in this work, as well as it introduces the second life application and presents the scenarios under study.Section 3 discusses the results obtained from the scenarios under study and the last section concludes the paper. 
LCA Study Characteristics The goal of this LCA is to examine the potential benefits in terms of environmental impact from the reuse of an LFP battery, which can no longer be used in an EV, but still fulfills the requirements as an energy storage unit in a building in Spain.To this end, the scope of the analysis includes the manufacturing phase, the use phases (primary and secondary applications) and the disposal phase of the EV battery, along with the related background processes, whereas other EV components are out of the boundaries of the system.Given that the scenarios under study have the factor of time as common reference, the functional unit in the analysis is chosen to be equal to 4000 days (assuming one battery cycle per day).The information for the materials and processes required for the manufacture of the LFP battery is based on the work in [24].The data source for the life cycle inventory is the Ecoinvent database, which is incorporated in the LCA tool employed for the purposes of this work, namely SimaPro developed by Pré Consultants.The impact categories considered in the LCA study are carcinogens, respiratory organics, respiratory inorganics, climate change, radiation, ozone layer, ecotoxicity, acidification/eutrophication, land use, minerals and fossil fuels. LCA Tool SimaPro is one of the most widely used LCA software, chosen by industry, research institutes, and consultants in more than 80 countries.It is based on the ISO 14040 and 14044 standards, where the first considers the principles and framework for an LCA, while the latter specifies the requirements and guidelines for carrying out an LCA study.SimaPro incorporates the Ecoinvent database that covers more than 10,000 processes, as a result of a joint effort by different Swiss institutions to update and integrate several life cycle inventory databases.Moreover, it offers several mid-point and end-point impact assessment methodologies (e.g., ReCiPe/IPCC 2007/Greenhouse Gas Protocol/CML IA/Ecological footprint and ReCiPe/Eco-indicator 99/Impact 2002+/EPS 2000 respectively), each one containing a number of impact categories (e.g., climate change, acidification, etc.).The methodology of impact assessment is based on aggregating each elementary flow from the inventory to one impact category (classification).Then, equivalency factors (e.g., IPCC for climate change) are applied to determine the whole impact category result (characterization).The outcome consists of the quantification of different impact categories (e.g., climate change in kg of CO 2 eq). 
Materials Regarding the production of the electrode paste, the main components include a binder substance (5-10% of the total paste), carbon black to improve conductivity (4-10% of the total paste), LiFePO 4 and an electrochemically active material.It is also mentioned that an important modeling choice is the assumption of hydrothermal synthesis for LiFePO 4 , among the many different synthesis paths available [24].Moreover, the binder material selected is polyvinylidene fluoride (PVDF), while the solvent selected to obtain the desired slurry texture is NMP, which is evaporated during the mixing process with the substrate.The electrode substrate is a very thin (1520 µm) metal foil, mainly composed of aluminum mixed with other metals, which is utilized as the current collector and gives physical support for being later coated with the electrode paste.Due to the lack of available data for the manufacture of this part of the cathode, it is assumed that this process is similar to the "sheet rolling" process in the Ecoinvent database.As already pointed out, for the purposes of this work, hydrothermal synthesis is assumed for the production of LiFePO 4 , among the available options.This production process includes the reaction of iron sulphate salt (FeSO 4 × 7H 2 O) with lithium hydroxide (LiOH) and phosphoric acid (H 3 PO 4 ) in a water medium inside a hermetic reactor at a temperature ranging between 150-250 • C.After this process, LiFePO 4 precipitates and is picked up by a suction filter and later dried during five hours at a constant temperature of 60 • C. Battery Degradation and Second Life Application Lithium batteries for EVs are subject to two mechanisms that shorten life-time and deteriorate performance, namely cycling capacity loss and calendar capacity loss.The former depends on the number of battery charging/discharging cycles, while the latter depends on the state of charge, aging time, and expose of the battery to high temperatures.Specifically, cycling capacity loss is typically attributed to the formation of solid electrolyte interphase (SEI) layer, structural changes in the electrodes and loss of lithium during battery charging/discharging. Calendar capacity loss is attributed to battery self-discharge and side reactions that occur during the energy storage period [29]. It is estimated that LFP batteries can support at least 2000-2500 cycles in electro-mobility applications, for example, daily use of a charge and discharge cycle for seven years, until the remaining capacity reduces to 80% of the initial battery capacity.This allows its use for another 1000-2000 cycles until the capacity reduces to 60% of the initial capacity.When this occurs, the aging process of the battery has advanced to point that the voltage drop does not allow further use of the battery [30]. 
After the end of the useful life of a battery for electro-mobility purposes, typical applications include its use as energy storage unit in smartgrids or uninterruptible power supply.The main characteristic of these applications is the lower stress that the battery cells suffer, enhancing thus the durability of the battery pack.In the context of this work, the primary use phase considers a 24 kWh LFP battery with efficiency of 80% used for 2500 days in an EV, while the second life application considers the case of using an LFP battery as an energy storage unit in a smart building for 1500 days (or equivalently, four years), taking into account the average home consumption in Spain in 2010.In the scenarios that refer to the use of the same battery in the primary and secondary application, it is further assumed that the efficiency drops from 80% to 75% due to the aging of the battery. Scenarios For the purposes of this work, the LCA study examines five different scenarios, i.e., a base scenario as reference and four alternative scenarios.The base scenario for the LFP battery includes the following stages: battery manufacture, use phase in the EV for 2500 cycles, disposal once it reaches the end of life for automotive purposes (considering the case of treatment of incineration and then landfilling of the leftover residues) and second use phase assuming a new EV battery with the same specifications for another 2500 cycles.As this scenario considers the life cycle of two batteries in an EV, it exceeds the functional unit of time, however it is used only as an indicative reference for the comparison of the four alternative scenarios that consider the use of the battery as energy storage unit in a building. The first two alternative scenarios consider the secondary application of an EV battery in a smart building for stationary energy storage using electricity from the grid.Specifically, the stages of scenario 1 are as follows: LFP battery manufacture, use phase in the EV for 2500 cycles (until the capacity drops to 80% of the initial value), second life application to the smart building for additional 1500 cycles until the capacity degradation does not allow more uses (i.e., decrease of battery capacity down to 60% of initial value), and battery disposal with the same treatment as it is assumed in the base scenario.The structure of scenario 2 is the same with the base scenario until the disposal of the initial battery, with the difference being that a new battery with a smaller capacity is manufactured for storing the energy from the grid (based on the Spanish electricity mix) and supply it to the smart building for 1500 cycles.The other two alternative scenarios, namely scenarios 3 and 4, are similar to scenarios 1 and 2, but instead of evaluating the use of the battery in smart building applications by storing energy provided from the grid, the energy is supplied by PVs.Therefore, additional environmental burden is allocated in scenarios 3 and 4 for the manufacture of PVs. Figure 1 illustrates the phases included in each alternative scenario. 
Scenario 1 Scenario 1 is based on the idea of reutilizing the LFP batteries in smart buildings once they have reached the end of life for electro-mobility purposes due to the degradation of the battery (80% of the initial capacity).This scenario includes four stages: (i) the manufacturing process of the battery, which is the same for all scenarios; (ii) the primary use phase of the battery in the EV for 2500 days; (iii) the secondary use phase (second life application) as energy storage in a smart Scenario 1 Scenario 1 is based on the idea of reutilizing the LFP batteries in smart buildings once they have reached the end of life for electro-mobility purposes due to the degradation of the battery (80% of the initial capacity).This scenario includes four stages: (i) the manufacturing process of the battery, which is the same for all scenarios; (ii) the primary use phase of the battery in the EV for 2500 days; (iii) the secondary use phase (second life application) as energy storage in a smart building using the Spanish electricity mix for 1500 days (after the degradation of the battery due to the use in the EV); and (iv) the disposal of the battery once it reaches its end of life.At this point, it is noted that the environmental impact of the disposal stage is left out of the scope of this comparative analysis, and thus not evaluated, due to the lack of relevant data for battery recycling options (that represent more realistically the possible end-of-life treatment options for the EV batteries) in the life cycle inventory employed in this work; yet this assumption still provides a valid basis of comparison given that the disposal stage is common in all the scenarios under study.The single score of the LFP battery for scenario 1 with respect to the manufacturing process, use phase and second life application, obtained by using the Eco-indicator 99 method and disaggregated per impact category, is given in Table 1.The results obtained in this scenario show that the stage with the lowest overall environmental impact is the production of the battery, followed by its use in the EV and finally, the most harmful stage for the environment is the second use phase of the battery, mainly caused by the large quantity of energy supplied to the battery by the grid, taking into account the Spanish electricity mix.Given that the source of electricity is the same in both use phases (primary and secondary), the higher environmental impact caused by the second life application is due to the higher energy demand in the building in comparison with the EV.In this context, an important factor to consider is the lower efficiency of the battery during the smart building application, leading to higher losses of electricity during the charging and discharging phases, and thus higher environmental impact. Figure 2 presents the contribution of each stage of the battery life cycle in each impact category, indicating that there are significant differences between the categories for the overall environmental impact.In detail, the three main categories that influence the final result are the fossil fuels, respiratory inorganics and carcinogens, followed by climate change. is the lower efficiency of the battery during the smart building application, leading to higher losses of electricity during the charging and discharging phases, and thus higher environmental impact. 
Figure 2 presents the contribution of each stage of the battery life cycle in each impact category, indicating that there are significant differences between the categories for the overall environmental impact.In detail, the three main categories that influence the final result are the fossil fuels, respiratory inorganics and carcinogens, followed by climate change. Scenario 2 In contrast to scenario 1, scenario 2 is not based on the idea of reutilizing the LFP batteries once they have reached the end of life for electro-mobility purposes, but instead, a new battery with a smaller capacity of 12 kWh is manufactured and utilized for the smart building application, in replacement of the first battery with the degraded capacity.Specifically, this scenario includes five stages: (i) the manufacturing process of the first battery; (ii) the use phase of the first battery in the EV for 2500 days; (iii) the disposal of the first battery; (iv) the manufacturing process of the second battery with the same technology but smaller capacity; and (v) the use phase of the second battery (second life application) in a smart building using the Spanish electricity mix for 1500 days.Similarly to scenario 1, the evaluation of the disposal phase is omitted due to the lack of relevant data for battery recycling options (that represent more realistically the possible end-of-life treatment options for the EV batteries) in the life cycle inventory employed in this work, given that it is common for all the alternative scenarios under study.Table 2 shows the single score of the two LFP batteries for scenario 2 with respect to their manufacturing process and corresponding use phases, obtained by using the Eco-indicator 99 method and disaggregated per impact category. Scenario 2 In contrast to scenario 1, scenario 2 is not based on the idea of reutilizing the LFP batteries once they have reached the end of life for electro-mobility purposes, but instead, a new battery with a smaller capacity of 12 kWh is manufactured and utilized for the smart building application, in replacement of the first battery with the degraded capacity.Specifically, this scenario includes five stages: (i) the manufacturing process of the first battery; (ii) the use phase of the first battery in the EV for 2500 days; (iii) the disposal of the first battery; (iv) the manufacturing process of the second battery with the same technology but smaller capacity; and (v) the use phase of the second battery (second life application) in a smart building using the Spanish electricity mix for 1500 days.Similarly to scenario 1, the evaluation of the disposal phase is omitted due to the lack of relevant data for battery recycling options (that represent more realistically the possible end-of-life treatment options for the EV batteries) in the life cycle inventory employed in this work, given that it is common for all the alternative scenarios under study.Table 2 shows the single score of the two LFP batteries for scenario 2 with respect to their manufacturing process and corresponding use phases, obtained by using the Eco-indicator 99 method and disaggregated per impact category.As expected, the results of scenario 2 show that the stage with the lowest overall environmental impact is the production of the second (smaller) battery for the second life application, followed by the production of the first battery for the EV, the use phase of the first battery in the EV and finally, the use phase of the second battery, which is the most harmful stage, mainly due to 
the large quantity of energy supplied to the battery using the Spanish electricity mix. Taking into account that the source of electricity is the same in both use phases (primary and secondary), the higher environmental impact caused by the second life application is due to the higher energy demand in the building in comparison with the EV. Consequently, if the energy supplied to the building is lower, the environmental impact of this stage will then also be lower. A close examination of Tables 1 and 2 reveals that the environmental impact of the second life application in scenario 2 is reduced when compared to that of scenario 1, as a result of the higher efficiency of the new battery used in scenario 2 instead of the existing battery that is degraded due to aging effects. Nevertheless, the additional environmental burden for the manufacture of the second LFP battery results in a higher total impact in scenario 2. Figure 3 illustrates the impact of the aforementioned stages of the batteries' life cycle per category, indicating that there are significant differences between the categories for the overall environmental impact. Similarly to scenario 1, the categories with the highest impact in scenario 2 are the fossil fuels, respiratory inorganics and carcinogens, followed by climate change.

Scenario 3

Scenario 3 considers the reuse of the LFP batteries once they have reached the end of life for electro-mobility purposes as in scenario 1, but with the main difference of using PVs as the electricity source for the second life application in smart buildings. This scenario includes four stages: (i) the manufacturing process of the battery; (ii) the primary use phase of the battery in the EV for 2500 days; (iii) the secondary use phase (second life application) as energy storage in a smart building using PVs as the energy source for 1500 days (after the degradation of the battery due to the use in the EV); and (iv) the disposal of the battery once it reaches its end of life. Similarly to the previous scenarios, the evaluation of the disposal phase is omitted due to the lack of relevant data for battery recycling options (that represent more realistically the possible end-of-life treatment options for the EV batteries) in the life cycle inventory employed in this work, given that it is common for all the alternative scenarios under study. Table 3 presents the single score of the LFP battery for scenario 3 with respect to the manufacturing process, use phase and second life application, obtained by using the Eco-indicator 99 method and disaggregated per impact category.
In contrast to scenario 1, the results obtained in scenario 3 show that the use of the battery in the EV is the stage with the highest overall environmental impact, given that the contribution of the second use phase of the battery in the smart building application is significantly reduced (by 21.7%) when using the PVs as the energy source. It is important to note that the environmental benefit from using a different energy source in the primary and secondary use phase is observed, despite the fact that the battery in the smart building application has a lower efficiency (due to the degradation), leading to higher losses of electricity during charging and discharging. Figure 4 depicts the contribution of the aforementioned stages of the battery life cycle in scenario 3 for each impact category, indicating that the three main categories with the highest overall environmental impact are the fossil fuels, respiratory inorganics and carcinogens, followed by climate change, similarly to the previous scenarios.
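To put a rough number on the efficiency effect just described, the short sketch below compares the electricity that must be supplied when the same energy is delivered through a fresh battery and through a degraded one. The demand figure and both round-trip efficiencies are placeholder assumptions for illustration only; they are not values taken from this study.

```python
# Illustrative only: efficiencies and energy demand are assumed placeholder
# values, not figures reported in this study.

def supplied_energy_kwh(energy_delivered_kwh: float, round_trip_efficiency: float) -> float:
    """Electricity that must be drawn from the source so that the battery
    can deliver `energy_delivered_kwh` to the building loads."""
    return energy_delivered_kwh / round_trip_efficiency

demand_kwh = 5000.0          # hypothetical energy delivered over the second-life period
eff_new_battery = 0.95       # hypothetical round-trip efficiency of a new battery
eff_degraded_battery = 0.85  # hypothetical efficiency after degradation in the EV

draw_new = supplied_energy_kwh(demand_kwh, eff_new_battery)
draw_degraded = supplied_energy_kwh(demand_kwh, eff_degraded_battery)

print(f"New battery draws      {draw_new:.0f} kWh")
print(f"Degraded battery draws {draw_degraded:.0f} kWh "
      f"(+{draw_degraded - draw_new:.0f} kWh, i.e. a higher use-phase impact)")
```

Whether this extra draw outweighs the burden of manufacturing a second, more efficient battery is precisely the trade-off examined in scenarios 2 and 4.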
Scenario 4

Scenario 4 is similar to scenario 2, which introduces a new battery with a smaller capacity of 12 kWh for the second life application after the first battery is degraded by its use in the EV, with the difference that the PVs are the energy source in the smart building (instead of using the Spanish electricity mix). Specifically, this scenario includes five stages: (i) the manufacturing process of the first battery; (ii) the use phase of the first battery in the EV for 2500 days; (iii) the disposal of the first battery; (iv) the manufacturing process of the second battery with the same technology but smaller capacity; and (v) the use phase of the second battery (second life application) in a smart building using the PVs as the energy source for 1500 days. Similarly to the previous scenarios, the evaluation of the disposal phase is omitted due to the lack of relevant data for battery recycling options (that represent more realistically the possible end-of-life treatment options for the EV batteries) in the life cycle inventory employed in this work, given that it is common for all the alternative scenarios under study. The single score of the two LFP batteries for scenario 4 with respect to their manufacturing process and corresponding use phases, obtained by using the Eco-indicator 99 method and disaggregated per impact category, is given in Table 4. In contrast to scenario 2, the results obtained in scenario 4 show that the use of the battery in the EV is the stage with the highest overall environmental impact, given that the contribution of the second use phase of the battery in the smart building application is significantly reduced (by 17.4%) when using the PVs as the energy source. It is also important to note that the impact of the second life application in scenario 4 (that assumes the use of a new battery) is slightly lower compared to that of scenario 3 (that assumes the use of the existing battery). However, the overall environmental impact results, and the additional environmental burden from the production of the new battery in scenario 4 in particular, suggest that it is more beneficial to use the same battery in the smart building application, despite the fact that it has a lower energy efficiency that implies higher electricity losses during charging and discharging.
The contribution of the aforementioned stages of the batteries' life cycle per impact category in scenario 4 is shown in Figure 5, indicating that the three main categories with the highest overall environmental impact are the fossil fuels, respiratory inorganics and carcinogens, followed by climate change, similarly to the previous scenarios.

Comparative Analysis of Scenarios

This section presents a comparative analysis of the scenarios under study on the basis of the global warming potential (GWP) indicator, which is a measure of how much energy the emissions of one unit of a gas will absorb over a given time interval relative to the emissions of one unit of CO2. The method employed in this work is the IPCC 2007 GWP 20a, thus the total contribution to global warming is calculated (in kg of CO2 equivalent) for a period of 20 years. The results are given in Figure 6 as a percentage in comparison to the scenario with the highest value, namely the base scenario (in grey color), which represents the production of two identical batteries and their use phase in an EV for 2500 days each, thus a total of 5000 days (see Section 2.5). As the duration of the base scenario differs from the functional unit in scenarios 1-4, the former is used only as a reference for comparison purposes. In descending order of GWP indicator values, scenario 2 (in blue color), which considers the production of a new smaller battery as an energy storage unit in a grid-connected smart building, is followed by scenario 1 (in yellow color), which refers to the reuse of the existing EV battery in the same building, scenario 4 (in green color), which represents the case that a new smaller battery is used for energy storage in a smart building with PVs, and scenario 3 (in red color), which assumes the reuse of the existing EV battery in the smart building powered by PVs. Table 5 shows the analytic results of GWP values disaggregated by process in each scenario.
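As a minimal sketch of how the normalisation shown in Figure 6 can be reproduced, the snippet below sums per-process GWP contributions for each scenario and expresses the totals as a percentage of the base scenario. All kg CO2-eq figures are placeholder assumptions chosen only to mirror the qualitative ordering described above; the actual values are those of Table 5.

```python
# Illustrative only: the kg CO2-eq figures below are placeholders,
# not the per-process GWP values reported in Table 5.

gwp_kg_co2_eq = {
    "base":       {"battery production (x2)": 2 * 1800.0, "EV use (5000 days)": 9000.0},
    "scenario 1": {"battery production": 1800.0, "EV use": 4500.0,
                   "2nd life, grid mix": 3500.0},
    "scenario 2": {"battery production": 1800.0, "EV use": 4500.0,
                   "2nd battery production": 900.0, "2nd life, grid mix": 3200.0},
    "scenario 3": {"battery production": 1800.0, "EV use": 4500.0,
                   "2nd life, PV": 1200.0},
    "scenario 4": {"battery production": 1800.0, "EV use": 4500.0,
                   "2nd battery production": 900.0, "2nd life, PV": 1100.0},
}

base_total = sum(gwp_kg_co2_eq["base"].values())
for scenario, processes in gwp_kg_co2_eq.items():
    total = sum(processes.values())
    print(f"{scenario}: {total:.0f} kg CO2-eq "
          f"({100 * total / base_total:.1f}% of the base scenario)")
```

With the real Table 5 values substituted in, the printed percentages correspond to the bars of Figure 6.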
On the one hand, the pairwise comparison of scenario 1 with scenario 3 and scenario 2 with scenario 4 confirms that the use of PVs in the smart building is beneficial in terms of GWP in relation to the use of the Spanish electricity mix, and thus highlights the importance of renewable energy sources on the reduction of the environmental impact. On the other hand, the pairwise comparison of scenario 1 with scenario 2, and scenario 3 with scenario 4, indicates that replacing the existing EV battery in the smart building application with a new one results in additional environmental burden due to its manufacturing process, despite the fact that the existing battery has a lower efficiency, and thus higher energy losses, as a consequence of its degradation during its use in the EV. Due to the lack of relevant data for recycling options of LFP batteries in the life cycle inventory employed in this work, the evaluation of the disposal phase is omitted in the frame of this analysis. It is however noted that the inclusion of the disposal phase would increase the total environmental impact in each scenario by the same amount in absolute terms, given that the disposal phase refers to the same battery and is common in all scenarios under study. Accordingly, this further implies that the
difference among the scenarios considered would be smaller in percentage terms.

Conclusions

This paper presents an LCA study to examine the environmental impact from the reuse of EV batteries, specifically of LFP technology, in smart buildings as a secondary application when they can no longer meet the requirements for electro-mobility purposes. The analysis of the scenarios under study clearly shows that there is significant environmental benefit from reusing the existing EV battery in the secondary application instead of manufacturing a new battery to be used for the same purpose and time frame, despite the lower efficiency, and thus higher losses, of the existing (degraded) battery due to its aging. Despite the fact that the present analysis does not take into account the disposal phase of the LFP batteries, neither in the primary nor in the secondary application, due to the lack of relevant data for battery recycling options in the life cycle inventory employed in this work, the addition of the corresponding contributions to the total environmental impact would further support the finding that it is environmentally beneficial to reuse the existing EV batteries in the second life application. Moreover, the LCA study exemplifies the dependence of the results on the energy source in the smart building application, given that the environmental impact is significantly reduced when the Spanish electricity mix is replaced with the energy supply from PVs. This further suggests that different results would be obtained in the case of a country with a different electricity mix.

Given that the option of battery recycling was not considered in the frame of this work due to the lack of data on this kind of processes, future work could be directed towards the inclusion of the battery recycling process in the scenarios under study, upon availability of relevant data, as a means of more realistically representing the possible end-of-life treatment options for the EV batteries. In addition, other interesting directions of future work include the examination of critical issues that have limited the reuse of EV batteries in practice, such as the risk of operating a battery without the warranty support from the manufacturer for the intended use, as well as cost factors related to the collection and transportation of used EV batteries, testing and repackaging of second life batteries, and their reinstallation to the second life application sites.

Figure 1. Graphical representation of stages included in each alternative scenario of the LCA study.
Figure 2. Impact of LFP battery manufacturing process, use phase and second life application per impact category in scenario 1.
Figure 3. Impact of two LFP batteries manufacturing process and their use phase per impact category in scenario 2.
Figure 4. Impact of LFP battery manufacturing process, use phase and second life application per impact category in scenario 3.
Figure 5. Impact of two LFP batteries manufacturing process and their use phase per impact category in scenario 4.
Figure 6. Comparison of scenarios based on GWP results (in percentage terms).
Table 1. Evaluation results of scenario 1. Note: The unit of measurement in all categories is the Eco-indicator Point (Pt).
Table 2. Evaluation results of scenario 2. Note: The unit of measurement in all categories is the Eco-indicator Point (Pt).
Table 3. Evaluation results of scenario 3. Note: The unit of measurement in all categories is the Eco-indicator Point (Pt).
Table 4. Evaluation results of scenario 4. Note: The unit of measurement in all categories is the Eco-indicator Point (Pt).
Table 5. Comparative analysis of scenarios based on GWP indicator.
2019-05-21T13:06:09.377Z
2019-05-01T00:00:00.000
{ "year": 2019, "sha1": "58a4c10c65c9bcbd144d8c50914651b4f234a175", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/11/9/2527/pdf?version=1556687683", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "58a4c10c65c9bcbd144d8c50914651b4f234a175", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Economics" ] }
118756912
pes2o/s2orc
v3-fos-license
On The Zariski Topology Of Automorphism Groups Of Affine Spaces And Algebras

We study the Zariski topology of the ind-groups of polynomial and free associative algebras $\Aut(K[x_1,...,x_n])$ (which is equivalent to the automorphism group of the affine space $\Aut(K^n)$) and $\Aut(K\langle x_1,...,x_n\rangle)$ via $\Ind$-schemes, toric varieties, approximations and singularities. We obtain some nice properties of $\Aut(\Aut(A))$, where $A$ is the polynomial or free associative algebra over a field $K$. We prove that all $\Ind$-scheme automorphisms of $\Aut(K[x_1,...,x_n])$ are inner for $n\ge 3$, and all $\Ind$-scheme automorphisms of $\Aut(K\langle x_1,...,x_n\rangle)$ are semi-inner. We also establish that any effective action of the torus $T^n$ on $\Aut(K\langle x_1,...,x_n\rangle)$ is linearizable provided $K$ is infinite. That is, it is conjugated to a standard one. As an application, we prove that $\Aut(K[x_1,...,x_n])$ cannot be embedded into $\Aut(K\langle x_1,...,x_n\rangle)$ induced by the natural abelianization. In other words, the {\it Automorphism Group Lifting Problem} has a negative solution. We explore the close connection between the above results and the Jacobian conjecture and the Kontsevich-Belov conjecture, and formulate the Jacobian conjecture for fields of any characteristic.

Introduction and main results

1.1. Automorphisms of K[x 1 , . . . , x n ] and K x 1 , . . . , x n . Let K be an arbitrary field. In this article we study the Zariski topology of the ind-groups of polynomial and free associative algebras Aut(K[x 1 , . . . , x n ]) (which is equivalent to the automorphism group of the affine space Aut(K n )) and Aut(K x 1 , . . . , x n ) via Ind-schemes, toric varieties, approximations and singularities. Automorphisms of Ind-schemes are closely related with the Jacobian conjecture. This conjecture is related with the proof of stable equivalence of the Jacobian and Dixmier conjectures saying that Aut(W n ) = End(W n ), where W n is the Weil algebra. In order to do it, in the papers [6,7], some monomorphism Aut(W n ) → Sympl(C 2n ) was constructed, and a natural question whether it is an isomorphism is raised. It means that the automorphism group remains the same after quantization of the standard symplectic structure. This monomorphism was defined by using a sufficiently large prime. In the paper [7] the following question was raised:

Question. Prove that this monomorphism is independent with respect to the choice of sufficiently large prime.

A precise formulation of this question in the paper [7] is as follows: For a finitely generated algebra R smooth over Z, does there exist a unique homomorphism φ R : Aut(W n )(R) → Aut(P n )(R ∞ ) such that ψ R = Fr * •φ R ? Here Fr * : Aut(P n )(R ∞ ) → Aut(P n )(R ∞ ) is the group homomorphism induced by the endomorphism Fr : R ∞ → R ∞ of the coefficient ring.

Question. In the above formulation, does the image of φ R belong to where i : R → R ∞ is the topological inclusion? In other words, does there exist a unique homomorphism φ can R : Aut(P n )(R) → Aut(P n )(R ⊗ Q) such that ψ R = Fr * •i * • φ can R , where P n is the free Poisson algebra?

Comparing the two morphisms φ and ϕ defined by two different ultra-filters, we get an element φϕ −1 of Aut Ind (Aut(W n )) (i.e. an automorphism preserving the structure of an infinite dimensional algebraic group). Describing this group would provide the solution of this question. In the spirit of the above we propose the following

Conjecture. All automorphisms of Sympl(C n ) as an Ind-scheme are inner.

The same conjecture can be proposed for Aut(W n ). We are focused on the investigation of the group of Aut(Aut(K[x 1 , . . .
, x n ])) and the corresponding noncommutative (free associative algebra) case. Question regarding the structure of this group was proposed by B.I.Plotkin, motivated by the theory of universal algebraic geometry. Wild automorphisms and the lifting problem. In 2004, the famous Nagata conjecture over a field K of characteristic zero was proved by Shestakov and Umirbaev [27,28] and a stronger version of the conjecture was proved by Umirbaev and Yu [31]. That is, let K be a field of characteristic zero. Every wild K[z]-automorphism (wild K[z]-coordinate) of K[z][x, y] is wild viewed as a K-automorphism (K-coordinate) of K[x, y, z]. In particular, the Nagata automorphism (x − 2y(y 2 + xz) − (y 2 + xz) 2 z, y + (y 2 + xz)z, z) (Nagata coordinates x − 2y(y 2 + xz) − (y 2 + xz) 2 z and y + (y 2 + xz)z) is (are) wild. In [31], a related question was raised: The lifting problem. Whether or not a wild automorphism (wild coordinate) of the polynomial algebra K[x, y, z] over a field K can be lifted to an automorphism (coordinate) of the free associative K x, y, z ? In our paper [8], based on the degree estimate [22,21], it was proved that any wild z-automorphism including the Nagata automorphism cannot be lifted as z-automorphism (moreover, in [9] we proved every z-automorphism of K x, y, z is stably tame and becomes tame after adding at most one variable). It means, if every automorphism can be lifted, then it provides an obstacle z ′ to z-lifting and the question to estimate such obstacle is naturally raised. In view of the above, naturally we could ask The automorphism group lifting problem. Whether Aut(K[x 1 , . . . , x n ]) is isomorphic to a subgroup of Aut(K x 1 , . . . , x n ) under the natural abelianization? The following examples shows this problem is interesting and nontrivial. Example 1. There is a surjective homomorphism (taking the absolute value) from C * onto R + . But R + is isomorphic to the subgroup R + of C * under the homomorphism. Example 2. There is a surjective homomorphism (taking the determinant) from GL n (R) onto R * . But obviously R * is isomorphic to the subgroup R * I n of GL n (R). In this article we prove that the automorphism group lifting problem has a negative answer. The lifting problem and the automorphism group lifting problem are closely related to the Kontsevich-Belov Conjecture (see Section 3.1). Consider a symplectomorphism ϕ : ϕ : The point is to choose a lifting ϕ in such a way that the degrees of all for n ≥ 3 is inner, i.e. is a conjugation via some automorphism. NAut means the group of nice automorphisms, i.e. automorphisms which can be approximated by tame ones (see definition 3.1). In characteristic zero case every automorphism is nice. For the group of automorphisms of a semi-group the similar results on set-theoretical level were obtained previously by A.Belov, R.Lipyanskii and I.Berzinsh [4,5]. All these questions (including Aut(Aut) investigations) are closely related to Universal Algebraic Geometry and were proposed by B.Plotkin. Equivalence of two algebras have same generalized identities and isomorphism of first order means semi-inner properties of automorphisms (see [4,5] for details). Automorphisms of the tame automorphism groups. Regarding the tame automorphism group, something can be done on the grouptheoretical level. In H.Kraft and I.Stampfli [20], the automorphism group of the tame automorphism group of polynomial algebra was brilliantly studied. 
In that paper, conjugation of elementary automorphisms by translations plays very important role. Our results in the current article are different. We calculate Aut(TAut 0 ) of the tame automorphism group TAut 0 preserving the origin (i.e. taking the augmentation ideal onto an ideal which is a subset of the augmentation ideal). This is technically more difficult, the advantage is that our methodology can be universally and systematically done for both commutative (polynomial algebra) case and noncommutative (free associative algebra) case. We see some problems in the shift conjugation approach for the noncommutative (free associative algebra) case, as did for commutative case in [20]. Any substitution of a ground field element for a variable can cause zero, for example in Lie polynomial [[x, y], z]. Note that calculation of Aut(TAut 0 ) (resp. Aut (TAut 0 ), Aut (Aut 0 )) imply also the same results for Aut(TAut) (resp. Aut (TAut), Aut (Aut)) according the approach of this article via stablization by the torus action. . , x n ) be tame automorphism groups preserving the augmentation ideal. Theorem 1.5. Any automorphism ϕ of G n (in the group theoretical sense) for n ≥ 3 is inner, i.e. is a conjugation via some automorphism. or the group TAut(K x 1 , . . . , x n ) (in the group theoretical sense) for n ≥ 4 is semi-inner, i.e. is a conjugation via some automorphism and/or mirror anti-automorphism. b) The same is true for E n , n ≥ 4. The case of TAut(K x, y, z ) is much more difficult. We can treat it only on the Ind-scheme level, but even then it is the most technical part of this paper (see section 5.2). b) The same is true for Aut Ind (E 3 ). By TAut we mean the tame automorphism group, Aut Ind is the group of Ind-scheme automorphisms (see section 2.2). Approximation allows us to formulate the famous Jacobian conjecture for any characteristic. Lifting of the automorphism groups. In this article we prove that the automorphism group of polynomial algebra over an arbitrary field K cannot be embedded into the automorphism group of free associative algebra induced by the natural abelianization. Example. Let M be the group of automorphisms of an affine space, and M j be set of all automorphisms in M with degree ≤ j. There is an interesting Question. Investigating the growth function on Ind-varieties. For example, the dimension of varieties of polynomial automorphisms of degree ≤ n. Note that coincidence of the growth functions for Aut(W n ) and Sympl(C 2n ) would imply Kontsevich-Belov conjecture [7]. Definition 2.2. The ideal I generated by variables x i is the augmentation ideal. The augmentation subgroup H n is the group of all automorphisms ϕ such that ϕ(x i ) ≡ x i mod I n . The set G n ⊃ H n is a group of automorphisms whose linear part is scalar, and ϕ(x i ) ≡ λx i mod I n (λ does not dependant on i). KB n : Does Aut(W n ) ≃ Sympl(C 2n ). A similar conjecture can be stated for endomorphisms KB n : Does End(W n ) ≃ Sympl End(C 2n ). If the Jacobian conjecture JC 2n is true, then these two conjectures are equivalent. W n = C[x 1 , . . . , x n ; ∂ 1 , . . . , ∂ n ] is the Weil algebra of differential operators. It is natural to approximate automorphisms by tame ones. There exists such approximation up to terms of any order not only in the situation of polynomial automorphisms, but also for automorphisms of Weil algebra, symplectomorphisms etc. However, naive approach fails. It is known that Aut(W 1 ) ≡ Aut 1 (K[x, y]) where Aut 1 means the Jacobian determinant is one. 
However, considerations from [25] shows that Lie algebra of the first group is derivations of W 1 and hence has no identities apart ones which have free Lie algebra, another coincidence of the vector fields which divergents to zero, and has polynomial identities. They cannot be isomorphic [6,7]. In other words, this group has two coordinate system non-smooth with respect to each other (but integral with respect to each other). One system provided by coefficients of differential operators, another with coefficients of polynomials, which are images ofx i ,ỹ i . The group Aut(W n ) can be embedded into Sympl(C 2n ), for any n. But the Lie algebra Der(W n ) has no polynomial identities apart from ones which have free Lie algebras, another coincidence of the vector fields preserving symplectic form and has polynomial identities. In the paper [25] functionals on m/m 2 were considered in order to define the Lie algebra structure. In the spirit of that we have the following Conjecture. The natural limit of m/m 2 is zero. It means that the definition of the Lie algebra has some functoriality problem and it depends on the presentation of (reducible) Ind-scheme. In his remarkable paper, Yu.Bodnarchuck [14] established a result similar to our Theorem 1.1 by using the Shafarevich results for tame automorphism group and for case when automorphism of Ind-scheme is regular in following sense: sent polynomials on coordinate functions (coordinate -coefficient before corresponding monomial) to coordinate functions. In this case tame approximation works (as well as for the symplectic case as well). For this case his method is similar to ours, but we display it here for self-contain-ness, and convenience of readers, and also to treat the noncommutative (free associative algebra) case. But in general, for regular functions, if the approximation via the Shafarevich approach is correct, then the Kontsevich-Belov conjecture (for isomorphism between Aut(W n ) and Sympl(K n )) would follow directly, which would be absurd. We would like to mention also the very recent paper of H. Kraft and I. Stampfli [20]. They show brilliantly that every automorphism of the group G n := Aut(A n ) of polynomial automorphisms of complex affine n-space A n = C n is inner up to some field automorphisms when restricted to the subgroup T G n of the tame automorphisms. They play on conjugation with translation. This generalizes a result of J.Deserti [15] who proved this for dimension two where all automorphisms are tame: T G 2 = G 2 . Our method is slightly different. We calculate automorphism of tame automorphism group preserving the origin (i.e. taking the augmentation ideal onto a subset of the augmentation ideal). In this case we cannot play on translations. One advantage of our approach is that we also established the same results for the noncommutative (free associative algebra) case, which could not be treated by the approaches of Bodnarchuck and that of Kraft and Stampfli. We always treat dimension more than two. In the sequel, we do not assume regularity in the sense of [14] but only assume that the restriction of a morphism on any subvariety is a morphism again. Note that morphisms of Ind-schemes Aut(W n ) → Sympl(C 2n ) has this property, but not regular in the sense of Bodnarchuk [14]. In order to make approximation work, we use the idea of singularity which allows us to prove the augmentation subgroup structure preserving, so approximation works in the case (not in all situations, in a much more complicated way). 
Consider the isomorphism Aut(W 1 ) ∼ = Aut 1 (K[x, y]). It has some strange property. Let us add a small parameter t. Then an element arbitrary close to zero with respect to t k does not go to zero arbitrarily, so it is impossible to make tame limit! There is a sequence of convergent product of elementary automorphisms, which is not convergent under this isomorphism. Exactly same situation happens for W n . These effects cause problems in the quantum field theory. 3.2. The Jacobian Conjecture for any characteristic. Naive formulation is not good because of example of mapping x → x − x p in characteristic p. Approximation provides a way to formulate a question generalizing the Jacobian conjecture for any characteristic and put it into framework of other questions. According to Anick [1], any automorphism of K[x 1 , . . . , x n ] if Char(K) = 0 can be approximated by tame ones with respect to augmentation subgroups H n . such that D.Anick [1] shown that if Char(K) = 0 any automorphism is nice. However, this is unclear in positive characteristic. Question. Is any automorphism over arbitrary field nice? The Jacobian conjecture for any characteristic Is any good endomorphism over arbitrary field an automorphism? Each good automorphism has Jacobian 1, and all such automorphisms are good (even nice) when Char(K) = 0. Similar notions can be formulated for the free associative algebra. Question. Is any automorphism of free associative algebra over arbitrary field nice? Now we came to generalization of the Jacobian conjecture to arbitrary characteristic: Question. Is any good endomorphism of free associative algebra over arbitrary field an automorphism? Approximation for the automorphism group of affine spaces. The approximation is the most important method of the current paper. In order to do it, we have to prove that ϕ ∈ Aut Ind (Aut preserves the structure of the augmentation subgroup. We treat here only the affine case. For symplectomorphisms for example, the situation is more complicate and we can treat just the general automorphism group. H n is a subgroup of elements identity modulo ideal (x 1 , . . . , x k ) n . . , x k ) n also for free associative case. This follows from the next two lemmas. Lemma 3.5. Let M be an automorphism of the polynomial algebra. Then A(t)MA(t) −1 has no singularities. i.e. It is an affine curve for t = 0 for any A(t) with the properties that A(t) dependent on parameter t such that eigenvalues are t n i and if and only if M ∈Ĥ n whereĤ n is homothety modulo the augmentation ideal. Proof. The 'If' part is obvious, because the sum k j=1 n i j is greater then n m and the homothety commutes with linear map hence conjugation of the homothety via the linear map is itself. We have to prove that if the linear part of ϕ does not satisfy the condition ( * ), then A(t)MA(t) −1 has a singularity at t = 0. Case 1. The linear partM of M is not a scalar matrix. Then after basis change it is not a diagonal matrix and has a non-zero coefficient on all position on the main diagonal except j-th it has t n i and on j-th position t n j . Then D(t)M D −1 (t) has (i, j) entry with the coefficient λt n i −n j and if n j > n i it has a singularity at t = 0. Let also n i < 2n j . Then the non-linear part of M does not produce singularities and cannot compensate the singularity of the linear part. So we are done. Case 2. The linear partM of M is a scalar matrix. Then conjugation of linear part can not produce singularities and we are interested just in the smallest non-linear term. 
Let ϕ ∈ H k \H k+1 . Due to a linear base change we can assume that ϕ( Let A(t) = D(t) be a diagonal matrix of the form (t n 1 , t n 2 , t n 1 , . . . , t n 1 ). The next lemma can be proved by concrete calculations: It is easy to see that if either k or n relatively prime with Char(K), then all the terms of degree k + n−1 does not cancel and ϕ ∈ H n+k−1 \H n+k . Now suppose that Char(K) ∤ n, then obviously n − 1 is relatively prime with Char(K). Consider mappings ψ 1 : x → x + y k , y → y, Lifting of automorphisms from Theorem 3.8. Any effective action of torus T n on K x 1 , . . . , x n is linearizable. That is, it is conjugated to a standard one. Proof. Similar to the proof of Theorem 4.1. As a consequence of the above theorem, we get Proposition 3.9. Let T n be standard torus action. Let T n its lifting to automorphism group of the free algebra. Then T n is also standard torus action. Proof. Consider the roots x i of this action. They are liftings of the coordinates x i . We have to prove that they generate the whole associative algebra. According to the reducibility of this action, all elements are product of eigenvalues of this action. Hence it is enough to prove that eigenvalues of this action can be presented as a linear combination of this action. This can be done as did by Byalickii-Birula [13]. Note that all propositions of previous section hold for free associative algebras. Proof of the Theorem 3.3 is similar. Hence we have the following This also implies that an automorphism group lifting, if exists, satisfies the approximation properties. Suppose Ψ : H → G be a group homomorphism such that its composition with natural projection is the identity map. Then (1) After some coordinate change ψ provide correspondence between standard torus actions x i → λ i x i and z i → λ i z i . (2) Images of elementary automorphisms are elementary automorphisms of the form (Hence image of tame automorphism is tame automorphism). (3) ψ(H n ) = G n . Hence ψ induces map between completion of the groups of H and G respect to augmentation subgroup structure. Proof of Theorem 1.9 Any automorphism, including the Nagata automorphism can be approximated via product of elementary automorphisms with respect to augmentation topology. In the case of the Nagata automorphism corresponding to all such elementary automorphisms fix all coordinates except x 1 , x 2 , Due to (2) and (3) 4.1. Reduction to the case when Ψ is identical on SL n . We follow [20] and [14] using the classical theorem of Byalickii-Birula [12,13]: Theorem 4.1 (Byalickii-Birula). Any effective action of torus T n on C n is linearizable. That is, it is conjugated to a standard one. Remark. An effective action of T n−1 on C n is linearizable [13,12]. There is a conjecture whether an action of T n−2 on C n is linearizable, established for n = 3. For codimension more than 2, counterexamples were constructed [2]. Remark. H.Kraft and I.Stampfli [20] proved (considering periodic elements in T that this action is not just abstract group action but also if Ψ ∈ Aut(Aut) its image of T is an algebraic group. In fact their proof is also applicable for free associative algebra. (It based on consideration of elements of finite order.) We use this result. Consider the standard action of torus T n on C n : In particular, Applying Lemma 4.5 and comparing the coefficients we get the following Lemma 4.6. Consider the diagonal T 1 action: x i → λx i . Then the set of automorphisms commuting with this action is exactly the linear automorphisms. 
Similarly (using Lemma 4.5) we obtain Lemmas 4.7, 4.9, 4.10: Lemma 4.7. a) Consider the following T 2 action: Then the set S of automorphisms commuting with this action generated with the following automorphisms b) Consider the following T 2 action: Then the set S of automorphisms commuting with this action generated with following automorphisms x i j j , (λ = (λ 2 , . . . , λ n ), β, λ j ∈ K). Remark. The similar statement for the noncommutative (free associative algebra) case is true, but one has to consider the setŜ of automorphisms x 1 → x 1 + H, x i → ε i x i , i > 1, (ε ∈ K, the polynomial H ∈ K x 2 , . . . , x n has multi-degree J, in non-commutative case it is not just monomial anymore). 4.9. Consider the following T 1 action: Then the set S of automorphisms commuting with this action generated with following automorphisms Lemma 4.11. Let n ≥ 3. Consider the following set of automorphisms (Numeration is cyclic, so for example x n+1 = x 1 ). Let β i = 0 for all i. Then all of ψ i simultaneously conjugated by torus action to ψ ′ i : . . , n in a unique way. Proof. Let α : x i → α i x i , then by Lemma 4.5 we obtain Comparing the coefficients of the quadratic terms, we see that it is sufficient to solve the system: because β i = 0 for all i, this system has unique solution. Remark. In the free associative algebra case, instead of βx 2 x 3 one has to consider βx 2 x 3 + γx 3 x 2 . 4.2. The lemma of Rips. Note that we have proved an analogue of Theorem 1.1 for tame automorphisms. Proof of Lemma 4.12. Let G be group generated by elementary transformations as in Lemma 4.11. We have to prove that G = TAut 0 , tame automorphism group fixing the augmentation ideal. We need some preliminaries. Lemma 4.13. The linear transformations and ψ : x → x, y → y, z → z + xy generate all the mappings of the following form Proof of Lemma 4.13. We proceed by induction. Suppose we have automorphism Conjugating by the linear transformation (z → y, y → z, x → x), we obtain the automorphism Composing this with ψ from the right side, we get the automorphism ϕ(x, y, z) : x → x, y → y + bx n−1 , z → z + yx + x n . Now we see that n and the lemma is proved. Corollary 4.14. Let Char(K) ∤ n (in particular, Char(K) = 0) and |K| = ∞. Then G contains all the transformations z → z + bx k y l , y → y, x → x such that k + l = n. Proof. For any invertible linear transformation ϕ : x → a 11 x + a 12 y; y → a 21 x + a 22 y, z → z; a ij ∈ K we have Note that sums of such expressions contains all the terms of the form bx k y l . Corollary is proven. 4.3. Generators of the tame automorphism group. Proof of Theorem 4.15. Then γ : α −1 ψα : x → x, y → y, z → z + xy + 2bxy n + by 2n . Composing with ψ −1 and ψ 2n −2b we get needed α 2b n (x, y, z) : x → x, y → y, z → z + 2byx n , b ∈ K. The proof is similar to the proof of Corollary 4.14. Note that either n or n + 1 is not a multiple of p so we have x → x, y → y, z → z + xy generate all the mappings of the following We have proved 4.12 for the three variable case. In order to treat the case when n ≥ 4 we need one more lemma. We have to prove the same for other type of monomials: Proof. Let M = a n−1 i=1 x k i i . Consider the automorphism α : Here the polynomial It has the following form where N i are monomials such that none of them is proportional to a power of x 1 . According to Corollary 4.8, Ψ(ψ M ) = ψ bM for some b ∈ K. We need only to prove that b = 1. Suppose the contrary, b = 1. Then From the other hand Comparing the factors ψ Then ϕ preserves all tame automorphisms. 
For free associative algebras, we note that any automorphism preserving torus action preserves also symmetric and the skew symmetric elementary automorphisms. The first property follows from Lemma 4.9. The second follows from the fact that the skew symmetric automorphisms commute with automorphisms of the following type and this property distinguish them from elementary automorphisms of the type Theorem 1.2 follows from the fact that only forms βx 2 x 3 + γx 3 x 2 corresponding to multiplication preserving the associative law when either β = 0 or γ = 0 and the approximation issue (see section 3.3). Proposition 5.3. The group G containing all linear transformations and mappings x → x, y → y, z → z + xy, t → t contains also all the transformations of form x → x, y → y, z → z + P (x, y), t → t. Proof. It is enough to prove that G contains all transformations of the following form x → x, y → y, z → z + aM, t → t; a ∈ K, M is monomial. Step 1. Let Automorphism φ −1 • α • φ is the composition of automorphisms β : x → x, y → y, z → z, t → t + M and γ : x → x, y → y, z → z, t → t + zx k . β is conjugated to the automorphism β ′ : x → x, y → y, z → z + M, t → t by the linear automorphism x → x, y → y, z → t, t → z, similarly γ is conjugated to automorphism γ ′ : x → x, y → y, z → z + yx k , t → t. We have reduced to the case when M = x k or M = yx k . Step 2. Consider automorphisms α : x → x, y → y + x k , z → z, t → t and β : x → x, y → y, z → z, t → t + azy. Then It is composition of automorphism γ : x → x, y → y, z → z, t → t + azx k which is conjugate to needed automorphism γ ′ : x → x, y → y, z → z + yx k , t → t, and automorphism δ : x → x, y → y, z → z, t → t + azy which is conjugate to the automorphism δ ′ : x → x, y → y, z → z + axy, t → t and then to the automorphism δ ′′ : x → x, y → y, z → z + xy, t → t (using similarities). We reduced the problem to proving inclusion G ∋ ψ M , M = x k for all k. Step 3. Obtaining the automorphism x → x, y → y +x n , z → z, t → t. Similar to the commutative case of k[x 1 , . . . , x n ] (see section 4). Proposition 5.3 is proved. Let us formulate the Remark after Lemma 4.7 as follows: Lemma 5.4. Consider the following T 2 action: Then the set S of automorphisms commuting with this action generated with following automorphisms x 1 → x 1 + H, x i → x i ; i > 1, H is homogenous polynomial of the same degree as n j=2 x i j j (λ = (λ 2 , . . . , λ n ), β, λ j ∈ K). Proposition 5.3 and Lemma 5.4 imply Corollary 5.5. Let Ψ ∈ Aut 0 (TAut(K x 1 , . . . , x n )) stabilizing all elements of torus and linear automorphisms, Let P = I P I , P I -homogenous component of P of multi-degree I. b) P Ψ = I P Ψ I ; where P Ψ I -homogenous of multi-degree I. c) If I has positive degree respect to one or two variables, then P Ψ I = P I . Q consists of all terms containing one of the variables x 3 , . . . , x n−1 , P Q consists of all terms containing just variables Q for all Q then P = R. We get the following Proposition 5.9. Let n ≥ 4. Let Ψ ∈ Aut(TAut 0 (K x 1 , . . . , x n )) stabilizing all elements of torus and linear automorphisms and automor- Let n ≥ 4. Let Ψ ∈ Aut(TAut 0 (K x 1 , . . . , x n )) stabilizing all elements of torus and linear automorphisms. We have to prove that Ψ(EL) = EL or Ψ(EL) : x i → x i ; i = 1, . . . , x n−1 , x n → x n + x 2 x 1 . In the last case Ψ is the conjugation with mirror anti-automorphism of K x 1 , . . . , x n . 
In any case The next lemma can be obtained by direct computation: It mean that * is either associative or non-alternative operation. Now we are ready to prove Proposition 5.9. Consider the automor- Let δ : x → x, y → y, z → z + x 2 , t → t, ǫ : x → x, y → y, z → z, t → t + zy. On the other hand we have We also have ε = γ. Equality Ψ(ε) = Ψ(γ) is equivalent to the equality x * (x * y) = x 2 y. We conclude. (TAut(K x, y, z )). This is the most technical part of this article. We are unable to treat this situation on the group theoretical level. In this section we shall determine just Aut TAut 0 (K x, y, z ), i.e. Ind-scheme automorphisms and prove Theorem 1.8. We use the approximation results of Section 3.3. In the sequel, we suppose that Char(k) = 2. {a, b, c} * denotes associator of a, b, c respect to operation * , i.e. The group Aut Ind Let Ψ ∈ TAut 0 (K x, y, z ) be an Ind-scheme automorphism, stabilizing linear automorphisms. In this section, we work only on the Ind-scheme level. Proof. Consider the automorphism t : x → x, y → y, z → z + xy. Then Ψ(t) : x → x, y → y, z → z + x * y, x * y = axy + byx. Due to conjugation on the mirror anti-automorphism and coordinate exchange one can suppose that x * y = xy + λyx. We have to prove that λ = 0. In that case Ψ = Id. Lemma 5.14. a) Let φ l : x → x, y → y, z → z + y 2 x. Then Ψ(φ) : Proof. According to the results of the previous section we have Ψ(φ l ) : x → x, y → y, z → z + P (y, x) where P (y, x) is homogenous of degree 2 respect to y and degree 1 respect to x. We have to prove that H(y, x) = P (y, x) − y * (y * x) = 0. Proof. a) can be obtained by direct computation. b) follows from a) and Lemma 5.12. We need some auxiliary lemmas. The first is an analogue of the hiking procedure from [19,3]. The next lemma provides some translation between language of polynomials and group action language. It is similar to the hiking process [3,19]. Then And degree of all monomials of R ′ strictly grater then N, degree of all monomials of Q grater equal N. degree of all monomials of S strictly grater then N, degree of all monomials of T grater equal N. Proof. a) Direct calculation, b) It follows from a). Remark. In the case of characteristic zero, the condition of K to be algebraically closed can be released. After hiking of several steps, we need to prove just Lemma 5.18. Let Char(K) = 0, n is a positive integer. Then there exist k 1 , . . . , k s ∈ Z and λ 1 , . . . , λ s ∈ K such that Using this lemma we can cancel all terms in the product in the lemma 5.17 but the constant. The proof of Lemma 5.18 for any field of zero characteristics can be obtained based on the following observation: λ i ) n − j (λ 1 + · · · + λ i + · · · + λ n ) n + · · · + +(−1) n−k x i and if m < n then Lemma 5.19 allows us to replace n-th powers by product of different constant, then statement of Lemma 5.18 became easy. Proof. Similar to the proof of Theorem 3.2. Our goal is to prove thatΨ(P ) = P for all P if Ψ stabilizes the linear automorphisms andΨ(xy) = xy. N is the sum of terms of degree strictly greater than k + l. It means that g = φ • L, L ∈ H k+l+1 . We shall use theorem 3.2. Applying Ψ we get the result because Ψ(ϕ i ) = ϕ i , i = 1, 2, 3 and ϕ(H n ) ⊆ H n for all n. The lemma is proved. For any monomial M = M(x, y) we shall define an automorphism ϕ M : x → x, y → y, z → z + M. We also define the automorphisms φ e k : x → x, y → y + zx k , z → z and φ o k : x → x + zy k , y → y, z → z. We shall treat case of even s. Odd case is similar. 
Let D e zx k be the derivation of K x, y, z such that D e zx k (x) = 0, D e zx k (y) = zx k , D e zx k (z) = 0. Similarly D o zy k be derivation of k x, y, z such that D o zy k (y) = 0, D o zx k (x) = zy k , D zy k (z) o = 0. The next lemma can be obtained via direct computation: As the conclusion of this article, we would like to raise the following questions. (1) Is it true that any automorphism ϕ of Aut(K x 1 , . . . , x n ) (in the group theoretical sense) for n = 3 is semi-inner, i.e. is a conjugation via some automorphism or mirror anti-automorphism. (2) Is it true that Aut(K x 1 , . . . , x n ) is generated by affine automorphisms and automorphism x n → x n + x 1 x 2 , x i → x i , i = n? For n = 3 answer is negative, see Umirbaev [30], see also Drensky and Yu [16]. For n ≥ 4 we think the answer is positive. (3) Is it true that Aut(K[x 1 , . . . , x n ]) is generated by the linear automorphisms and automorphism x n → x n +x 1 x 2 , x i → x i , i = n? For n = 3 the answer is negative, see the proof of the Nataga conjecture [27,28,31]. For n ≥ 4 it is plausible that the answer is positive. (4) Is any automorphism ϕ of Aut(K x 1 , . . . , x n ) (in the group theoretical sense) for n = 3 is semi-inner? (5) Is it true that the conjugation in Theorems 1.3 and 1.7 can be done by some tame automorphism? Suppose ψ −1 ϕψ is tame automorphism for any tame ϕ. Does it follow that ψ is tame? (6) Prove Theorem 1.8 for Char(k) = 2. Does it hold on the set theoretical level, i.e. Aut(TAut(K x, y, z )) are generated by conjugations on automorphism or mirror anti-automorphism? The similar questions can be proposed for nice automorphisms. Acknowledgements The authors would like to thank to J.P. Furter, T. Kambayashi, H.Kraft, R.Lipyanski, Boris Plotkin, Eugeny Plotkin, Andrey Regeta, and M. Zaidenberg for stimulating discussion. We are grateful to Eliahu Rips as he kindly agrees to include and use his crucial results. The authors also thank Shanghai University, Shanghai, Jilin University, Changchun and South University of Science and Technology of China, Shenzhen for warm hospitality and stimulating atmosphere during their visits, when part of this project was carried out.
2016-12-13T19:59:54.000Z
2012-07-09T00:00:00.000
{ "year": 2012, "sha1": "722232fce1ed2eeaed1f74ed64c20706937f1f84", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "722232fce1ed2eeaed1f74ed64c20706937f1f84", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
226262320
pes2o/s2orc
v3-fos-license
Point to the Expression: Solving Algebraic Word Problems Using the Expression-Pointer Transformer Model

Solving algebraic word problems has recently emerged as an important natural language processing task. To solve algebraic word problems, recent studies suggested neural models that generate solution equations by using 'Op (operator/operand)' tokens as a unit of input/output. However, such neural models suffered from two issues: expression fragmentation and operand-context separation. To address each of these two issues, we propose a pure neural model, Expression-Pointer Transformer (EPT), which uses (1) 'Expression' tokens and (2) operand-context pointers when generating solution equations. The performance of the EPT model is tested on three datasets: ALG514, DRAW-1K, and MAWPS. Compared to the state-of-the-art (SoTA) models, the EPT model achieved a comparable performance accuracy in each of the three datasets: 81.3% on ALG514, 59.5% on DRAW-1K, and 84.5% on MAWPS. The contribution of this paper is two-fold: (1) We propose a pure neural model, EPT, which can address the expression fragmentation and the operand-context separation. (2) The fully automatic EPT model, which does not use hand-crafted features, yields comparable performance to existing models using hand-crafted features, and achieves better performance than existing pure neural models by at most 40%.

Introduction

Solving algebraic word problems has recently become an important research task in that automatically generating solution equations requires understanding natural language. Table 1 shows a sample algebraic word problem, along with corresponding solution equations that are used to generate answers for the problem.

Table 1:
Problem: One number is eight more than twice another and their sum is 20. What are their numbers?
Numbers: 1('one'), 8('eight'), 2('twice'), 20
Equations: x0 − 2x1 = 8, x0 + x1 = 20
Answers: (16, 4)

To solve such problems with deep learning technology, researchers recently suggested neural models that generate solution equations automatically (Huang et al., 2018; Amini et al., 2019; Chiang and Chen, 2019). However, suggested neural models showed a fairly large performance gap compared to existing state-of-the-art models based on hand-crafted features in popular algebraic word problem datasets, such as ALG514 (44.5% for pure neural model vs. 83.0% for using hand-crafted features) (Huang et al., 2018). To address the large performance gap in this study, we propose a larger unit of input/output (I/O) token called "Expressions" for a pure neural model. Figure 1 illustrates conventionally used "Op (operator/operands)" versus our newly proposed "Expression" tokens. To improve the performance of pure neural models that can solve algebraic word problems, we identified two issues that can be addressed using Expression tokens, which are shown in Figure 1: (1) expression fragmentation and (2) operand-context separation. First, the expression fragmentation issue is a segmentation of an expression tree, which represents a computational structure of equations that are used to generate a solution. This issue arises when Op, rather than the whole expression tree, is used as an input/output unit of a problem-solving model. For example, as shown in Figure 1 (a), using Op tokens as an input to a problem-solving model disassembles a tree structure into operators ("×") and operands ("x1" and "2").
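To make the contrast concrete before the Expression-based alternative is introduced below, the following sketch writes out simplified versions of the two decoding targets for the first equation of Table 1, x0 − 2x1 = 8. The token names, the pointer format, and the traversal order are illustrative assumptions for this example only and do not reproduce the paper's exact token vocabulary.

```python
# Illustrative sketch only: token names and the pointer format are simplified
# assumptions, not the paper's exact vocabulary.

problem = "One number is eight more than twice another and their sum is 20."
words = problem.replace(".", "").split()

# (a) "Op" tokens: prefix traversal of the tree =( -(x0, *(N2, x1)), N1 ),
#     where the stated numbers 8 and 2 become abstract symbols N1 and N2.
op_tokens = ["=", "-", "x0", "*", "N2", "x1", "N1"]

# (c) "Expression" tokens: each token bundles an operator with its operands, and
#     a number is referenced by a pointer to the word position where it occurs.
expression_tokens = [
    ("VAR", []),                                      # R0: introduce the variable x0
    ("VAR", []),                                      # R1: introduce the variable x1
    ("*",   [("ptr", words.index("twice")), "R1"]),   # R2 = 2 * x1 (2 pointed to by 'twice')
    ("-",   ["R0", "R2"]),                            # R3 = x0 - 2*x1
    ("=",   ["R3", ("ptr", words.index("eight"))]),   # x0 - 2*x1 = 8 (8 pointed to by 'eight')
]

for i, (operator, operands) in enumerate(expression_tokens):
    print(f"R{i}: {operator} {operands}")
```

Each Expression token in (c) carries its whole subtree and keeps a pointer back to the word ('eight', 'twice') that supplied the number, which is the property exploited in Figure 1 (c).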
Meanwhile, we propose using the Table 1 for the (a) expression fragmentation issue, (b) operandcontext separation issue, and (c) our solution for these two issues. "Expression" (×(x 1 , 2)) token, which can explicitly capture a tree structure as a whole, as shown in Figure 1 (c). The second issue of operand-context separation is the disconnection between an operand and a number that is associated with the operand. This issue arises when a problem-solving model substitutes a number stated in an algebraic word problem into an abstract symbol for generalization. As shown in Figure 1 (b), when using an Op token, the number 8 is changed into an abstract symbol 'N 1 '. Meanwhile, when using an Expression token, the number 8 is not transformed into a symbol. Rather a pointer is made to the location where the number 8 occurred in an algebraic word problem. Therefore, using such an "operand-context pointer" enables a model to access contextual information about the number directly, as shown in Figure 1 (c); thus, the operand-context separation issue can be addressed. In this paper, we propose a pure neural model called Expression-Pointer Transformer (EPT) to address the two issues above. The contribution of this paper is two-fold; 1. We propose a pure neural model, Expression-Pointer Transformer (EPT), which can address the expression fragmentation and operandcontext separation issues. 2. The EPT model is the first pure neural model that showed comparable accuracy to the existing state-of-the-art models, which used handcrafted features. Compared to the state-ofthe-art pure neural models, the EPT achieves better performance by about 40%. In the rest of the paper, we introduce existing approaches to solve algebraic word problems in Section 2. Next, Section 3 introduces our proposed model, EPT, and Section 4 reports the experimental settings. Then in Section 5, results of two studies are presented. Section 5.1 presents a performance comparison between EPT and existing SoTA models. Section 5.2 presents an ablation study examining the effects of Expression tokens and applying operand-context pointers. Finally, in Section 6, a conclusion is presented with possible future directions for our work. Related work Our goal is to design a pure neural model that generates equations using 'Expression' tokens to solve algebraic word problems. Early attempts for solving algebraic word problems noted the importance of Expressions in building models with hand-crafted features (Kushman et al., 2014;Zhou et al., 2015;. However, recent neural models have only utilized 'Op (operator/operand)' tokens (Wang et al., 2017;Amini et al., 2019;Chiang and Chen, 2019;Huang et al., 2018;, resulting in two issues: (1) the expression fragmentation issue and (2) the operandcontext separation issue. In the remaining section, we present existing methods for tackling each of these two issues. To address the expression fragmentation issue, researchers tried to reflect relational information between operators and operands either by using a two-step procedure or a single step with sequenceto-sequence models. Earlier attempts predicted operators and their operands by using a two-step procedure. Such early models selected operators first by classifying a predefined template (Kushman et al., 2014;Zhou et al., 2015;, then in the second step, operands were applied to the template selected in the first step. Other models selected operands first before constructing expression trees with operators in the second step . 
However, such two-step procedures in these early attempts Input Output Expression token Meaning Secondly, there were efforts to address the operand-context separation issue. To utilize contextual information of an operand token, researchers built hand-crafted features that capture the semantic content of a word, such as the unit of a given number Koncel-Kedziorski et al., 2015;Zhou et al., 2015;Roy and Roth, 2017) or dependency relationship between numbers (Kushman et al., 2014;Zhou et al., 2015;. However, devising hand-crafted input features was timeconsuming and required domain expertise. Therefore, recent approaches have employed distributed representations and neural models to learn numeric context of operands automatically (Wang et al., 2017;Huang et al., 2018;Chiang and Chen, 2019;Amini et al., 2019). For example, Huang et al. (2018) used a pointer-generator network that can point to the context of a number in a given math problem. Although Huang's model can address the operand-context separation issue using pointers, their pure neural model did not yield a comparable performance to the state-of-the-art model using hand-crafted features (44.5% vs. 83.0%). In this paper, we propose that by including additional pointers that utilize the contextual information of operands and neighboring Expression tokens, performance of pure neural models can improve. Figure 2 shows the proposed Expression-Pointer Transformer (EPT) 1 model, which adopts the encoder-decoder architecture of a Transformer model (Vaswani et al., 2017). The EPT utilizes the ALBERT model (Lan et al., 2019), a pretrained language model, as the encoder. The encoder input is tokenized words of the given word problem, and encoder output is the encoder's hidden-state vectors that denote numeric contexts of the given problem. EPT: Expression-Pointer Transformer After obtaining the encoder's hidden-state vectors from the ALBERT encoder, the transformer decoder generates 'Expression' tokens. The two decoder inputs are Expression tokens and the ALBERT encoder's hidden-state vectors, which are used as memories. For the given example problem, the input is a list of 8 Expression tokens shown in Table 2. We included three special commands in the list: VAR (generate a variable), BEGIN (start an equation), and END (gather all equations). Following the order specified in the list of Table 2, the EPT receives one input Expression at a time. For the ith Expression input, the model computes an input vector v i . The EPT's decoder then transforms this input vector to a decoder's hidden-state vector d i . Finally, the EPT predicts the next Expression token by generating the next operator and operands simultaneously. To produce 'Expression' tokens, two components are modified from the vanilla Transformer: input vector and output layer. In the following subsections, we explain the two components. Input vector of EPT's decoder The input vector v i of ith Expression token is obtained by combining operator embedding f i and operand embedding a ij as follows: where FF * indicates a feed-forward linear layer, and Concat(·) means concatenation of all vectors inside the parentheses. All the vectors, including v i , f i , and a ij , have the same dimension D. Formulae for computing the two types of embedding vectors, f i and a ij are stated in the next paragraph. For the operator token f i of ith Expression, the EPT computes the operator embedding vector f i as in Vaswani et al. 
(2017)'s setting: where E * (·) indicates a look-up table for embedding vectors, c * denotes a scalar parameter, and LN * (·) and PE(·) represent layer normalization (Ba et al., 2016) and positional encoding (Vaswani et al., 2017), respectively. The embedding vector a ij , which represents the jth operand of ith Expression, is calculated differently according to the operand a ij 's source. To reflect contextual information of operands, three possible sources are utilized: problem-dependent numbers, problem-independent constants, and the result of prior Expression tokens. First, problemdependent numbers are numbers provided in an algebraic problem (e.g., '20' in Table 1). To compute a ij of a number, we reuse the encoder's hidden-state vectors corresponding to such number tokens as follows: where u * denotes a vector representing the source, and e a ij is the encoder's hidden-state vector corresponding to the number a ij . 2 Second, problemindependent constants are predefined numbers that are not stated in the problem (e.g., 100 is often used for percentiles). To compute a ij of a constant, we use a look-up table E c as follows: Note that LN a , c a are shared across different sources. Third, the result of the prior Expression token is an Expression generated before the ith Expression (e.g., R 0 ). To compute a ij of a result, we utilize the positional encoding as follows 3 : where k is the index where the prior Expression a ij generated. Output layer of EPT's decoder The output layer of the EPT's decoder predicts the next operator f i+1 and operands a i+1,j simultaneously when the ith Expression token is provided. First, the next operator, f i+1 , is predicted as follows: where σ(k|x) is the probability of selecting an item k under a distribution following the output of softmax function, σ(x). Second, to utilize the context of operands when predicting an operand, the output layer applies 'operand-context pointers,' inspired by the pointer networks (Vinyals et al., 2015). In the pointer networks, the output layer predicts the next token using attention over candidate vectors. The EPT collects candidate vectors for the next (i + 1)th Expression in three different ways depending on the source of operands: for the kth number in the problem, d k for the kth Expression output, E c (x) for a constant x (7) Then the EPT predicts the next jth operand a i+1,j , as follows. Let A ij be a matrix whose row vectors are such candidates. Then, the EPT predicts a i+1,j by computing attention of a query vector Q ij on a key matrix K ij , as follows. As the output layer is modified to predict an operator and its operands simultaneously, we also modified the loss function. We compute the loss of an Expression by summing up the loss of an operator and the loss of required arguments. All loss functions are computed using cross-entropy with the label smoothing approach (Szegedy et al., 2016). Metric and Datasets The metric for measuring the EPT model's performance is answer accuracy, which is the proportion Table 3: Characteristics of datasets used in the experiment of correctly answered problems over the entire set of problems. We regard a problem is correctly answered if a solution to the generated equations matches the correct answer without considering the order of answer-tuple, as in Kushman et al. (2014). To obtain a solution to the generated equations, we use SymPy (Meurer et al., 2017) at the end of the training phase. 
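As a minimal sketch of this answer check, the SymPy-based evaluation could look like the following. The "expression equals zero" string format, the function name, and the order-insensitive tuple comparison are illustrative assumptions; the paper only states that SymPy is used to solve the generated equations and compare the result against the correct answer tuple.

```python
# Hypothetical answer-accuracy check built on SymPy (names are illustrative).
from sympy import Eq, solve, sympify, symbols

def check_answer(equations, gold_answers, unknowns=("x_0", "x_1")):
    """equations: strings in '== 0' form, e.g. ["x_0 - 2*x_1 - 8", "x_0 + x_1 - 20"].
    gold_answers: the correct answer tuple, e.g. (16, 4).
    Returns True if a solution matches the gold tuple, ignoring tuple order."""
    syms = symbols(" ".join(unknowns))
    system = [Eq(sympify(eq), 0) for eq in equations]
    solutions = solve(system, list(syms), dict=True)
    for sol in solutions:
        values = sorted(float(v) for v in sol.values())
        if values == sorted(float(g) for g in gold_answers):
            return True
    return False

# Example from Table 1: x0 - 2*x1 = 8 and x0 + x1 = 20 give (16, 4).
print(check_answer(["x_0 - 2*x_1 - 8", "x_0 + x_1 - 20"], (16, 4)))  # True
```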
For the datasets, we use three publicly available English algebraic word problem datasets 4 : ALG514 (Kushman et al., 2014) Table 3. The high-complexity datasets, ALG514 and DRAW-1K, require more expressions and unknowns when solving the algebraic problems than the low-complexity dataset, MAWPS. For DRAW-1K, we report the accuracy of a model on the development and test set since training and development sets are provided. For the other two datasets -MAWPS and ALG514, -we report the average accuracy and standard error using 5-fold cross-validation. Baseline and ablated models We examine the performance of EPT against five existing state-of-the-art (SoTA) models. The five models are categorized into three types; model using hand-crafted features, pure neural models, and a hybrid of these two types. • Models using hand-crafted features use expertdefined input features without using a neural model: MixedSP . designed a model using a set of hand-crafted features similar to those used by Zhou et al. (2015). Using a data augmentation technique, they achieved the SoTA on ALG514 (83.0%) and DRAW-1K (59.5%). • Pure neural models take algebraic word problems as the raw input to a neural model and do not require the use of a rule-based model: CASS-RL (Huang et al., 2018) and T-MTDNN (Lee and Gweon, 2020 After examining the EPT model performance, we conducted an ablation study to analyze the effect of using two main components of EPT; Expression tokens and operand-context pointers. We compared three types of models to test each of the components: (1) the vanilla Transformer model, (2) the Transformer with Expression token model, which investigates the effect of using Expression tokens, and (3) the EPT, which investigates the effect of using pointers in addition to Expression tokens. Additional details on the input/output of the vanilla Transformer and the Transformer with Expression token models are provided in Appendix A. Implementation details The implementation details of EPT and its ablated models are as follows. To build encoder-decoder models, we used PyTorch 1.5 (Paszke et al., 2019). For the encoder, three different sizes of ALBERT models in the transformers library (Wolf et al., 2019) are used: albert-base-v2, albert-large-v2, and albert-xlarge-v2. We fixed the encoder's embedding matrix during the training since such fixation preserves the world knowledge embedded in the matrix and stabilizes the entire learning process. For the decoder, we stacked six decoder layers and shared the parameters across different layers to reduce memory usage. We set the dimension of input vector D as the same dimension of encoder hidden-state vectors. To train and evaluate the entire model, we used teacher forcing in the training phase and beam search with 3 beams in the evaluation phase. For the hyperparameters of the EPT, parameters follow the ALBERT model's parameters except for training epoch, batch size, warm-up epoch, and learning rate. First, for the training epoch T , a model is trained in 500, 500, and 100 epochs on ALG514, DRAW-1K, and MAWPS, respectively. For batch sizes, we used 2,048 (albert-base-v2 and albert-large-v2) and 1,024 (albert-xlarge-v2) in terms of Op or Expression tokens. To acquire a similar effect of using 4,096 tokens as a batch, we also employed gradient accumulation technique on two types of consecutive mini-batches; two (base and large) and four (xlarge). 
Then, for the warm-up epoch and learning rate, we conduct the grid-search algorithm for each pair of a dataset and the size of the ALBERT model. For the grid search, we set the sampling space as follows: {0.00125, 0.00176, 0.0025} for the learning rates and {0, 0.005T, 0.01T, 0.015T, 0.02T, 0.025T } for the warm-up. The resulting parameters are listed in Appendix B. During each grid search, we only use the following training/validation sets and keep other sets unseen: the fold-0 training/test split for ALG514 and MAWPS and the training/development set for DRAW-1K. For the unstated hyperparameters, the parameters follow those of the ALBERT. These parameters include the optimizer and warm-up scheduler; we used LAMB (You et al., 2019) optimizer with β 1 = 0.9, β 2 = 0.999, and = 10 −12 ; and we EPT (XL) -* 60.5 59.5 -* Note: [M] MixedSP, [C] CASS-RL, [T] T-MTDNN, [H] CASS-hybrid, [D] DNS. * Overfitted on some folds. employed linear decay with warm-up scheduling. All the experiment, including hyperparameter search, was conducted on a local computer with 64GB RAM and two GTX1080 Ti GPUs. Result and Discussion In section 5.1, we first present a comparison study, which examines the EPT's performance. Next, in section 5.2, we present an ablation study, which analyzes the two main components of EPT; Expression tokens and operand-context pointers. Comparison study As shown in Table 4, the performance of EPT is comparable or better in terms of performance accuracy compared to existing state-of-the-art (SoTA) models when tested on the three datasets of ALG514, DRAW-1K, and MAWPS. The fully automatic EPT model, which does not use handcrafted features, yields comparable performance to existing models using hand-crafted features. Specifically, on the ALG514 dataset, the EPT outperforms the best-performing pure neural model by about 40% and shows comparable performance accuracy to the SoTA model that uses hand-crafted features. On the DRAW-1K dataset, which is harder than ALG514 dataset, a similar performance trend to ALG514 is found. The EPT model outperforms the hybrid model by about 30% and achieved comparable accuracy to the SoTA model that uses hand-crafted features. On the MAWPS dataset, which is only tested on pure neural models in existing studies, the EPT achieves SoTA accuracy. One possible explanation for EPT's outstanding performance over the existing pure neural model is the use of operand's contextual information. Existing neural models solve algebraic word problems by using symbols to provide an abstraction of problem-dependent numbers or unknowns. For example, Figure 1 shows that existing methods used Op tokens, such as x 0 and N 1 . However, treating operands as symbols only reflects 2 out of 4 means in which symbols are used in humans' mathematical problem-solving procedures (Usiskin, 1999). The 4 means of symbol usage are; (1) generalizing common patterns, (2) representing unknowns in an equation, (3) indicating an argument of a function, and (4) replacing arbitrary marks. By applying template classification or machine learning techniques, (1) and (2) were successfully utilized in existing neural models. However, the existing neural models could not consider (3) and (4). Therefore, in our suggested EPT model, we dealt with (3) by using Expression tokens and (4) by using operand-context pointers. We suspect that the EPT's performance, which is comparable to existing models using hand-crafted features, comes from dealing with (3) and (4) explicitly when solving algebraic word problems. 
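Before turning to the ablation study, one implementation detail from the previous section is easy to make concrete: the warm-up/learning-rate grid searched for each pair of dataset and ALBERT size. The snippet below simply enumerates that grid; how each candidate is scored (training on the designated split and measuring accuracy on the held-out split named above) is not shown and the variable names are illustrative.

```python
from itertools import product

T = 500  # training epochs for ALG514 / DRAW-1K in the paper; 100 for MAWPS
learning_rates = [0.00125, 0.00176, 0.0025]
warmup_epochs = [0, 0.005 * T, 0.01 * T, 0.015 * T, 0.02 * T, 0.025 * T]

grid = list(product(learning_rates, warmup_epochs))
print(len(grid))  # 18 candidate (learning rate, warm-up) pairs per model size
```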
Ablation study From the ablation study, our data showed that the two components of generating 'Expression' token and applying operand-context pointer, each improved the accuracy of the EPT model in different ways. Specifically, as seen in Table 5, adding Expression token to the vanilla Transformer improved the performance accuracy by about 15% in ALG514 and DRAW-1K and about 1% in MAWPS. In addition, applying operand-context pointer to the Transformer with Expression token Case 1. Effect of using Expression tokens Problem The sum of two numbers is 90. Three times the smaller is 10 more than the larger. Find the larger number. Expected 3x 0 − x 1 = 10, Case 2. Effect of using pointers Problem A minor league baseball team plays 130 games in a season. If the team won 14 more than three times as many games as they lost, how many wins and losses did the team have? Expected Case 3. Comparative error Problem One number is 6 more than another. If the sum of the smaller number and 3 times the larger number is 34, find the two numbers. Expected Case 4. Temporal order error Problem The denominator of a fraction exceeds the numerator by 7. if the numerator is increased by three and the denominator increased by 5, the resulting fraction is equal to half. Find the original fraction. Expected Table 6 shows the result of an error analysis. The cases 1 and 2 show how the EPT model's two components contributed to performance improvement. In case 1, the vanilla Transformer yields an incorrect solution equation by incorrectly associating x 0 + x 1 and 3. However, using an Expression token, the explicit relationship between operator and operands is maintained, enabling the distinction between x 0 +x 1 and 3x 0 −x 1 . The case 2 example shows how adding an operand-context pointer can help distinguish between different expressions, in our example, x 0 , 130x 0 , and 14x 0 . As the operand-context pointer directly points to the contextual information of an operand, the EPT could utilize the relationship between unknown (x 0 ) and its multiples (130x 0 or 14x 0 ) without confusion. We observed that the existing pure neural model's performance on low-complexity dataset of MAWPS was relatively high at 78.9%, compared to that of high-complexity dataset of ALG514 (44.5%). Therefore, using Expression tokens and operand-context pointers contributed to higher performance when applied to high-complexity datasets of ALG514 and DRAW-1K, as shown in Table 5. We suspect two possible explanations for such a performance enhancement. First, using Expression tokens in highcomplexity datasets address the expression fragmentation issue when generating solution equations, which is more complex in ALG514 and DRAW-1K than MAWPS. Specifically, Table 3 shows that on average the number of unknowns in ALG514 and DRAW-1K is almost twice (1.82 and 1.75, respectively) than MAWPS (1.0). Similarly, the number of Op tokens is also twice in ALG514 and DRAW-1K (13.08 and 14.16, respectively) than that of MAWPS (6.20). As the expression fragmentation issue can arise for each token, probability of fragmentation issues' occurrence increases exponentially as the number of unknowns/Op tokens in a problem increases. Therefore, the vanilla Transformer model, which could not handle the fragmentation issue, yields low accuracy on high-complexity datasets. Second, using operand-context pointers in highcomplexity datasets addresses the operand-context separation issue when selecting an operand, which is more complex in ALG514 and DRAW-1K than MAWPS. 
Specifically, Table 3 shows that on average the amount of Expression tokens is also twice in ALG514 and DRAW-1K (7.45 and 7.95, respectively) than that of MAWPS (3.60). As numbers and Expression tokens are candidates for selecting an operand, probability of separation issues' occurrence increases linearly as the amount of numbers/Expressions in an equation increases. Since a Transformer with Expression token could not handle the separation issue, the model showed lower accuracy on high-complexity datasets. In addition to the correctly solved problem examples, Table 6 also shows cases 3 and 4, which were incorrectly answered by the EPT model. The erroneous examples can be categorized into two groups; 'Comparative' error and 'Temporal order' error. 'Comparative' occurs when an algebraic problem contains comparative phrases, such as '6 more than,' as in case 3. 49.3% of incorrectly solved problems contained comparatives. When generating solution equations for the comparative phrases, the order of arguments is a matter for an equation that contains non-commutative operators, such as subtractions or divisions. Therefore, errors occurred when the order of arguments for comparative phrases with non-commutative operators was mixed up. Another group of error is 'Temporal order' error that occurs when a problem contains phrases with temporal orders, such as 'the numerator is increased by three,' as in case 4. 44.5% of incorrectly solved problems contained temporal orders. We suspect that these problems occur when co-referencing is not handled correctly. In a word problem with temporal ordering, a same entity may have two or more numeric values that change over time. For example, in case 4, the denominator has two different values of x 1 and x 1 + 7. The EPT model failed to assign a same variable for the denominators. The model assigned x 0 in the former expression and x 1 in the latter. Conclusion In this study, we proposed a neural algebraic word problem solver, Expression-Pointer Transformer (EPT), and examined its characteristics. We designed EPT to address two issues: expression fragmentation and operand-context separation. The EPT resolves the expression fragmentation issue by generating 'Expression' tokens, which simultaneously generate an operator and required operands. In addition, the EPT resolves the operand-context separation issue by applying operand-context pointers. Our work is meaningful in that we demonstrated a possibility for alleviating the costly procedure of devising hand-crafted features in the domain of solving algebraic word problems. As future work, we plan to generalize the EPT to other datasets, including non-English word problems or non-algebraic domains in math, to extend our model. A Input/output of ablation models In this section, we describe how we compute the input and output of the two ablation models: (1) a vanilla Transformer and (2) a vanilla Transformer with 'Expression' tokens. Figure 3 shows the two models. The first ablation model is a vanilla Transformer. The model generates an 'Op' token sequence and does not use operand-context pointers. The model manages an 'Op' token vocabulary that contains operators, constants, variables, and number placeholders (e.g., N 0 ). So the input of this model's decoder only utilizes a look-up table for embedding vectors. For the decoder's output, the vanilla Transformer uses a feed-forward softmax layer to output the probability of selecting an Op token. 
In summary, the input vector $v_i$ of a token $t_i$ and the output $t_{i+1}$ can be computed as follows:
$v_i = \mathrm{LN}_{\mathrm{in}}\left(c_{\mathrm{in}} E_{\mathrm{in}}(t_i) + \mathrm{PE}(i)\right)$, (11)
$t_{i+1} = \arg\max_t \, \sigma\left(\mathrm{FF}_{\mathrm{out}}(d_i)\right)_t$. (12)
The second ablation model is a vanilla Transformer model that uses 'Expression' tokens as a unit of input/output. This model generates an 'Expression' token sequence but does not apply operand-context pointers. Instead of using operand-context pointers, this model uses an operand vocabulary that contains constants, placeholders for numbers, and placeholders of previous Expression token results (e.g., $R_0$). The input of this model's decoder is similar to that of the EPT's decoder, but we replaced equations 3 and 5 with the following formulae:
$a_{ij} = \mathrm{LN}_a\left(c_a u_{\mathrm{num}} + E_c(a_{ij})\right)$, (13)
$a_{ij} = \mathrm{LN}_a\left(c_a u_{\mathrm{expr}} + E_c(a_{ij})\right)$. (14)
For the output of this model's decoder, we used a feed-forward softmax layer to output the probability of selecting an operand. Since the softmax output can select an unavailable operand, we set the probability of such unavailable tokens to zero to mask them. So, we replace equation 10 with the following formula:
$a_{i+1,j} = \arg\max_a \, \sigma\left(a \mid M(\mathrm{FF}_j(d_i))\right)$, (15)
where $M$ is a masking function that sets zero probability on unavailable tokens when generating the $i$th Op token. The other unstated equations 1, 2, 4, and 6 remain the same. Table 7 shows the best parameters and performances on the development set, which are found using grid search.
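One common way to realize the masking function $M$ of equation (15) is to suppress the scores of unavailable candidates before the softmax, which yields the same argmax selection as zeroing their probabilities afterwards. The sketch below uses made-up scores and an availability mask purely for illustration.

```python
import numpy as np

def masked_pick(scores, available):
    """Pick the highest-scoring candidate among the available ones."""
    scores = np.where(available, scores, -np.inf)  # forbid unavailable slots
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                           # softmax over valid slots only
    return int(np.argmax(probs)), probs

scores = np.array([1.2, 3.5, 0.1, 2.8])            # logits for four candidates
available = np.array([True, False, True, True])    # candidate 1 not yet generated
idx, probs = masked_pick(scores, available)
print(idx)       # 3 -> the highest-scoring *available* candidate
print(probs[1])  # 0.0 -> the masked candidate receives zero probability
```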
2020-11-06T22:09:23.233Z
2020-11-01T00:00:00.000
{ "year": 2020, "sha1": "3938fe72ccfe4fe92387258874cb1cbe66194d4f", "oa_license": "CCBY", "oa_url": "https://www.aclweb.org/anthology/2020.emnlp-main.308.pdf", "oa_status": "HYBRID", "pdf_src": "ACL", "pdf_hash": "3938fe72ccfe4fe92387258874cb1cbe66194d4f", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science" ] }
252683975
pes2o/s2orc
v3-fos-license
SDC-based Resource Constrained Scheduling for Quantum Control Architectures Instruction scheduling is a key transformation in backend compilers that take an untimed description of an algorithm and assigns time slots to the algorithm's instructions so that they can be executed as efficiently as possible while taking into account the target processor limitations, such as the amount of computational units available. For example, for a superconducting quantum processor these restrictions include the amount of analogue instruments available to play the waveforms to drive the qubit rotations or on-chip connectivity between qubits. Current small-scale quantum processors contain only a few qubits; therefore, it is feasible to drive qubits individually albeit not scalable. Consequently, for NISQ and beyond NISQ devices, it is expected that classical instrument sharing to be designed in the future quantum control architectures where several qubits are connected to an instrument and multiplexing is used to activate only the qubits performing the same quantum operation at a time. Existing quantum scheduling algorithms either rely on ILP formulations, which do not scale well, or use heuristic based algorithms such as list scheduling which are not versatile enough to deal with quantum requirements such as scheduling with exact relative timing constraints between instructions, situation that might occur when decomposing complex instructions into native ones and requiring to keep a fixed timing between the primitive ones to guarantee correctness. In this paper, we propose a novel resource constrained scheduling algorithm that is based on the SDC formulation, which is the state-of-the-art algorithm used in the reconfigurable computing. We evaluate it against a list scheduler and describe the benefits of the proposed approach. We find that the SDC-based scheduling is not only able to find better schedules but also model flexible relative timing constraints. INTRODUCTION Quantum technology promises to boost the computing capabilities available today by orders of magnitude, which will revolutionize key application domains and give birth to new ones that will drive the human evolution for decades to come. However, because of the novelty of the computing approach that completely differs from any existing classical computing technology, we require a holistic research and development agenda in which everything from the lowest physical level, the qubit, to the highest application level needs to be (re)invented. A key component in the quantum full stack is the (backend) compiler. The main objective of a compiler is to efficiently translate a quantum high-level algorithm into an optimal quantum circuit that can be executed correctly on a quantum chip. For example, quantum circuits, which are composed of a series of quantum operations called gates, require scheduling to ensure all the gates of the quantum algorithm and their dependencies are satisfied while making sure that the resource limitations of the target quantum chip are taken into consideration. Moreover, in the context of quantum compilation, where short decoherence times are an additional burden [1], the availability of a performant scheduling algorithm can be the difference between a correctly executing quantum algorithm and a completely useless circuit. 
At the same time, the number of qubits available in current quantum processors is low (at the moment of publication the largest device is IBM Eagle [2] with 127 qubits), which implies that qubits can be driven individually and independently of other qubits because in these early quantum computing chips the quantum control electronics are not shared. Nevertheless, this simplistic approach is not realistic for scalable quantum computing systems starting with several hundreds to thousands of qubits, such as the IBM System Two quantum architecture [3], that will require multiplexing qubit control wires. Consequently, developing resource constrained scheduling algorithms is key to the success of these future architectures. However, existing approaches for current quantum computing deal with the scheduling problem in a trivial manner by mostly ignoring architectural limitations and resort to scheduling the quantum algorithm in an as soon as possible (ASAP) style, where only the program dependencies are taken into consideration. One of the few compiler frameworks for quantum computation that addresses the limitations mentioned above is OpenQL [4], which includes a backend list scheduling compiler pass [5] for the Surface-17 superconducting quantum processor [6]. However, the problem with using list scheduling in a quantum compiler backed is that it is not versatile enough to model quantum gate decomposition [7] and satisfy the relative timings finer quantum operations should obey with respect to one another after decomposition, e.g., performing a flux operation while at the same time parking other qubits, or ensuring fixed timing that might be required for feedback control or error detection and correction. The alternative to use a fully specified integer linear programming (ILP) formulation would solve the above ver-satility problem, albeit not being scalable. Consequently, a scheduling algorithm that provides a balanced trade-off between versatility and scalability was proposed. This scheduling algorithm is based on a system of difference constraints (SDC) [8] formulation stemming from ILP and is the stateof-the-art in high-level synthesis compilers [9] used in classical reconfigurable computing. However, SDC scheduling with resource constraints in a quantum context is not optimal due to the way quantum resources are shared, i.e., one quantum instrument can perform multiple quantum gates of the same type at once similar to classical vector processing units. In this paper, we propose a novel quantum resource constraint scheduling algorithm based on SDC (QSDC) to generate efficient quantum schedules when quantum resources are shared. Concretely, the novelties of this paper are: • We develop a novel resource constraint scheduling algorithm based on the SDC formulation and integrate it into the OpenQL compiler framework. • We provide a comprehensive analysis of the advantages of our proposed algorithm when compared with the current list scheduling algorithm available in OpenQL. The paper is organized as follows. First, section 2 presents the necessary background, including SDC preliminaries and related works. Then, in section 3 we present the QSDC algorithm. Section 4 describes the experimental results. Finally, section 5 summarizes the paper and highlights future work. 
BACKGROUND In this section, we introduce first the underlying concepts of quantum computation, then we present the instruction scheduling problem based on the system of difference constraints formulation, and finally, we review the scheduling state-of-the-art for quantum compilers. Quantum Computing and Resources Quantum computing requires a radically novel approach to the developing of processors and compilers, the hardware and software building blocks of any computing system. The main reason for the requirement of new processor design techniques and novel compiler algorithms is due to the switch to implementing quantum mechanics operations rather than the classical approach of performing Boolean logic arithmetic. While processing units enabling Boolean logic operations can be implemented fully in the digital domain by mature EDA techniques using large-scale integrated circuits, quantum mechanics requires the integration of analogue instruments that drive the basic quantum computational unit, the qubit, by generating waveforms to instruct which quantum gate has to be performed on a qubit. Table 1 summarizes the differences between the classical and quantum basic computational concepts and processor micro-architecture functional units. The major difference stems from the requirement to implement quantum mechanics that requires driving analogue devices, e.g., an Arbitrary Waveform Generator (AWG) to play different waveforms corresponding to particular quantum gates (e.g., an X gate, as opposed to an arithmetic operation performed by an Arithmetic-Logic Unig (ALU) in a classical processor) controlled by general-purpose Control Units (CUs) that keep track of which quantum gates have to be performed at a given time step according to the quantum algorithm. Consequently, due to the digital-analog domain crossing required in the design of a quantum processor micro-architectures, the sharing of AWGs is key to the success of developing scalable quantum computing systems. For example, Figure 1 shows the Surface-17 "schematic of the targeted realization of Surface-17 in a planar cQED architecture with vertical I/O. Every transmon (represented by a circle) has dedicated flux control line, microwave-drive line, and readout resonator. Dedicated bus resonators mediate interactions between nearest-neighbor data and ancilla qubits. Readout resonators are simultaneously interrogated using frequency-division multiplexing in diagonally-running feedlines [6]. In the current S-17 configuration, qubits colored the same are connected to the same microwave-drive line and controlled by the same AWG instrument. For example, qubits 8, 9, and 10 are driven by a single AWG. In this work, we use this target processor with the instrument connections depicted in Figure 1. However, the work can be easily retartgeted by modifying the instrument sec-tion in the OpenQL's platform configuration file, as shown in Listing 1. qwgs section describe the connections for the single-qubit rotation gates (instructions of 'mw' type) that are controlled by AWGs. Each qwg controls a private set of qubits, enumerated in the connection map. A qwg can control multiple qubits at the same time, but only when they perform the same gate and started at the same time. There are 'count' qwgs. For each qwg it is described which set of qubits it controls as configured for the S17 quantum device. Additionally, single-qubit measurements (instructions of 'readout' type) are controlled by measurement units. Each one controls a private set of qubits. 
A measurement unit can control multiple qubits at the same time, but only when they start at the same time. There are 'count' measurement units and for each measurement unit it is described which set of qubits it controls. Sections describing the available instructions and the chip topology information related to the connectivity of the device are left out for space reasons. OpenQL [4] is a quantum compiler framework developed by Qutech, depicted graphically in Figure 2. It is a modular and retargetable framework, as it allows new compiler passes to be added easily and code to be generated for different quantum devices and technologies by simply describing the chip architecture in a new platform configuration file [5], illustrated above. OpenQL currently supports ASAP and ALAP scheduling, and resource constrained list scheduling. Furthermore, quantum programs can be written either in C++ or using the python API. It can generate both simulatable cQASM code [10] for the QX simulator [11] and quantum micro-code (CC-micro) for Qutech's Central Controller [12].
Figure 2: OpenQL Quantum Compiler Framework Overview.
SDC Preliminaries Instruction scheduling is a central problem in compilers. Informally, the instruction scheduling problem can be formulated as finding the best (i.e., usually the one that runs the fastest) sequence of instructions that minimizes the execution time of an algorithm given different constraints, such as the available resources existent in the target processor. Formally, the scheduling problem can be defined as an assignment of execution slots to each instruction (i.e., the time when the instruction is active) so that all program and platform dependencies are taken into account. For example, using integer linear programming (ILP) the following constraint has to be imposed on the scheduling variables defined for each instruction so that a valid schedule is produced: $\sum_{c=1}^{m} x_{i,c} = 1$, (1) where it is assumed an $m$ clock-cycle schedule for each of the $i$ instructions in the program and $x_{i,c}$ is the binary scheduling variable indicating that instruction $i$ is active in cycle $c$. According to [13], equation (1) is called an appearance constraint and is needed to ensure one instruction will only be executed in exactly one cycle. Several other constraints have to be formulated in a similar manner as in (1) to solve the scheduling problem. Although this formal mathematical specification is optimal, giving the best Quality-of-Results (QoR), the difficulty of solving it for large problems (i.e., the scheduling problem under resource constraints is known to be NP-hard) has led to alternative scheduling algorithms based on heuristics. One well-known algorithm is list scheduling [14], which uses a ready list of instructions and sorts them in increasing order of some predefined priority to select the next node to be scheduled. By using this ready list of instructions, the algorithm reduces the search space, which in turn increases the scalability of the algorithm at the risk of obtaining a local-minimum solution, thereby degrading the QoR of the obtained schedule. As a compromise between the two orthogonal features, i.e., runtime vs.
QoR, a more versatile scheduling heuristic has been proposed in [8] based on a system of integer difference constraints (SDC). This heuristic is rooted in ILP, however, due to the intelligent way of encoding the scheduling variables and instruction constraints, which are defined as a linear system of inequalities where the underlying matrix is totally unimodular, the scheduling formulation can be solved using a linear programming relaxation that generates optimal integer solutions in polynomial time. Therefore, SDC can find better schedules than list scheduling while being faster than the fully specified ILP problem. Furthermore, contrary to list scheduling, in the SDC formulation we can easily specify relative timing constraints, which are very important in quantum computing because it is often necessary to guarantee an exact latency distance between two operations that would avoid for example the situation where the qubits will decohere. Using the terminology in [8], equations (2) and (3) highlight the inequalities needed to specify an exact timing constraint between two operations a and b: , where sv beg is the first cycle scheduling variable associated with an instruction and l ab is the number of clock cycles between a and b. It is worth noting that several other constraints can be specified using the SDC formalism, all of which can be found in [8]. Related Works Several quantum compilation frameworks have been developed over the last decade and in this part we will highlight some of them focusing on the available scheduling algorithm and target platform supported as a differentiating factor from this work. One of the first compilers developed was the ScaffCC [15] compiler for the Scaffold programming language. Developed initially by the Princeton University, in collaboration with IBM T.J. Watson and University of Santa Barbara, the compiler was build using the LLVM compiler infrastructure [16] and offered several advanced compiler transformations, such as the RKQC, the reversible logic circuitry toolkit for quantum computation, and the Longest-Path-First-Schedule (LPFS) scheduling compiler pass. Furthermore, another important characteristic of ScaffCC is that it is able to generate OpenQASM v2.0 [17] and cQASM v1.0 [10] quantum assembly languages. However, ScaffCC can be considered only a front-end compiler because it does not support a target quantum processor, rather it defers this compilation process to a backend-compiler, available for example in IBM's QisKit runtime [], that knows the resource limitation of that particular quantum device. Consequently, the LPFS scheduler in ScaffCC is just a variant of an As-Soon-As-Possible (ASAP) scheduler that does not consider any resource constraints. Qiskit [18] is another compilation framework developed by IBM. The software development kit is written in python for fast prototyping and uses a list scheduler for the different backends it supports. However, due to the limited amount of qubits available in the early quantum devices, i.e., up to 127 qubits available in the IBM Eagle that was based on the System One architecture, there was no multiplexing of the control wires of each qubits. Consequently, there was no need for advanced scheduling algorithms that optimize the circuit under resource sharing incurred by control wire stemming from multiplexing qubit control wires as required for developing scalable quantum control architectures starting with IBM System Two quantum architecture [3]. 
Therefore, developing resource constrained scheduling algorithm is key to the success of these future architectures. Other compilers, such as Qcor [19] and t|ket [20], are suffering the same drawbacks as the compilers described above, namely they are focusing on front-end compilation tasks and defer backend target compilation to quantum device providers and their backend runtime software, e.g., via IBM Quantum Experience. The main limitation of this approach is, as previously mentioned, that due to current small size of existing quantum devices the scheduling algorithms used were mostly based on basic ASAP style of scheduling. Contrary to this state-of-the-art in quantum circuit scheduling, we focus on the resource constraint scheduling problem for future quantum processors, such as those based on IBM's System Two architecture, that will include sharing of quantum control electronics. We integrate our proposed QSDC scheduling algorithm into OpenQL compiler framework and target the Surface-17 chip that has a scalable architecture by sharing its control electronics as described in section 2.1. QSDC SCHEDULING ALGORITHM In this section, we will describe the QSDC scheduling algorithm at the hand of a simple example shown in Algorithm 1. The quantum circuit is written using OpenQL's python API and is composed of four gates X,Y,X, and Z that operate on three qubits 2,3, and 4. Recall that according to the S-17 instrument connections the qubits involved are driven by the same AWG with id 0 (see Figure 1 and Listing 1). ate a control and data dependency graph(CDFG) out of the program instructions that fulfill the dependencies of the program code, e.g., the abstract operations X and Z should happen exactly in the order they appear in the code. Please note that we call these operations abstract because at this point the compiler does not know that the quantum gates X and Z operating on qubit 2 are commutative. The (C)DFG for our example is shown in Figure 4a. Alongside we show the SDC formulation when the chosen scheduling option is As-Soon-As-Possible(ASAP) without considering any resource constraints. When we schedule in this way, we see that a state transition graph(STG) is created in which all the independent operations are scheduled in parallel in the first state. That is, the first cycle 0 is assigned to the scheduling variables (s1, s2, and s3) belonging to the v1 to v4 graph nodes, while cycle 1 is assigned to s4 of the Z2's v4 node. Before we continue with the formulation for the resource constrained scheduling problem, we recall that the scheduling problem with resource constraints is known to be an NPhard problem. Therefore, to solve large problems, we rely on heuristics such as list scheduling or SDC, which also cannot model exactly the resource constraints. Consequently, the heuristic in SDC is to use a sorting algorithm to create a linear order of the CDFG nodes and then to use this order to add constraints based on the resources types and counts. In the best mode of operation known for SDC, as explained in [6], this includes sorting the nodes in ascending order using an As-Late-As-Possible (ALAP) primary key with the ASAP key as a tiebreaker when a couple of nodes have the same ALAP cycle. However, this is not optimal for quantum computing as illustrated in Figure 4b. 
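As a concrete sketch, the linear order produced by this sorting heuristic for the four-gate example can be reproduced as follows. The ASAP/ALAP cycle values are inferred from the example's dependencies (X2 must precede Z2 on qubit 2) rather than taken from the paper's figures.

```python
# Linear ordering used before adding resource constraints: sort by ALAP cycle,
# tie-break with the ASAP cycle. Cycle values below are illustrative.
asap = {"X2": 0, "Y3": 0, "X4": 0, "Z2": 1}
alap = {"X2": 0, "Y3": 1, "X4": 1, "Z2": 1}

# Ties (here Y3 vs. X4) simply keep their original order.
linear_order = sorted(asap, key=lambda n: (alap[n], asap[n]))
print(linear_order)  # ['X2', 'Y3', 'X4', 'Z2']
```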
Using the X2, Y3, X4, Z2 linear order created by the aforementioned sorting heuristic and considering that an AWG can perform multiple operations in parallel only if they are the same type, a resource constraint between Y3 and X4 has to be imposed such that X4 starts at least 1 cycle after Y3 finishes because they both require the same AWG-0. Similarly, two other resource constraints have to be added between <v1 and v2> and <v3 and v4>, leading to a sub-optimal schedule in which all 4 nodes execute individually on the AWG requiring 4 cycles to run the whole quantum circuit. However, if we consider that an AWG can control multiple qubits at the same time when they perform the same gate started at the same time, we observe that the X2 and X4 quantum gates could have been scheduled in parallel because stacking is possible in a quantum context. This observation led us to introduce a new heuristic that we call QSDC, which can be combined with the default resource constraint linear order sorting heuristic to generate a more optimized STG that takes into consideration the quantum resource features. QSDC enables stacking of quantum operations on the same quantum resource whenever possible, as shown in Figure 4c. To track the free stacking space an instrument has, we introduce the concept of a running instruction that tracks if and how many instructions are assigned to which instruments and what is the maximum allowed stacking. Whenever we schedule an instruction we refer to the last running instruction of its type and anchor to it by adding a resource dependency of ≤ 0 from its scheduling variable to the running instruction's variable, instead of the previously added ≤ −1 dependency to the last instruction of a different type. The full QSDC RC scheduling algorithm is given in Algorithms 2 and 3. In the first algorithm, the overall main loop is shown to iterate through the resource types (e.g., AWG or readout units) of the platform and for each type it selects each instance (e.g., 0, 1, or 2 for an AWG type), shown in lines 1-2. Then, the CDFG nodes using a qubit that is connected to the selected instrument and that performs a quantum gate supported by that instrument are added to a new list (lines [3][4][5][6][7][8][9][10][11][12][13]. If any instructions are found, then the method to add resource constraints for that instruments is called (lines [14][15][16][17]. This method is depicted in the second algorithm. Here, for each new instruction to be scheduled (line 3), two main cases are considered. First, if the instruction type is already running (lines 4-6), then a ≤ 0 inequality is added between the running instruction and this instruction (lines 7-11), unless the maximum stacking was reached, in which case a ≤ −1 inequality is added and the counters are reset to 1 (lines [12][13][14][15][16]. Second, if the instruction is not running, then the last instruction of other type is found (line 25) and a ≤ −1 inequality is added to the scheduling formulation (line 26-29). In the case no running instruction is there (line 19), we simply initialize the running instruction list with this instruction (line 21) and set the last instruction see to this (line 30). if (runInstr! 
= runInstructions.end()) then 6: instruction type is already running 7: EXPERIMENTAL RESULTS In this section, we evaluate the QSDC algorithm by compiling ten circuits from [22] for the superconducting processor Surface-17 that shares several control electronics among the 17 qubits for different types of quantum operations, limitations described in section 2.1 and highlighted in Figure 1. This sharing is translated into the resource constraints described in the platform configuration file used by the OpenQL compiler (see Listing 1). We implemented the QSDC compiler into the release version 0.8.1 of OpenQL and compare it against the default existing resource constrained list scheduling algorithm. Furthermore, because the list scheduling pass was recently updated in OpenQL version 0.10.5, the latest version at the time of writing, we compare against this as well. Please note that several bug fixes were done from version 0.8.1 to 0.10.5, which increased the overall latency of the compiled benchmarks. We compiled and run the benchmarks in an Ubuntu 20.04.5 LTS installation running on an eight-core eight-thread Intel(R) Core(TM) i7-9700 processor @ 3.00 GHz with 16 GB of memory. Benchmarks The ten benchmarks randomly selected from [22] are described in Table 2, which lists the initial number of qubits, gates, and CNOTs as specified in the original, not scheduled, circuit. To compile and schedule these circuits so that they can correctly execute on the S-17 quantum processor, the following compiler passes are selected, executed in the listed order: Clifford gate optimizer, Qmap, Clifford gate optimizer, and RCsched, which was configured both as a list scheduling and qsdc for the experiments. For the Qmap pass the minextendrc option was set, while the platform configuration file did not include the gate decomposition section. It is important to realize that each of these passes modify the original specification to optimize it as well as to solve connectivity limitations (i.e., by Qmap), which introduce additional gates to move the qubits into adjacent positions before the actual CNOTs gates can be executed. For example, the Qmap compiler pass increases the number of quantum operation and therefore the latency of the circuit. The QSDC as well as the resource constraint scheduler are invoked after the qmap has generated a topologically correct circuit. CONCLUSIONS AND FUTURE WORK In this paper, we proposed a novel resource constrained scheduling algorithm that is based on the SDC formulation, which is the state-of-the-art algorithm used in the reconfigurable computing. Concretely, we have proposed a new selection heuristic to select the operation nodes in a quantum efficient way to account for the quantum context where quantum gates can be stacked on the same instrument when they are the same type. Furthermore, we have validated our work by integrating the QSDC algorithm into the OpenQL compiler framework and evaluated it using ten quantum algorithms. The experimental results showed the benefits of QSDC against the default OpenQL list scheduling, where an average of 10% speedup was observed. In future work, we will analyze and demonstrate the benefits of controlling rel-ative timing constraints between quantum operations. However, before that can be showed, support for integrating flexible conditions between quantum gates have to be added at the data-dependecy graph in OpenQL.
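As a supplementary illustration of the resource-constraint step described in Section 3, the following condensed sketch paraphrases the stacking heuristic of Algorithms 2 and 3 for a single instrument. The data structures, the constraint encoding (a triple (u, v, bound) meaning s[u] - s[v] <= bound), and the gate list are illustrative assumptions, not OpenQL's actual implementation.

```python
def qsdc_resource_constraints(ordered_gates, max_stack):
    """ordered_gates: list of (gate_type, sdc_var) for one instrument, in the
    linear order produced by the ALAP/ASAP sort. Returns difference
    constraints (u, v, bound), meaning s[u] - s[v] <= bound."""
    constraints = []
    running = {}     # gate_type -> [anchor_var, how_many_stacked]
    last_var = None  # last gate that opened a new time slot on this instrument
    for gate_type, var in ordered_gates:
        anchor = running.get(gate_type)
        if anchor and anchor[1] < max_stack:
            # Same gate type is already running: allow a shared start cycle.
            constraints.append((anchor[0], var, 0))
            anchor[1] += 1
        else:
            # Different type (or stacking limit reached): start strictly later.
            if last_var is not None:
                constraints.append((last_var, var, -1))
            running[gate_type] = [var, 1]
            last_var = var
    return constraints

# Four single-qubit gates driven by the same AWG, as in the running example.
gates = [("x", "s_X2"), ("y", "s_Y3"), ("x", "s_X4"), ("z", "s_Z2")]
for c in qsdc_resource_constraints(gates, max_stack=3):
    print(c)
# ('s_X2', 's_Y3', -1)  -> Y3 starts at least one cycle after X2
# ('s_X2', 's_X4', 0)   -> X4 may share a start cycle with X2 (stacked)
# ('s_Y3', 's_Z2', -1)  -> Z2 starts at least one cycle after Y3
```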
2022-10-04T06:42:08.599Z
2022-10-03T00:00:00.000
{ "year": 2022, "sha1": "b9605d19f673aec225cd75a845d655492b87552b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "b9605d19f673aec225cd75a845d655492b87552b", "s2fieldsofstudy": [ "Physics", "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Physics" ] }
266451229
pes2o/s2orc
v3-fos-license
An Unusual Presentation of an Ectopic Mandibular Third Molar in the Condylar Region: A Case Report and Review of Literature Abstract Rationale: Ectopic teeth arise from developmental abnormalities, pathological conditions or iatrogenic factors. They can be supernumerary, deciduous or permanent and cause dental and facial pain, swelling and infection. Limited cases reveal limited knowledge about causes, symptoms, treatment options and surgical procedures. A thorough evaluation, including radiographic imaging and clinical examination, aids diagnosis and treatment planning. Patient Concern: A 54-year-old female patient complains of tooth mobility in the upper right back tooth region for one month and occasional pain in the right pre-auricular region. Diagnosis: Chronic generalised periodontitis with an impacted tooth in the right condylar region. Intervention: Extraction of Grade III mobile 17 and conservative treatment for ectopic molar in the condylar region. Outcome: The patient is on regular follow-up with no similar complaints. Take-away Lesson: A personalised approach is crucial in managing ectopic mandibular third molars and should take into account the patient's symptoms, preferences and potential complications. Successful treatment requires informed decision-making and thorough evaluation. She was advised an orthopantomogram. A cone-beam computed tomography was not done due to financial reasons. Then, the patient was referred to the Department of Oral and Maxillofacial Surgery for needful treatment; radiographic evaluation shows an ectopic third molar in the right condylar region in an inverted position with a radiolucent image around the third molar crown and generalised bone loss present [Figure 1]. A continuation of the radiolucent image displayed towards the retromolar trigone simulates an abnormal eruption path. The patient was informed about the ectopic position of the third molar in the right condylar region. The merits and demerits of the two options, leaving the tooth in place or removing the tooth with condylectomy followed by reconstruction of the condyle, were explained to the patient. The patient was told that leaving the tooth in place avoids removal of the condyle and reduces the risk of a future pathological fracture of the mandible as well as of permanent nerve damage; conversely, leaving the tooth in place carries a chance of subsequent severe infection, which removal would help to prevent. The patient opted for leaving the tooth in place for now and did not want to undergo surgical treatment, so an analgesic was given to relieve pain and the patient was kept on biannual follow-up. Extraction of 17 was done under local anaesthesia without any complication, and antibiotics and analgesics were given for the same. Discussion There were a total of nine individuals who had an ectopic third molar in the mandibular condyle, including the case of the author, which was taken from the literature [Table 1]. Of the nine cases reported in this study, six were women, indicating a higher prevalence of ectopic mandibular third molars in females aged 28-68 years. One patient had bilateral ectopic mandibular third molars, [5] with one in the condyle and the other in the mandibular ramus. The most common symptoms reported were pain and swelling on the ipsilateral side of the mandible or pre-auricular region, trismus, difficulty in mastication, cutaneous fistulae and temporomandibular joint (TMJ) dysfunction.
[1] Asymptomatic ectopic cases have also been reported. Conservative management may be appropriate in such cases, while surgical intervention may be necessary to prevent complications. Further research is needed to better understand the aetiology and management of ectopic wisdom teeth. [1] The choice of approach depends on the size and position of the tooth, the presence or absence of symptoms, the patient's anatomy and the surgeon's experience. Intraoral access is preferred for small and superficially located ectopic teeth, while extra-oral access is preferred for deeply located ectopic teeth. Endoscopic access has advantages in terms of visualisation and minimal invasiveness but may not be suitable for all cases. [3] In cases where the tooth is associated with a dentigerous cyst, [9] enucleation of the cyst is also necessary.
The extraoral approach [10] is used for ectopic third molars in the condylar region, providing better visibility and a lower risk of injury to adjacent structures. However, it also increases surgical morbidity, such as scarring and numbness. On the other hand, the intraoral approach involves incisions in the oral mucosa and dissection through soft tissue and bone to access the tooth. It is associated with lower morbidity but may not provide sufficient access in all cases, especially for ectopic third molars in the condylar region. The anteroparotid-transmasseteric approach is less commonly used and is reserved for more complex cases or when submandibular or retromandibular approaches are not feasible. The choice of surgical approach should be based on the individual patient's anatomy, the location of the ectopic tooth and the surgeon's experience and preference. In some cases, a combination of approaches may be necessary for optimal access and visualisation. It is crucial to discuss complications with the patient during the informed consent process. The extraoral technique has the following disadvantages: (i) a cutaneous scar, (ii) a danger of facial nerve lesion, (iii) a risk of TMJ lesion, (iv) a risk of salivary fistula and sialocele and (v) a risk of chronic cutaneous fistulas. [7]
Table 1: Ectopic mandibular third molars in the condylar region reported in literature
Intraoperative complications can occur during surgical removal of ectopic mandibular third molars, including nerve injury, TMJ injury and aesthetic concerns. Pre-operative planning, imaging and surgical techniques are crucial to minimise these risks. Nerve injury, particularly of the inferior alveolar nerve, can cause numbness or paraesthesia in the lower lip and chin. TMJ injury can lead to pain, dysfunction and limited mouth opening. Proper pre-operative evaluation and imaging studies can help identify patients at higher risk for TMJ injury. [5]
Conclusion
An ectopic mandibular third molar is an uncommon clinical condition with vague symptoms. Its position in the mandible is intimately connected to its clinical appearance and determines the surgical strategy. While treating this unusual condition, surgeons must carefully consider the advantages, potential dangers and consequences.
Ectopic teeth are caused by developmental abnormalities, pathological conditions or iatrogenic factors. Theories include aberrant eruption, trauma and ectopic formation of tooth nuclei. Ectopic mandibular third molars are uncommon in dental practice and have no standardised classification. They can cause dental and facial pain, swelling and infection. A thorough evaluation, including radiographic imaging and clinical examination, can aid in the diagnosis and treatment planning.
2023-12-22T16:22:17.762Z
2023-07-01T00:00:00.000
{ "year": 2023, "sha1": "3d30feca38aa092c9330148a57abc4c086fb0138", "oa_license": "CCBYNCSA", "oa_url": "https://journals.lww.com/aoms/fulltext/9900/an_unusual_presentation_of_an_ectopic_mandibular.17.aspx", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "1d0d7a4fd9e04459ae86f4bcc9e019a65e9c6f63", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
270002458
pes2o/s2orc
v3-fos-license
Evaluation of the knowledge of and attitudes towards pharmacovigilance among healthcare students in China: a cross-sectional study Background Knowledge of pharmacovigilance (PV) and adverse drug reactions (ADRs) are the core competencies that healthcare students should acquire during their studies. The objective of this study was to assess attitudes towards and knowledge of PV and ADRs among healthcare students in China. Methods An online, cross-sectional survey was conducted nationally among healthcare students in China from April through October 2023. Knowledge of PV and ADRs was assessed using a questionnaire based on current PV guidelines. We performed logistic regression analysis to determine the potential factors related to knowledge of and attitudes towards PV and ADRs. Results A total of 345 students were included in the analysis. Among the healthcare students who participated in the survey, 225 (65.22%) students correctly defined PV, while only 68 (19.71%) had a correct understanding of ADRs. Among all respondents included in the analysis, only 71 (20.58%) reported having taken a PV course. Pharmacy students were more likely to have taken PV courses at a university and to demonstrate superior knowledge compared to other healthcare students. The logistic regression model revealed that the significant predictors of a higher level of PV knowledge were being female (odds ratio [OR]: 1.76; 95% confidence interval (CI): 1.06–2.92; P value: 0.028) and having previously taken PV-related courses (OR: 2.00; 95% CI: 1.06–3.80; P value: 0.034). Conclusions This study revealed that healthcare students’ knowledge of PV and ADRs is unsatisfactory. However, there were a limited number of universities providing PV education. Given the vital role of healthcare professionals in identifying and reporting ADRs, our findings raise significant concerns. Hence, more efforts should be made to enhance PV education for future healthcare professionals. Supplementary Information The online version contains supplementary material available at 10.1186/s12909-024-05561-5. 
Evaluation of the knowledge of and attitudes towards pharmacovigilance among healthcare students in China: a cross-sectional study Yan Zhao 1 † , Lei Yang 2 † , Ruijie Tan 3 and Jing Yuan 3* † Background Pharmacovigilance (PV)-practices related to detecting, monitoring, understanding, and preventing adverse events [1]-is crucial for ensuring drug safety [2].Although adverse drug events (ADEs) can potentially be detected in premarketing clinical trials, there are noticeable limitations, including narrow patient selection criteria, small sample sizes, and short follow-up periods, which make it nearly impossible to identify all events associated with a drug, particularly rare adverse drug reactions (ADRs) [3,4].In fact, rare (1 in 1000) and very rare (1 in 10,000) ADRs are more likely to be reported after the obtainment of market authorization [5].Hence, strengthening postmarketing PV activities, including the spontaneous reporting of suspected ADRs, is necessary.Spontaneous reporting systems-systems in which suspected ADRs are voluntarily reported by health professionals and pharmaceutical manufacturers [6]-are considered one of the most effective approaches for collecting safety data in the postmarketing phase [7].However, spontaneous ADE reporting is compromised by underreporting of events.It is estimated that only 2-10% of ADRs are voluntarily reported [8,9], which further threatens drug safety management in healthcare systems.Thus, it is necessary to design interventions for improving ADR reporting in each country. Since the 1990s, spontaneous reporting activities to monitor drug safety have been performed in China, including the establishment of the National Adverse Drug Reaction Monitoring System by the National Medical Products Administration (NMPA).Recently, an increasing number of innovative drugs have been granted conditional approval, which generally requires more stringent postmarketing safety surveillance [10].Consequently, PV has received greater public attention in China and worldwide [11].The newly revised Drug Administration Law of the People's Republic of China [12], implemented in December 2019, explicitly proposes establishing a PV system.In 2021, the NMPA released the Guidelines on Good PV Practices [13], which further clarify the provisions needed for safety surveillance activities, such as the reporting, monitoring, risk identification, risk assessment, and control of ADRs. 
ADRs represent a significant public health concern, contributing to increased morbidity, mortality, and economic burdens [14].The reporting of events is heavily reliant on the level of knowledge, professional obligation, and attitude and motivation of healthcare professionals; hence, knowledge of PV and ADR reporting is vital to ensure postmarketing safety.Therefore, healthcare professionals should gain knowledge and develop attitudes towards PV during their undergraduate and graduate studies to develop competence in identifying and reporting ADRs later in their practice.The knowledge of healthcare professionals has been evaluated in many countries [15][16][17][18]; however, the knowledge of healthcare students, who may become doctors, pharmacists, and nurses in the future, is limited [19][20][21], particularly in China.We assessed healthcare students' attitudes towards and knowledge of PV and ADRs and the potential predictors by conducting a questionnaire survey.We also examined healthcare students' perceived need for PV courses to be provided for healthcare-related professions or disciplines to optimize the curriculum in the future. Study design We used a cross-sectional, questionnaire-based survey design.Medical students were invited to participate in this nationwide survey on knowledge of and attitudes towards PV in China.The survey period was from April 27, 2023, to October 15, 2023.This study was approved by the Shanghai Ethics Committee for Clinical Research.The requirement for written informed consent from participants was waived by the Shanghai Ethics Committee for Clinical Research.Although written informed consent was not needed, the first page of the survey included an informed consent statement describing that participation was voluntary and anonymous.After completing the questionnaire, the respondents agreed (consented) to participate in the anonymous survey. Survey questionnaire development The questionnaire, which evaluated medical students' knowledge of and attitudes towards PV, was generated based on references and expert opinions [19][20][21]. To ensure the quality of the questionnaire, a group of researchers with diverse research backgrounds, including regulatory agency (Y.Z.), medical education (L.Y., J.Y.), clinical practice (L.Y., J.Y.), and industry (J.Y.) backgrounds, reviewed the questionnaire for accuracy and clarity.We also consulted with other researchers.All the feedback from the researchers was considered in the revision of the questionnaire.The questionnaire was then pilot tested among a purposive sample of healthcare students, including five undergraduate students and five graduate students who were majoring in pharmacy, medicine, and traditional Chinese medicine (TCM), to ensure the appropriateness of the questionnaire.The questionnaire was further refined based on the students' input. 
The questionnaire includes four parts.The first part collected demographic and school information, including age, sex, study program, and type of institution; the second part collected information about the respondents' participation in PV courses; the third part collected information on the respondents' knowledge about PV-related activities; and the fourth part collected information about the students' perceived need for PV courses.The questionnaire consisted of single-choice questions, multiple-choice questions, and free text entries.The answers to the single-choice and multiple-choice questions were based on existing guidelines or textbooks.The questionnaire is shown in Appendix. Participant recruitment The research participants included undergraduate and graduate students from medical universities or colleges of comprehensive universities in China.To obtain a more representative sample of students, 10 teaching faculty members or counsellors were chosen as the initial deliverers of the survey.As described in previous studies, the students invited their classmates to participate in the survey via WeChat [22,23].WeChat, China's largest social media platform, has been widely used to conduct online surveys [22].The institutions of higher education involved in this study included Tsinghua University, Shanghai Medical College of Fudan University, Tongji Medical College of Huazhong University of Science and Technology, Xiangya Medical College of Central South University, and Capital Medical University, which are ranked in the top 20 Best Global Universities for Clinical Medicine in China according to U.S. News [24].Both Western medicine and TCM courses are offered for healthcare professional students in China [25].In addition, TCM drugs are regulated by the NMPA and should also meet the PV requirements [26].Therefore, the survey participants also included students majoring in TCM. Questionnaire administration The respondents received requests to complete the questionnaire via WeChat, including a link to the web-based questionnaire via the internet survey portal (https:// www.wjx.cn/).The questionnaires were completed and collected online.To avoid multiple responses from the same student, each WeChat account was allowed to complete the questionnaire only once.A questionnaire was considered valid if (1) all questions were answered; (2) the respondent was majoring in pharmacy, western medicine, TCM, public health, nursing, medical laboratory technology, biomedical engineering, or rehabilitation therapy; and (3) the respondent was still pursuing his or her undergraduate or graduate education.A total of 400 students were invited to participate.362 students responded to the questionnaire, and the response rate was 80.44%. Statistical analysis We performed descriptive data analysis for each variable.Continuous variables are presented as the mean ± standard deviation if normally distributed; they are expressed as medians and quartiles if not normally distributed.We compared the differences between the means of the two groups using Student's t-test.Categorical variables are presented as numbers and percentages and were compared using the chi-square test or Fisher's exact test.To examine the associations between knowledge scores and demographic data, we also used a logistic regression model to estimate odds ratios (ORs) with 95% confidence intervals (CIs).The data were analysed using SAS 9.4.A P value < 0.05 was considered to indicate statistical significance. 
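As an illustration of how such odds ratios can be computed, the minimal sketch below uses Python and statsmodels as a stand-in for the SAS workflow described above; the variable names (female, took_pv_course, high_knowledge) and the simulated respondent data are hypothetical, so it reproduces the method rather than the study's actual estimates.

# Minimal sketch (assumption: Python/statsmodels stand-in for the SAS analysis).
# All variable names and the simulated data below are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 345  # same size as the analysed sample; the content is simulated
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "took_pv_course": rng.integers(0, 2, n),
})
# Simulated binary outcome loosely tied to the predictors so the fit is non-trivial.
lin = -0.5 + 0.6 * df["female"] + 0.7 * df["took_pv_course"]
df["high_knowledge"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

X = sm.add_constant(df[["female", "took_pv_course"]])  # predictors plus intercept
fit = sm.Logit(df["high_knowledge"], X).fit(disp=0)    # maximum-likelihood logistic fit

odds_ratios = np.exp(fit.params)       # exponentiated coefficients = odds ratios
conf_int = np.exp(fit.conf_int())      # 95% confidence intervals on the OR scale
conf_int.columns = ["2.5%", "97.5%"]
print(pd.concat([odds_ratios.rename("OR"), conf_int, fit.pvalues.rename("p")], axis=1))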
Characteristics of survey participants Three hundred sixty-two students completed the questionnaire, and 17 were excluded from the analysis because of missing data or not majoring in healthcarerelated disciplines.Hence, a total of 345 students were included in the final analysis.Table 1 shows the characteristics of the students who participated in the survey.Most participants were female (n =246; 71.30%), with an average age of 23.38 ± 3.32 years.A total of 46.67% of the participants (n = 161) studied at medical universities.Among the 345 survey participants, the majority majored in Western medicine (n = 187; 54.20%), followed by pharmacy (n = 77; 22.32%) and TCM (n = 48; 13.91%).Among the respondents, 175 (50.72%) were undergraduate students, 170 (49.28%) were graduate students. Knowledge about PV and ADR reporting Among the students included in the analysis, 225 (65.22%) students correctly defined PV (Fig. 1).Female students were more likely to answer correctly than male students were (68.70% vs. 56.57%;P = 0.03).The percentage of students who answered correctly was highest among students who majored in pharmacy (75.32%), followed by those who majored in TCM (63.10%),Western medicine (60.42%), and other disciplines (60.61%). Regarding the perceived knowledge of PV, only 45 out of the 345 students (13.04%) thought they were familiar with the PV requirements.The proportion of students who felt that they were familiar with the PV requirements was greater in the male group than in the female group (17.17% vs. 11.38%;P = 0.35).The percentage of students who felt that they were familiar with the PV requirements was greater among students majoring in pharmacy (29.87%) than among those with other majors. For the definition of ADRs, 68 (19.71%) students answered correctly (Fig. 2).Students who majored in pharmacy were more likely to define ADRs accurately (42.86%) than were students who majored in TCM (11.76%),Western medicine (8.33%), and other fields (27.27%).Only 61 students (17.68%) knew the correct reporting time for ADRs.The proportion of students who selected the correct answer was highest among students who majored in pharmacy (25.97%), followed by those who majored in TCM (16.58%),Western medicine (10.42%), and other disciplines (15.15%).Among the students who participated in the survey, only 82 out of 345 (23.77%) felt that they were familiar with the requirements of ADR reporting (Fig. 2).The percentage of students who felt that they were familiar with ADR reporting was greater among the group of pharmacy students (57.14%) than among the groups of students who majored in TCM (12.30%),Western medicine (12.50%), and other disciplines (27.27%). PV courses offered at the universities Among all respondents included in the analysis, only 71 (20.58%) reported having participated in a course related to PV (Fig. 3).For PV-related courses, the course curriculum and content should cover the definition and recognition of ADR, PV theory and practices.Pharmacy students were more likely to have taken PV-related courses at a university than students in other healthcare-related professions or disciplines; 41.56% of the pharmacy students had taken PV-related courses, while 12.83% of the students who majored in TCM, 18.75% of the students who majored in Western medicine, and 18.18% of the students who majored in other disciplines had taken PV-related courses.Compared with those who did not take PVrelated courses, students who took PV-related courses had better performance in defining PV (77.46 vs. 
62.04%;P = 0.02) and ADRs (46.48 vs. 12.77%; P < 0.001). In terms of the students' perceived need for a PVrelated course, 277 students (80.29%) thought that a PVrelated course was necessary for their study (Fig. 4).The percentage of students who felt that PV-related courses were necessary was greater among students majoring in pharmacy (89.61%) than among students majoring in TCM (80.75%),Western medicine (68.75%), and other disciplines (72.73%). The 277 students who thought a PV-related course was necessary were further asked about their preference for teaching methods and course content (Fig. 4).The most preferred teaching methods were case studies (64.64%), followed by blended learning (40.58%), interactive teaching (37.97%), traditional teaching (27.83%), and practical teaching (23.19%).In terms of course content, the students wanted the course to cover recent progress in methods and technology (66.42%), followed by developmental direction in PV (58.13%),PV-related laws and guidelines (37.18%), and current requirements (31.76%). Discussion According to this extensive survey of PV knowledge and perceived needs among healthcare students, two-thirds of the students could correctly define PV.Nevertheless, fewer than 20% of the students correctly defined ADRs and the correct ADR reporting time.Our survey also revealed that only 1 in every five survey respondents had taken PV-related courses at their university, which partially explained the unsatisfactory level of PV knowledge among healthcare students in China.In other countries, more than half of healthcare students could correctly define PV or ADRs [17,27], which was higher than the proportion of students in China.PV was recently developed in China and has not been incorporated into the curriculum for healthcare professionals or disciplines.Our findings underscore the importance of enhancing PV education for healthcare students. Overall, fewer than 20% of the healthcare students knew about ADRs, while this proportion was greater than 50% among pharmacy students, possibly because of the variability of the curriculum dedicated to PV.This finding is also consistent with the literature [28,29]; pharmacists tend to have greater knowledge of the definition of ADRs than other healthcare professionals, potentially due to their specialized pharmaceutical training.Similarly, in this survey, pharmacy students had better knowledge of ADRs than did other healthcare students, potentially because they had taken pharmacy administration courses, which cover the regulation requirements for PV and ADRs.In China, a pharmacy administration course is an elective course required for a pharmacy curriculum.Most of the top pharmacy universities include a pharmacy administration course in their curriculum, offered to undergraduate students in their 3rd or 4th year and graduate students in their 1st year.However, pharmacy administration courses or other courses covering PV and ADRs are not generally offered for other healthcare students, potentially causing a knowledge gap in the field of PV.Therefore, there is a need for greater PV education for healthcare students and continuous education for healthcare professionals. 
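To make the group comparisons discussed here concrete, the following minimal sketch runs a 2x2 chi-square test of the reported sex difference in correctly defining PV. It is an illustrative reconstruction: the counts are back-calculated from the published percentages and group sizes, and the continuity-corrected statistic may differ slightly from the P value reported in the Results.

# Minimal sketch: 2x2 chi-square comparison of correct PV definitions by sex.
# Counts are reconstructed from the reported percentages (68.70% of 246 women,
# 56.57% of 99 men) and are therefore approximate, not the original raw data.
from scipy.stats import chi2_contingency

female_correct, female_total = 169, 246
male_correct, male_total = 56, 99

table = [
    [female_correct, female_total - female_correct],
    [male_correct, male_total - male_correct],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")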
In addition to the differences in the curricula of the study programs, the observed gap in PV knowledge may also contribute to the difference in attitudes towards PV between pharmacy students and other healthcare students.According to our analysis, the proportion of students who recognized the importance of PV was greater among students majoring in pharmacy than among students majoring in other healthcare disciplines [30], which is consistent with the literature.In other countries, it has been reported that pharmacy students have more significant attitudes and perceptions towards PV than other healthcare students [21].Given that most healthcare students fail to correctly define ADRs, there is considerable doubt about the ability of future healthcare professionals (e.g., doctors and pharmacists) to report ADRs in a timely manner [30].Therefore, offering more PV courses for healthcare professionals is warranted to increase awareness of PV and ADRs. Our findings indicate an urgent need to promote PV education for healthcare students.Our analysis revealed that the students had limited knowledge in defining and reporting ADRs.Hence, it is necessary to include PV education in the curricula of healthcare programs.In terms of perceived needs, the respondents expressed an urgent desire to take PV-related courses.In particular, healthcare students suggested covering case studies by incorporating interactive teaching methods.They also expressed their willingness to learn new analytical skills.The need for PV education stands in sharp contrast to the fact that few universities in China offer PVrelated courses.In addition to changing the curricula of study programs, which might take longer to implement, students could be offered an online course on PV education.The majority of the students expressed interest in e-learning.For example, the online course provided by the Uppsala Monitoring Centre could be used as a teaching module to develop courses in China [31].Online courses could also be used in continuous education for healthcare professionals [32]. With the increased number of new drugs with postmarketing safety requirements, the recent trend of pharmaceutical regulation is to "chase the high line" to encourage innovation and development.At the same time, "guarding the bottom line" ensures the safety of medicines [11].In this new era, the NMPA aims to achieve a rapid-response surveillance system involving governments, industries, healthcare professionals, and the public; thus, improving knowledge about PV is highly important.To date, Chinese healthcare professionals have a relatively limited knowledge of defining and reporting ADRs [33], potentially contributing to underreporting in China.Hence, more efforts should be directed at enhancing PV education. 
Several limitations of this study should be considered. First, we developed the questionnaire based on PV guidelines in China. Although we included an expert panel and a group of students to pilot the survey questionnaire, the instrument has not been validated. Second, these questions only represent students' understanding of the definitions of PV and ADRs; hence, students' knowledge of PV may be inadequately reflected in this analysis. Third, despite our efforts to recruit healthcare students from universities across China [24], the respondents accounted for only a small proportion of healthcare students in China. Hence, our findings may not be generalizable to students from other universities. Finally, selection bias may exist because students who were confident in their knowledge of PV were more likely to participate in the survey. As such, we cannot exclude the possibility that students with greater knowledge of PV were included. Most of the students who participated in the survey were from top medical universities in China, and the knowledge of PV and ADRs of students from other universities might be even lower.
Conclusions
This study revealed that healthcare students' knowledge of PV and ADRs is relatively low, and only a limited number of universities provide PV education. This is of great concern given the vital role of healthcare professionals in identifying and reporting ADRs. Hence, we call for strengthening PV education for future healthcare professionals at both the undergraduate and graduate levels.
Fig. 1 Knowledge of pharmacovigilance among healthcare students in China. (a) Proportion of students (%) with a correct understanding of pharmacovigilance by sex and study major. (b) The perceived knowledge of pharmacovigilance among healthcare students by sex and study major. TCM: traditional Chinese medicine. *Other disciplines include public health, nursing, medical laboratory technology, biomedical engineering, rehabilitation therapy and other specialties.
Fig. 2 Knowledge of ADR reporting among healthcare students. (a) Proportion of students (%) with a correct understanding of ADRs by sex and study major. (b) Proportion of students (%) with a correct understanding of ADR reporting time by sex and study major. (c) The perceived knowledge of ADRs among healthcare students by sex and study major. TCM: traditional Chinese medicine. *Other disciplines include public health, nursing, medical laboratory technology, biomedical engineering, rehabilitation therapy and other specialties.
Fig. 3 The pharmacovigilance education provided for healthcare students. (a) Proportion of students (%) having taken PV courses by sex and study major. (b) Proportion of students (%) with a correct understanding of pharmacovigilance by course status. (c) Proportion of students (%) with a correct understanding of ADRs by course status. TCM: traditional Chinese medicine. *Other disciplines include public health, nursing, medical laboratory technology, biomedical engineering, rehabilitation therapy and other specialties.
Fig. 4 Healthcare students' perceived needs for a pharmacovigilance course. (a) Proportion of students who reported feeling that a PV course was necessary by sex and study major. (b) The learning methods suggested by healthcare students. (c) The course content suggested by healthcare students. TCM: traditional Chinese medicine. *Other disciplines include public health, nursing, medical laboratory technology, biomedical engineering, rehabilitation therapy and other specialties.
Table 1 Characteristics of the healthcare students who participated in the survey. Abbreviations: ADR: adverse drug reaction; TCM: traditional Chinese medicine. *Other disciplines include public health, nursing, medical laboratory technology, biomedical engineering, rehabilitation therapy and other specialties.
Table 2 Factors associated with knowledge of pharmacovigilance and ADRs. Abbreviations: ADR: adverse drug reaction; TCM: traditional Chinese medicine; PV: pharmacovigilance. *Other disciplines include public health, nursing, medical laboratory technology, biomedical engineering, rehabilitation therapy and other specialties.
2024-05-26T06:17:19.505Z
2024-05-24T00:00:00.000
{ "year": 2024, "sha1": "42472e065073da945de0ecdc48fb162d732bafda", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "2fdd580f1032abd54603236d39dce93a62b6d038", "s2fieldsofstudy": [ "Medicine", "Education" ], "extfieldsofstudy": [ "Medicine" ] }
232237716
pes2o/s2orc
v3-fos-license
Cooperative approach of pathology and neuropathology in the COVID-19 pandemic
Background Autopsy is an important tool for understanding the pathogenesis of diseases, including COVID-19. Material and methods On 15 April 2020, together with the German Society of Pathology and the Federal Association of German Pathologists, the German Registry of COVID-19 Autopsies (DeRegCOVID) was launched (www.DeRegCOVID.ukaachen.de). Building on this, the German Network for Autopsies in Pandemics (DEFEAT PANDEMIcs) was established on 1 September 2020. Results The main goal of DeRegCOVID is to collect and distribute de facto anonymized data on potentially all autopsies of people who have died from COVID-19 in Germany in order to meet the need for centralized, coordinated, and structured data collection and reporting during the pandemic. The success of the registry strongly depends on the willingness of the respective centers to report the data, which has developed very positively so far and requires special thanks to all participating centers. The rights to own data and biomaterials (stored decentrally) remain with each respective center. The DEFEAT PANDEMIcs network expands on this and aims to strengthen harmonization and standardization as well as nationwide implementation and cooperation in the field of pandemic autopsies. Conclusions The extraordinary cooperation in the field of autopsies in Germany during the COVID-19 pandemic is impressively demonstrated by the establishment of DeRegCOVID, the merger of the registry of neuropathology (CNS-COVID19) with DeRegCOVID and the establishment of the autopsy network DEFEAT PANDEMIcs. It gives a strong signal for the necessity, readiness, and expertise to jointly help manage current and future pandemics by autopsy-derived knowledge.
Introduction
The WHO declared the global outbreak of COVID-19 a pandemic on 11 March 2020. On September 7, the WHO Director-General stated, "This will not be the last pandemic. History teaches us that outbreaks and pandemics are a fact of life. But when the next pandemic comes, the world must be ready-more ready than it was this time" [8]. To cope with the consequences of the disease (coronavirus disease 2019, COVID-19) and the pandemic caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), and to prepare for future pandemics, it is of utmost importance to mobilize and harmonize basic and applied medical research both nationally and globally [15], and to collect, analyze, and make the data available, if possible in a centralized manner. This is already happening in various registries, such as the LEOSS registry (Lean European Open Survey on SARS-CoV-2 infected patients) and the CAPACITYCOVID registry (registry of patients with COVID-19 including cardiovascular risks and complications). … especially infectious diseases caused by new or so-called emerging or re-emerging viruses, e.g., influenza viruses, Ebola virus, hantaviruses, and coronaviruses (CoV) such as SARS-CoV [11], MERS-CoV or, currently, SARS-CoV-2 [19]. Already in the first published postmortem studies on COVID-19, pathogenic mechanisms underlying the particularly severe and fatal courses of the disease, some with potential therapeutic implications, were identified. These organ-specific consequences of COVID-19 are discussed in detail in the corresponding articles in this special issue of the journal, especially for the lung and heart [2], kidney [3], and the central nervous system [14].
The recommendations for performing autopsies in Germany during the first wave of the SARS-CoV-2 pandemic ranged from the recommendation by the Robert Koch Institute to avoid autopsies due to occupational safety aspects, to a largely united position of German pathologists, forensic pathologists, and their professional societies for performing autopsies, for which there are many years of experience and guidelines based on other infectious diseases. The knowledge gained through autopsies made it clear to medical professionals, German health authorities, and the broader public that autopsies are an important and necessary tool for understanding the pathophysiology of COVID-19 and new or reemerging diseases in general. Autopsies can very quickly provide important information that can significantly improve the risk assessment, diagnosis, and treatment of patients. Various institutes of pathology have published recommendations or reports on the practical implementation of COVID-19 autopsies [4,9,10,13,16], which are also discussed in another article in this special issue [5]. To better manage the current pandemic, characterize the disease, gain new insights, and create preparedness structures for future epidemics and pandemics, it is necessary to establish central structures, such as a national autopsy registry and general procedural recommendations for conducting infectious autopsies, data collection, and biobanking in the event of a pandemic. We have established the first national registry for COVID-19 autopsies (DeRegCOVID). Extending this, we also launched the German Research Network for Pandemic Autopsies (DEFEAT PANDEMIcs) as part of the Network of University Medicine (NUM) to implement a harmonized data collection and systematic and standardized analysis of the suitability of the collected postmortem tissues and body fluids for virological, genomic, transcriptomic and imaging analyses on a nationwide level (https://www. netzwerk-universitaetsmedizin.de/). German registry for COVID-19 autopsies-DeRegCOVID Given the need for centralized and coordinated support, reporting, systematic biobanking, and structured data harmonization, we launched the first version of a Germany-wide registry of COVID-19 autopsies on April 15 (DeRegCOVID, . Fig. 1). Further information can be found on the website (www.DeRegCOVID.ukaachen.de) or inquired via e-mail (Covid.Pathologie@ ukaachen.de). The interdisciplinary team of DeRegCOVID currently consists of 13 employees from the Institute of Pathology (medical and scientific management), Institute of Medical Informatics (technical management), Clinical Trial Center Aachen (CTC-A, project management) as well as other employees from the Legal and Data Security Division of the University Hospital Aachen. The main objective of this centralized national registry for COVID-19 autopsies is to collect, categorize, analyze and provide harmonized and factually anonymous data on all autopsies and related biomaterials of COVID-19 deceased persons in Germany to the medical and scientific community (. Fig. 2, from [18]). The DeRegCOVID is located at the University Hospital RWTH Aachen and was developed with the close support of the Federal Association of German Pathologists e. V. (BDP) and the German Society of Pathology (DGP). It is financed by the Federal Ministry of Health and has been positively reviewed by the Robert Koch Institute. Data collection is based on WHO recommendations and the German S1 guideline for performing autopsies. 
Many academic and nonacademic centers, including centers of the German Federal Armed Forces, are already involved in the registry, thus incorporating and implementing ethical and legal frameworks for all German autopsy centers. Centers performing COVID-19 autopsies report epidemiological data (age, sex, time of death), data on preanalytical factors (time between death and autopsy), data on known clinical course, underlying diseases, findings and causes of death determined during the autopsy, and metadata on type and amount of locally archived tissue samples (e.g., organ/topography, FFPE, cryoasservation). Entry into the reporting system is done via the internet (https://covidpat.ukaachen.de/), using password-protected login data specifically set up for each center. The platform is constantly being optimized, especially following feedback from users, to further improve data entry. Experience has shown that, depending on the complexity of an autopsy case, it currently takes between 15 and 30 min to enter data. The staff of DeRegCOVID assists with all questions (Covid.Pathologie@ukaachen.de). The collected sample material remains with the respective centers (decentralized biobanking, Fig. 2a). DeRegCOVID follows the principle that all centers retain the rights to their reported data and materials.
» All centers retain the rights to their reported data and materials
For data reporting, each center must comply with local regulatory requirements, especially regarding ethical and data protection issues. After curation and harmonization, the available data are centrally evaluated and reported to the medical and scientific community: to the Professional Association of Pathologists and Neuropathologists (Bundesverband …). An important aspect and a central task of DeRegCOVID is the implementation of interfaces to national and international data platforms. The reporting centers will provide their centrally curated data in a syntactically and semantically standardized and annotated form so that they can then be uploaded into national and international research projects. One such example is the research data platform of the National Pandemic Cohort Network (NAPKON) within the Network of University Medicine (NUM; https://www.netzwerk-universitaetsmedizin.de/).
Furthermore, interfaces to the structures of the registries/biobanks within the German Health Centers, e.g., German Center for Lung Research (Deutsches Zentrum für Lungenforschung, DZL), are planned. For this purpose, data set definition and export interfaces will be continuously adapted to the evolving definition of the GECCO data set (German Corona Consensus data set). Cooperation with international initiatives is also planned. » Implementation of interfaces to national and international data platforms In addition to the core tasks of central data acquisition and reporting, DeRegCOVID fulfills several other tasks. These include, for example, support in questions concerning practical aspects of COVID-19 autopsies, as a central source of information for procedural instructions and recommendations, e.g., safety measures and occupational health and safety in the context of autopsies or biobanking of biomaterial of COVID-19 deceased persons. Another important task is to serve as a mediator between participating centers and national or international scientists. If scientists require an autopsy biomaterial or data for a study, they can send a request to DeRegCOVID (an application form is available at www. DeRegCOVID.ukaachen.de). The team of DeRegCOVID first checks the plausibility of the request, i.e., whether the project can be answered by the available biomaterial or data from the registry. Our initial experience shows that some requests cannot be served, given that the questions cannot be answered with the proposed methods on postmortem biomaterial and available data. In case of suitable inquiries, the centers with available biomaterials and data are identified and contract between the requesting researchers and the reporting centers and biobanks with appropriate material is established (a function of "scientific Tinder"). Thus, it is possible to identify cases in a specific disease stage (e.g., particularly early or late stage), with a specific pathological diseases pattern (e.g., hemophagocytic lymphohistiocytosis), with specific comorbidities (e.g., diabetes, kidney disease) or therapies (e.g., no therapy, long-term or intensive therapy, ECMO). This also facilitates multicenter studies that include a sufficiently large number of cases and to produce robust results. The first example, which was supported by DeRegCOVID, is the work describing the pathological and molecular characteristics of pulmonary involvement in COVID-19 based on autopsies, published in the New England Journal of Medicine [1]. Further studies have been published in the meantime [6,7,12], and some are under review or in preparation. Compartment-specific detection of the virus in the tissues of COVID-19, but often also the basic confirmation of SARS-CoV-2 infection in deceased persons, are important aspects in the evaluation of autopsy results. Therefore another task of DeRegCOVID is to support the detection of SARS-CoV-2 in tissues for which different methods are available. Method-specific advantages and disadvantages and indications of the different methods are discussed in another article in this issue [17]. German network for pandemic autopsies-DEFEAT PANDEMIcs As a consequence of the work described above, which has shown the importance of autopsies for understanding COVID-19, the German Research Network for Autopsies in Pandemics was founded (DEFEAT PANDEMIcs). 
The DEFEAT PANDEMIcs network currently consists of 27 university centers (with more than 50 pathological, neuropathological, and forensic medicine institutions) and 14 non-university research institutions, including the Robert Koch Institute (Fig. 3). It is funded by the German Federal Ministry of Education and Research within the framework of the Network of University Medicine (NUM; https://www.netzwerk-universitaetsmedizin.de/). The goal of the DEFEAT PANDEMIcs network is to implement and further improve a systematic, structured, harmonized, comprehensive, and rapid analysis of epidemiological data, findings, and tissue samples from autopsies during the COVID-19 pandemic and future pandemics or epidemics on a nationwide level. DeRegCOVID is the central and sustainable data platform, the information broker, and thus the electronic backbone of the DEFEAT PANDEMIcs network. Due to the modular and thus scalable architecture of the registry, DeRegCOVID can be supplemented by any number of centers. All functions of DeRegCOVID are also provided for the entire network. The Germany-wide registry CNS-COVID19 (www.cns-covid19.de), founded by the German Society for Neuropathology and Neuroanatomy (DGNN), was merged with DeRegCOVID within the framework of DEFEAT PANDEMIcs. The registry CNS-COVID19, located at the Justus-Liebig University of Giessen, supported the systematic research of central and peripheral nervous system involvement in COVID-19 via standardized sampling and decentralized biobanking of human tissue samples from defined CNS/PNS/muscle areas in COVID-19 autopsies and is supported by more than 35 university and non-university neuropathology and pathology institutions. The tasks of DEFEAT PANDEMIcs cover three subject areas (see also Fig. 4).
Conclusions
An early, unified public relations effort by German pathologists and pathological societies made it clear to medical professionals, German health authorities, and the public that autopsies remain an essential tool for understanding the pathophysiology of emerging diseases in general and of COVID-19 in particular. This was also demonstrated by the important and already early autopsy-based medical-scientific work, much of which originated from the German-speaking countries and which in part led to changes in therapeutic strategies for severe COVID-19 courses. The collaborative approach was further strengthened at a very early stage by the establishment of the centralized national registry for COVID-19 autopsies (DeRegCOVID) and is being further developed with the German Research Network for Pandemic Autopsies (DEFEAT PANDEMIcs). These sustainable structures will help us to jointly manage the current pandemic as well as future epi- and pandemics.
Conclusion for practice
The German Registry for COVID-19 Autopsies (DeRegCOVID; www.DeRegCOVID.ukaachen.de):
- Collects data centrally and electronically on ideally all COVID-19 autopsies in Germany.
- Any institute or center can participate.
- Supports all centers and researchers.
- Data sovereignty and all biomaterials remain with the respective institutes or centers.
- Analyzes and reports on collected data in cooperation with the participating centers.
- Acts as the electronic backbone and sustainable structure for the DEFEAT PANDEMIcs network.
The German Network for Autopsies in Pandemics (DEFEAT PANDEMIcs):
- Aims at a nationwide implementation of high-quality, standardized, and harmonized data collection and biobanking of autopsies in pandemics.
- Is a network of 27 university hospitals with more than 60 participating institutes (pathology, neuropathology, and forensic medicine) and 14 associated societies and partners.
- Serves as a preparedness structure for future pandemics.
2021-03-16T14:24:05.425Z
2021-03-15T00:00:00.000
{ "year": 2021, "sha1": "56a2620eafd8b70115026ec34868900db8f039c5", "oa_license": "CCBYSA", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00292-020-00897-3.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "56a2620eafd8b70115026ec34868900db8f039c5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
203448790
pes2o/s2orc
v3-fos-license
Primary Stromal Breast Sarcoma with Concomitant Contralateral Carcinoma: A Rare Case from Syria Bilateral breast cancers are rare cases encountered and are usually the same type in both sides. Only very few cases were reported to have different histological types of neoplasia involving sarcoma. Moreover, sarcomas rarely originate from the breast as a primary lesion whereas the common presentation is having angiosarcoma following radiotherapy. In this report, we present a rare case of a Syrian 43-year-old woman having two distinct primary lesions in the breasts: invasive ductal carcinoma and contralateral stromal sarcoma. Introduction A breast stromal sarcoma is any tumor originating from the intralobular stroma [1]. They were firstly defined in 1962 by Berg et al. as a "group of mesenchymal malignant tumors with fibrous, myxoid and adipose components excluding malignant cystosarcoma phylloides, lymphomas, and angiosarcomas" [2]. Primary malignant mesenchymal breast tumors (primary breast sarcomas) are uncommon entities that represent 0.2-1.0% of all breast malignancies [3]. Regarding that, bilateral breast cancer is rare and only 2 to 11% of women diagnosed with breast cancer will develop contralateral breast cancer in their lifetime, [4] presenting with two different types of cancers, one of which is stromal sarcoma, which is extremely rare. In this article, we report a rare case from Syria presenting with bilateral breast cancer, invasive ductal carcinoma, and primary stromal sarcoma in the other side. Case Report A 43-year-old white woman presented with a palpable lump in the right breast to AL-Bairouni University Hospital in September 2016. She is married with six children, no menstrual disturbances, no history of breast trauma, no exposure to radiation, or no family history of breast cancer. Clinical examination showed a 3 cm in diameter lump with irregular borders in the superior lateral quadrant of the right breast, with no swollen nodes. A mammogram of the right breast showed a density in the superior lateral quadrant; the density corresponds with Breast Imaging Reporting and Data System (BIRADS) 4 (suspicious abnormality) [5] which required an excisional biopsy to exclude malignancy. The lesion in the excisional biopsy was about 4.5 cm (T 2 ). The histopathology exam showed proliferation of epithelial cells of the mammary canals infiltrating to the space between the canals ( Figure 1) and in addition to simultaneous ductal carcinoma in situ ( Figure 2). CK7 staining proved carcinoma origin ( Figure 3). These findings revealed invasive ductal carcinoma grade III. A hormonal receptor test was 10% positive for progesterone receptors and 10% positive for estrogen receptors; a HER-2 test was negative. Staging depending on chest Xray, bone scintigraphy, and thoracoabdominal CT scan revealing no signs of metastatic disease suggested a T 2 N 0 M 0 score and a stage IIA tumor. The patient received neoadjuvant chemotherapy for three months consisting four cycles of EC (epirubicin-cyclophosphamide) according to the National Comprehensive Cancer Network NCCN guideline [6]. Quadrantectomy was planned after the fourth dose. Unfortunately, due to issues related to the war in Syria, there was no possible connection with the patient for about eight months which corrupted her treatment plan. Later, in August 2017, 8 months after receiving the last dose of chemotherapy, 11 months after the first presentation and mammogram, she presented with two lumps, one for each breast. 
Physical examination showed palpable lesions; the left breast lump was 3 cm in diameter in the superior lateral quadrant of the breast; with a mild nipple retraction without any discharge, the skin was normal. While the right breast lump was 2 cm in diameter in the superior lateral quadrant of the breast, there was no nipple retraction of the right breast and no swollen axillary lymph nodes on both sides. Differential diagnosis included the recurrence of the primary tumor. We repeated the imaging and histopathology study for confirmation. A mammogram showed on the right breast an asymmetric density in the upper outer quadrant that falls into the BIR-ADS 3 (probably benign) category suggesting recurrence. There was neither nipple retraction nor calcification. From the left breast mammogram, a heterogeneous density was noted in the upper outer quadrant. This had poorly defined margins (speculated) and appeared highly infiltrative. Also, there was a small density with poorly defined margins in the central part, which corresponds to the BIRADS 4 category (suspicious abnormality) with thickened skin and mild nipple retraction, but no calcification, which prompted excisional biopsy from the left breast. Surprisingly, histopathology of excisional biopsy performed to the left breast mass showed high cellularity of spindle-shaped cells; the mass contained fatty tissue and showed an abundant mitotic activity ( Figure 4). Immunohistochemistry (IHC) showed negative results for epithelial markers, such as cytokeratin 7 (CK7) ( Figure 5) and epithelial membrane antigen (EMA), leukocyte common antigen (LCA), and desmin, which in turn excluded carcinoma origin, lymphoma/leukaemia, and muscular origin, respectively. Positive staining of CD10 ( Figure 6) confirmed stromal origin confirming the diagnosis of high-grade stromal sarcoma of the breast. Repeating CT scan and scintigraphy showed no signs of metastatic disease. Consequently, she underwent bilateral mastectomy and bilateral axillary lymph node resection. Histopathology showed free surgical margins and no invaded nodes. In the follow-up, the patient had received hormonal therapy (Tamoxafen) after adjuvant chemotherapy: 8 cycles of Taxol 150 mg+Cisplatin 50 mg and 3 sessions of radiotherapy. The follow-up for 14 months showed no evidence of recurrence. Discussion This is the first case to be reported with concomitant contralateral breast stromal sarcoma and carcinoma from Syria. When bilateral breast cancer is present, it is usually the same type for both breasts [7]. Reviewing the literature (PubMed and Google Scholar search, March 2019), we found no identical cases of coexisting primary stromal sarcoma and invasive ductal carcinoma in distinct breasts. However, few similar cases reported bilaterally different types of cancer with sarcoma. In de Mello et al.'s report [8], a 42-year-old woman had a lobular pleomorphic carcinoma in the right breast, a different type of carcinoma compared to that in our case, and a sarcoma in the left that was diagnosed histologically. On one hand, primary sarcomas can take various histological types that often require IHC to differentiate [9], and secondary sarcomas often present as angiosarcoma, especially after radiation therapy of another tumor [10]. In our patient, it was important to do IHC to confirm the diagnosis as it is uncommon to be considered a different diagnosis in a previous carcinoma patient who never received radiotherapy. 
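To make the exclusion logic of that IHC panel explicit, the following minimal sketch encodes it as a simple rule set. This is a didactic illustration only, with hypothetical function and variable names; it is not a validated diagnostic algorithm and was not part of the case workup.

# Simplified, illustrative rule set mirroring the exclusion reasoning described above.
from typing import Dict

def interpret_ihc(panel: Dict[str, bool]) -> str:
    """Map a small IHC panel (marker -> positive?) to the lineage argued in the text."""
    if panel.get("CK7") or panel.get("EMA"):
        return "epithelial markers positive: carcinoma not excluded"
    if panel.get("LCA"):
        return "LCA positive: consider lymphoma/leukaemia"
    if panel.get("desmin"):
        return "desmin positive: consider muscular origin"
    if panel.get("CD10"):
        return ("CD10 positive with epithelial, lymphoid and muscle markers negative: "
                "supports stromal origin")
    return "panel inconclusive"

# The panel reported for the left breast lesion in this case.
case_panel = {"CK7": False, "EMA": False, "LCA": False, "desmin": False, "CD10": True}
print(interpret_ihc(case_panel))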
On the other hand, the mainstay treatment in soft tissue sarcoma is surgery [4,11,12]. Axillary resection is not necessary, since sarcomas rarely invade the lymphatic system [3]. We followed these management protocols of surgery. However, axillary nodes were dissected due to the presence of concomitant carcinoma. Also, in high-risk cases, adjuvant and neoadjuvant chemotherapy and radiotherapy should be considered and chemotherapy is the mainstay of treating widespread metastatic cancer, whereas radiotherapy is preferred for lymphatic metastasis and reducing the rate of locoregional recurrence [13]. The first cancer was diagnosed early in the disease course giving a good recovery chance. After presenting again with bilateral masses, the case necessitated a more radical surgery with adjuvant chemotherapy, radiotherapy, and hormonal therapy. In response to this management plan, she had a disease-free state for 14 months. Conclusion This case presents a rare entity of bilaterally different types of cancers including stromal breast sarcoma. This report highlights the importance of a profound study of new lesions previously diagnosed as breast cancer lesions to not miss the diagnosis of different types of cancer and thus be wrongly treated. This is the first case from Syria to be reported. Conflicts of Interest There is no conflict of interest.
2019-09-17T02:59:48.684Z
2019-09-10T00:00:00.000
{ "year": 2019, "sha1": "525f1eb83c274fcdb9b9cbae9715f8b1fc8ce569", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2019/6460847", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fa9c7d6ed1c0c7c8a4c9191f5446088559f90a9b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
8696439
pes2o/s2orc
v3-fos-license
Effect of Endothelin-1 on the Excitability of Rat Cortical and Hippocampal Slices in Vitro Endothelin-1 (ET-1) is a neuroactive protein produced in most brain cell types and participates in regulation of cerebral blood flow and blood pressure. In addition to its vascular effects, ET-1 affects synaptic and nonsynaptic neuronal and glial functions. Direct application of ET-1 to the hippocampus of immature rats results in cerebral ischemia, acute seizures, and epileptogenesis. Here, we investigated whether ET-1 itself modifies the excitability of hippocampal and cortical circuitry and whether acute seizures observed in vivo are due to nonvascular actions of ET-1. We used acute hippocampal and cortical slices that were preincubated with ET-1 (20 µM) for electrophysiological recordings. None of the slices preincubated with ET-1 exhibited spontaneous epileptic activity. The slope of the stimulus intensity-evoked response (input-output) curve and shape of the evoked response did not differ between ET-1-pretreated and control groups, suggesting no changes in excitability after ET-1 treatment. The threshold for eliciting an evoked response was not significantly increased in either hippocampal or cortical regions when pretreated with ET-1. Our data suggest that acute seizures after intrahippocampal application of ET-1 in rats are likely caused by ischemia rather than by a direct action of ET-1 on brain tissue.
In the mammalian CNS, the potent vasoconstrictor endothelin-1 (ET-1) is produced in neurons, in the endothelium of the cerebral microvessels, and in glial cells. Endothelins act through the G-protein-coupled receptors ETA and ETB, which are differentially distributed among brain cell types. ETA is expressed in brain vascular cells, whereas ETB receptors are predominantly expressed in glial cells (Baba 1998). ETA receptors have high specificity for ET-1, whereas ETB receptors are non-selective and accept all subtypes of endothelin nearly equally (Sakurai et al. 1990). Under physiological conditions, ET-1 contributes significantly to the regulation of blood pressure and cerebral blood flow.
Moreover, ET-1 is now considered an important agent in the pathogenesis of hypertension (Hynynen and Khalil 2006). In addition to its vascular effects, ET-1 induces a wide range of physiological actions in the CNS. ET-1 is considered a neuropeptide because it influences the activity of ion channels, glutamate efflux (Rozyczka et al. 2004), glucose utilization (Sanchez-Alvarez et al. 2004), permeability of gap junctions (Blomstrand et al. 2004), and calcium signaling (Venance et al. 1997). Furthermore, higher ET-1 levels are found in the brains of patients with neurological disorders such as Alzheimer's disease, subarachnoidal hemorrhage, traumatic brain injury, and ischemia (Petrov et al. 2002, Rogers et al. 2003). Direct injection of ET-1 into the brain parenchyma leads to severe vasoconstriction and has been used as a model of focal cerebral ischemia with reperfusion in rats (Fuxe et al. 1997, Hughes et al. 2003). We have shown recently that intrahippocampal application of ET-1 in immature rats causes acute seizures (Tsenov 2007) with consequent epileptogenesis (Mateffyova 2006). Because ET-1 has significant effects on both neuronal and glial functions, our current study was designed to elucidate the direct effects of ET-1 on cortical and hippocampal excitability. We focused in particular on whether ET-1 itself alters hippocampal and cortical excitability. To abolish the dominant vascular effect of ET-1, we used hippocampal and cortical brain slices that were acutely preincubated with ET-1 for electrophysiological testing in vitro. The experimental design was approved by the Eötvös University Animal Care Committee and by the Budapest Animal Health Care Authority. Animal care and experimental procedures were conducted in accordance with the guidelines of the European Community Council directive 86/609/EEC. Experiments were performed on 25 slices (13 cortical and 12 hippocampal) prepared from 11 male Wistar rats (100-180 g, Toxicoop, Hungary). Rats were kept under a constant 12-h light/dark cycle and controlled temperature (22±2 °C). Standard pellet food and tap water were available ad libitum. The preparation of tissue and the incubation procedure have been described in detail (Vilagi et al. 2008). Briefly, rats were deeply anaesthetized with chloral hydrate (Hungaropharma, Budapest) and decapitated. The brains were quickly removed, and coronal slices (400 µm) were cut with a Vibratome (EMS-4000, Electron Microscopy Sciences, Fort Washington, PA, USA) in ice-cold artificial cerebrospinal fluid (aCSF). Slices contained both the somatosensory cortex and the hippocampus. After 30 min of regeneration in HEPES-buffered aCSF (pH 7.3-7.4, in mM: 120 NaCl; 2 KCl; 1.25 KH2PO4; 2 MgSO4; 20 NaHCO3; 2 CaCl2; 10 glucose), slices were placed into a small (2 ml) incubation chamber filled with buffered aCSF or ET-1 (20 µM, Sigma Aldrich, Czech Republic) dissolved in aCSF for 30 min. Slices were then transferred to an interface recording chamber and perfused with standard aCSF with a peristaltic pump (2 ml/min; in mM: 126 NaCl; 26 NaHCO3; 1.8 KCl; 1.25 KH2PO4; 1.3 MgSO4; 2.4 CaCl2; 10 glucose). All solutions were saturated with carbogen (5% CO2 / 95% O2) at 33±1 °C. For field potential recordings, extracellular glass microelectrodes (8-10 MΩ) filled with 1 M NaCl were used.
For cortical slices, a recording electrode was positioned in the lower part of layer III of the neocortex, and a bipolar tungsten stimulating electrode was positioned directly below the recording electrode at the border of the white and grey matter. For hippocampal slices, the Schaffer collaterals were stimulated, and evoked responses in the CA1 pyramidal layer were recorded (Fig. 1A). Signals were amplified with an Axoclamp 2A amplifier (Axon Instruments Inc., Union City, CA), filtered with a Supertech Signal Conditioner (Supertech Kft, Pécs, Hungary), digitized with a NI 6023E National Instruments A/D card, and recorded with the SPEL Advanced Intrasys computer program (Experimetria, Budapest, Hungary). The viability of each slice was tested at the beginning of the procedure. When applying single-pulse stimulation, characteristic field responses were recorded. Stimulus threshold (T) was determined 10 min after placing the slices into the recording chamber. For this, we increased the stimulation intensity from 0 in small steps, and a stimulus strength just sufficient to produce a response was regarded as 1T. If the peak-to-peak amplitude of the maximal evoked response (P1-N1, Fig. 1B) was smaller than 1.0 mV, the slice was excluded from the experiments. The duration of the square voltage stimulation pulses was 100 µs, and the amplitude was gradually varied between threshold and supramaximal values. A short-latency, early component of the evoked response was determined, which was characterized by the peak-to-peak amplitude of the first negative (N1) and positive (P1) peaks. For hippocampal recordings, the amplitude of the population spikes (POP spikes, Fig. 1B) and the slope of the EPSPs were determined. To obtain stimulus intensity-evoked response (input-output, I-O) curves, the stimulation intensity was gradually increased from 1T up to 4T in six steps with an interstimulus interval of 10 s, and response amplitudes were plotted against stimulation intensities. Beginning 10 min after the slices were transferred into the recording chamber, 2T stimuli were delivered every 60 s for the following 60 min to detect possible long-lasting or washout effects. The data are presented as mean ± standard error of the mean (S.E.M.). For statistical comparisons between the control and experimental groups, an unpaired Student's t-test was used. We did not observe any nonphysiological spontaneous electrical activity or seizures in either control or ET-1-pretreated slices. A typical evoked response was recorded in the cortical and hippocampal slices used in the experiment. Preincubation with ET-1 did not influence the shape and/or latency of the response (Fig. 2A, C). The slope of the I-O curve remained unchanged in both cortical (control 0.46±0.09; ET-1 preincubation 0.48±0.1) and hippocampal slices (control 0.73±0.19; ET-1 0.72±0.17; P=0.97) (Fig. 2A, C). Furthermore, pretreatment with ET-1 did not influence the amplitude of the evoked responses; the response to 2T stimuli and the maximal response were unchanged in cortical slices. Although data about the direct effects of ET-1 on neuronal excitability are sparse, increased neuronal excitability after a brief ET-1 exposure was demonstrated by Feng and Strichartz (2009). They found increased firing and decreased rectifying potassium current in dissociated neurons from the dorsal root ganglia.
Moreover, neurons from the nucleus of the solitary tract increase their neuronal activity upon iontophoretic ET-1 application and augment the responses to glutamate in acute brain slices (Shihara et al. 1998). Increased glutamate reactivity and neuronal firing can be caused by inhibition of astrocytic glutamate transport by ET-1 (Leonova et al. 2001). However, this group found this effect only in selected neuronal populations, suggesting that the ET-1-induced increase in neuronal excitability is highly specific and not universally seen in neuronal tissue. ET-1 has a direct inhibitory effect on gap junctions between astrocytes, affecting their electrical coupling, intercellular communication, and spatial potassium buffering (Blomstrand et al. 2004). It has been shown that affected astrocytic gap junction coupling in connexin43-deficient mice is responsible for the reduced threshold for the generation of epileptiform events (Wallraff et al. 2006) and that a connexin-43 mimetic peptide has pronounced anticonvulsive actions in vitro (Samoilova et al. 2008). However, in our current set of experiments, we did not detect changes in spontaneous activity or changes in basal excitability in either hippocampal or cortical preparations. The absence of direct effects of ET-1 on network excitability supports our hypothesis that acute seizures induced by parenchymal injection of ET-1 are likely caused by an indirect effect of ET-1 on brain tissue. Using continual video/EEG monitoring, we previously demonstrated development of epileptic seizures coupled with tissue destruction after intrahippocampal injection of ET-1 in freely moving rats. On an EEG, seizures persisted for at least 24 h after ET-1 administration, and their severity was dose dependent (Tsenov et al. 2007). Development of seizures immediately after ischemic insult was described in a model of middle cerebral artery occlusion in adult rats. Hartings and collaborators (Hartings et al. 2003) demonstrated spontaneous seizures in animals with permanent occlusion and animals subjected to transient ischemia with reperfusion. Furthermore, in immature CD1 mice, ligation of the unilateral carotid artery without general hypoxia induced behavioral seizures in 75 % of animals (Comi et al. 2004). Thus focal ischemia can trigger seizures regardless of the mechanisms of its induction. The depolarization of cell membranes during cerebral ischemia and/or hypoxia results in glutamate release (Perlman 2006). Systemic or focal administration of excitatory amino acids or agonists of their respective receptors induces seizures (reviewed in Mares et al. 2004). Therefore, an ischemia-induced increase in glutamate levels is likely responsible for seizure development in a model of focal ischemia provoked by parenchymal injection of ET-1. Also, results of other studies support this mechanism. Due to the already mentioned direct interactions of ET-1 with both glial and neuronal cells, ET-1 can consequently aggravate both ictogenic and neurodestructive effects of ET-1-induced focal ischemia. Our study was not focused on this interaction, but its role should be analyzed further. Our data suggest that acute seizures after intraparenchymal injection of ET-1 are caused by its ability to reduce focal blood flow rather than by direct action of ET-1 on brain tissue. Conflict of Interest There is no conflict of interest.
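The central quantitative comparison in the study above is the slope of the input-output (I-O) curve: evoked-response amplitude is regressed on stimulus intensity (1T-4T), and the fitted slopes of control and ET-1-pretreated slices are compared with an unpaired Student's t-test. A minimal sketch of that computation is shown below; the intensity steps and amplitudes are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy import stats

def io_slope(intensities_t, amplitudes_mv):
    """Slope of the input-output (I-O) curve: evoked-response amplitude
    regressed on stimulus intensity (in multiples of threshold, T)."""
    slope, _intercept, _r, _p, _se = stats.linregress(intensities_t, amplitudes_mv)
    return slope

# Hypothetical six stimulation steps from 1T to 4T.
intensities = np.array([1.0, 1.6, 2.2, 2.8, 3.4, 4.0])

# Hypothetical peak-to-peak amplitudes (mV) for a few control and ET-1-pretreated slices.
control_slices = [np.array([1.1, 1.4, 1.8, 2.1, 2.4, 2.6]),
                  np.array([1.0, 1.3, 1.7, 2.0, 2.2, 2.5])]
et1_slices = [np.array([1.0, 1.4, 1.7, 2.0, 2.3, 2.6]),
              np.array([1.2, 1.5, 1.9, 2.2, 2.5, 2.7])]

control_slopes = [io_slope(intensities, a) for a in control_slices]
et1_slopes = [io_slope(intensities, a) for a in et1_slices]

# Unpaired Student's t-test on the per-slice slopes, as in the study.
t_stat, p_value = stats.ttest_ind(control_slopes, et1_slopes)
print(f"control slope {np.mean(control_slopes):.2f} +/- {stats.sem(control_slopes):.2f} mV/T, "
      f"ET-1 slope {np.mean(et1_slopes):.2f} +/- {stats.sem(et1_slopes):.2f} mV/T, p = {p_value:.2f}")
```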
2016-10-11T02:19:10.865Z
2012-01-01T00:00:00.000
{ "year": 2012, "sha1": "14288910237c97ddfcd436df970f8a7aba796c5f", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.33549/physiolres.932218", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "2660aaf677cc88d2b3bb8e9da06954c51988c6b5", "s2fieldsofstudy": [ "Biology", "Psychology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
1717705
pes2o/s2orc
v3-fos-license
Correlation between antimicrobial consumption and incidence of health-care-associated infections due to methicillin-resistant Staphylococcus aureus and vancomycin-resistant enterococci at a university hospital in Taiwan from 2000 to 2010. Objectives This study was conducted to investigate the correlation between antibiotic consumption and the incidence of health-care-associated infections (HCAIs) caused by methicillin-resistant Staphylococcus aureus (MRSA) (HCAI-MRSA) and vancomycin-resistant enterococci (VREs) (HCAI-VREs) at a university hospital in Taiwan during the period from 2000 to 2010. Methods Data on annual patient-days and annual consumption (defined daily dose/1000 patient-days) of glycopeptides (vancomycin and teicoplanin), linezolid, fusidic acid, tigecycline, and daptomycin were analyzed. Yearly aggregated data on the number of nonduplicate clinical MRSA and VRE isolates causing HCAI were collected. Results Overall, the consumption of teicoplanin and linezolid significantly increased during the study period. A significant decrease in the incidence of HCAI-MRSA and a significant increase in the incidence of HCAI-VRE were found during the study period. A significant correlation was found between the increased use of teicoplanin and linezolid and the decreased incidence of HCAI-MRSA. By contrast, positive correlations were found between the consumption of teicoplanin and tigecycline and the incidence of HCAI-VRE. Conclusion This study identified various correlations between the consumption of antibiotics and the incidence of HCAI-MRSA and HCAI-VRE. Strict implementation of infection-control guidelines and reinforcement of administering appropriate antibiotic agents would be helpful in decreasing the incidence of MRSA and VRE in hospitals. Introduction The incidence of health-care-associated infection (HCAI) caused by multidrug-resistant bacteria has gradually risen during the last decade, especially in immunocompromised patients. 1,2 The most common causative agents of HCAI in the United States are the Gram-positive bacteria Staphylococcus aureus and Enterococcus. 3 Methicillin (oxacillin)-resistant S. aureus (MRSA) is of particular concern because patients with MRSA infection tend to have higher mortality rates, longer hospital stays, and higher health-care-associated costs than patients with methicillin-susceptible S. aureus infections. 1,4 In addition, DiazGranados et al 5 found that the mortality rate among patients with vancomycin-resistant enterococci (VREs) infection was significantly higher than that among patients with vancomycin-susceptible Enterococcus infections. Taiwan is no exception, and VRE and MRSA infections have become emerging infectious diseases. 6-10 Antibiotic use is one of the risk factors for antibiotic resistance among bacterial species; however, the nature of this relationship is complicated. Although several studies have examined the relationship between antimicrobial consumption and antibiotic resistance, the findings were inconsistent, possibly due to differences in resistance profiles as well as due to differences in antibiotic-prescribing practices in different countries. 11-18 In those studies, the use of glycopeptides, extended-spectrum cephalosporins, and fluoroquinolones was demonstrated to be associated with the prevalence of MRSA and VRE. 12-18 Few studies, however, have investigated the relationship between the use of linezolid or fusidic acid and the prevalence of MRSA and VRE.
In addition, the association between the consumption of tigecycline, a novel anti-Gram-positive agent derived from minocycline that has been shown to be effective against many Gram-negative rods as well as Gram-positive cocci, and the prevalence of MRSA and VRE has never been studied. 19 Similarly, no studies have so far investigated the association between the prevalence of MRSA and VRE and the consumption of daptomycin, an anti-MRSA antibiotic that has been approved by the U.S. Food and Drug Administration for the treatment of complicated skin and skin-structure infections and bacteremia due to MRSA. 20,21 In this study, we investigated the correlation between consumption of antibiotics, including vancomycin, teicoplanin, linezolid, tigecycline, fusidic acid, and daptomycin, and the incidence of HCAI-MRSA and HCAI-VRE during the period from 2000 to 2010 at a medical center in Taiwan. Hospital setting The National Taiwan University Hospital (NTUH) is a 2500-bed, academically affiliated medical center that provides both primary and tertiary care in northern Taiwan. The number of annual inpatient-days at the hospital increased from 624,675 in 2000 to 763,772 in 2010. Linezolid and fusidic acid were introduced into the hospital formulary in 2002. Tigecycline and daptomycin have been prescribed at the NTUH since 2007 and 2009, respectively. Some of the data analyzed in this study were included in a previous study. 16,17 Bacterial isolates Data on the susceptibilities of S. aureus to oxacillin were collected during the period from 2000 to 2010. These isolates were nonduplicate, and isolates of each species from the same patient recovered within 7 days were considered a single isolate. Susceptibility testing for S. aureus and Enterococcus species followed the Clinical and Laboratory Standards Institute guidelines. 22 S. aureus ATCC 25923 was used as the control strain for routine disk-susceptibility testing. 22 Methicillin resistance among S. aureus isolates was routinely screened by measuring their growth on oxacillin (6 mg/L) in a 2% NaCl-containing trypticase soy agar plate that had been incubated in ambient air at 35 °C for 24 hours. 22,23 Vancomycin resistance among Enterococcus species was confirmed by growth of the isolate on a brain heart infusion agar plate containing vancomycin (6 mg/L) that had been incubated in ambient air at 35 °C for 24 hours. 22 Patients with HCAI-MRSA and HCAI-VRE Yearly aggregated data on the number of nonduplicate clinical MRSA and VRE isolates causing HCAI were collected. HCAI was defined according to the National Nosocomial Infection Surveillance guidelines. 18 The incidence rates of HCAI-MRSA and HCAI-VRE were defined as the number of patients with HCAI-MRSA and HCAI-VRE, respectively, per 1000 inpatient-days. Antimicrobial agents and consumption Data on annual consumption [defined daily dose (DDD)/1000 inpatient-days] of glycopeptides (vancomycin and teicoplanin), linezolid, fusidic acid, tigecycline, and daptomycin from 2000 to 2010 were obtained from the pharmacy department of the hospital. Statistical analysis Linear regression analysis was used to analyze the trends in annual consumption of antimicrobial agents and the trends in incidence of HCAI-MRSA and HCAI-VRE over time. The Pearson product-moment correlation coefficient was used to determine the relationship between annual antibiotic consumption and trends in resistance. A p value < 0.05 was considered statistically significant.
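As a rough illustration of the analysis pipeline just described, the sketch below converts annual dispensing totals into DDD per 1000 inpatient-days, tests the linear trend over calendar years, and correlates consumption with HCAI incidence using the Pearson coefficient. All input figures are made up for demonstration and do not reproduce the study's data.

```python
import numpy as np
from scipy import stats

years = np.arange(2000, 2011)

# Hypothetical inputs: total DDDs dispensed per year, annual inpatient-days,
# and annual counts of patients with HCAI caused by the resistant organism.
teicoplanin_ddd = np.array([5000, 5600, 6100, 6900, 7400, 8200, 9000, 9800, 10500, 11300, 12000])
inpatient_days = np.array([624675, 630000, 640000, 655000, 670000, 690000,
                           705000, 720000, 735000, 750000, 763772])
hcai_mrsa_cases = np.array([376, 360, 340, 315, 300, 280, 260, 240, 220, 190, 161])

# Consumption density and infection density, both expressed per 1000 inpatient-days.
consumption = teicoplanin_ddd / inpatient_days * 1000   # DDD/1000 inpatient-days
incidence = hcai_mrsa_cases / inpatient_days * 1000     # cases/1000 inpatient-days

# Linear regression against calendar year tests the trend over time.
trend = stats.linregress(years, consumption)
print(f"consumption trend: slope {trend.slope:.3f} per year, p = {trend.pvalue:.3g}")

# Pearson product-moment correlation between annual consumption and annual incidence.
r, p = stats.pearsonr(consumption, incidence)
print(f"consumption vs incidence: r = {r:.2f}, p = {p:.3g}")
```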
Annual antibiotic consumption In general, the use of each antimicrobial agent varied over time (Table 1). Overall, the consumption of teicoplanin and linezolid significantly increased, whereas the consumption of vancomycin, glycopeptides (including vancomycin and teicoplanin), tigecycline, and fusidic acid remained stable during the study period. Trend in HCAI due to MRSA and VRE and the incidence of HCAI-MRSA and HCAI-VRE during the period 2000-2010 During the study period, a total of 4657 nonduplicate S. aureus isolates and 4219 enterococcal isolates causing HCAI were identified. Fig. 1A shows the trends in MRSA isolates and incidence of HCAI-MRSA. There was a significant decrease in the incidence of HCAI-MRSA over time. Fig. 1B shows the trends in VRE isolates and the incidence of HCAI-VRE. A significant rise in the incidence of HCAI-VRE over time was noted. Correlation between antibiotic consumption and incidence of HCAI-MRSA and HCAI-VRE Data on the correlation between the incidence of HCAI-MRSA, the incidence of HCAI-VRE, and the annual consumption of vancomycin, teicoplanin, linezolid, fusidic acid, tigecycline, and daptomycin are shown in Tables 2 and 3. A significant correlation was found between the increased use of linezolid and teicoplanin and the decreased prevalence of MRSA. By contrast, no significant correlation was found between the increased use of vancomycin, glycopeptides (vancomycin and teicoplanin), tigecycline, and fusidic acid and the incidence of HCAI-MRSA. There was, however, a positive correlation between the incidence of HCAI-VRE and the use of teicoplanin and tigecycline. Discussion This study evaluated the association between antibiotic consumption and the incidence of HCAI-MRSA and HCAI-VRE in a medical center in Taiwan during an 11-year period. We analyzed the disease density (per 1000 patient-days) due to resistant bacteria rather than the resistance rate. This parameter (density) is now considered more appropriate for presenting the real situation. In addition, we evaluated two newly available antibiotics (tigecycline and daptomycin) and their relationship with disease density. Using this new analysis and adding some new data, we had several significant novel findings. We found that the consumption of teicoplanin and linezolid significantly increased during the study period; however, the consumption of vancomycin, glycopeptides (both vancomycin and teicoplanin), and fusidic acid remained stable. It is speculated that physicians in the hospital are more likely to prescribe teicoplanin and linezolid than vancomycin for the treatment of MRSA infections because teicoplanin and linezolid are associated with fewer side effects, such as renal toxicity. In addition, the consumption of tigecycline increased from 1.8 DDD/1000 inpatient-days in 2007 to 6.0 DDD/1000 inpatient-days in 2010. However, although the consumption of both tigecycline and daptomycin increased over time, there was no significant association between the consumption of those antimicrobial agents and the incidence of nosocomial MRSA infections during the study period. Long-term studies are needed to evaluate the trend in HCAI-MRSA infections associated with the consumption of tigecycline and daptomycin.
Although the relationship between the incidence of MRSA and the use of β-lactam antibiotics and fluoroquinolones has been investigated, 11-14,24 few studies have focused on the association between consumption of glycopeptides, linezolid, and fusidic acid and the incidence of infection due to Gram-positive bacteria. 16,17 In this study, we found that the incidence of infections due to MRSA significantly decreased from 0.6023/1000 inpatient-days in 2000 to 0.2108/1000 inpatient-days in 2010. In addition, we noted a negative correlation between the use of teicoplanin and linezolid and the incidence of HCAI-MRSA. Although further research is needed to clarify this association, our findings suggest that teicoplanin and linezolid might exert a protective effect against the emergence of MRSA. We also found that there was no significant correlation between the consumption of vancomycin, glycopeptides including vancomycin and teicoplanin, and fusidic acid and the incidence of HCAI-MRSA, a finding that is consistent with that reported in our previous study. 17 In addition, the consumption of tigecycline and daptomycin was not associated with the incidence of HCAI-MRSA. That finding is most likely due to the short duration of tigecycline and daptomycin use in our hospital. A longer study period is needed to clarify the correlation between those two antimicrobial agents and the incidence of MRSA infections. Our data show that the incidence of HCAI-VRE significantly increased during the study period. We found that the increase in use of teicoplanin positively correlated with the increase in incidence of HCAI-VRE. This finding is consistent with that in our previous study. 17 By contrast, we found that there was a negative correlation between the use of teicoplanin and the incidence of HCAI-MRSA. Our data also show a positive correlation between the consumption of tigecycline and the incidence of HCAI-VRE. Although tigecycline was used for only 4 years in this study, the strong positive correlation between its use and the incidence of VRE infections implies that tigecycline should be administered with caution. This study has several limitations. Although we found a significant correlation between antibiotic use and the incidence of HCAI-MRSA, the etiology behind the emergence of drug-resistant bacteria in our hospital is complicated, and selective pressure from widespread use of antimicrobial agents might be only one of the causes. The prevalence of MRSA could have been affected by multiple factors such as infection-control measures and hand hygiene, or operational changes in the hospital. After the emergence of severe acute respiratory syndrome in 2003, the infection prevention and control program at the NTUH was upgraded to include hand hygiene, antibiotic-control policies, and an annual, intensive, project-based control program. In fact, it has been demonstrated that those policies are directly associated with the decrease in rates of HCAIs and bloodstream infections at the NTUH. 25 However, those effects were not measured in this study. Whether those policies had an impact on MRSA trends remains unknown. In addition, because this was an epidemiological surveillance study, we did not analyze the impact that the duration of exposure to antibiotics or the clonal spread of resistant bacteria had on the trend in incidence of nosocomial MRSA infection.
Furthermore, the overall DDD of a given antibiotic is a notoriously problematic denominator, as there will be changes in low-dose usage in specific groups of patients (e.g., children and patients with renal insufficiency). Therefore, an analysis using the number of prescriptions for those patient groups would be more appropriate. However, the impact should be limited in this study because those specific patient groups comprised only a small fraction of the study patients, and the effect would be diluted and equal in each year. In conclusion, in this 11-year study in a single medical institution, we found that the use of teicoplanin and linezolid significantly increased. Furthermore, we found a negative correlation between the usage of individual antibiotics, such as teicoplanin and linezolid, and the incidence of HCAI-MRSA. However, we also found a positive correlation between consumption of teicoplanin and tigecycline and the incidence of HCAI-VRE. Therefore, strict implementation of infection-control policies, including administration of appropriate antimicrobial agents, may help decrease the presence of MRSA in hospitals.
2018-04-03T03:15:20.301Z
2013-12-31T00:00:00.000
{ "year": 2015, "sha1": "f871ba9e0918e12b4b38a18acae9a94b92487248", "oa_license": "elsevier-specific: oa user license", "oa_url": "https://doi.org/10.1016/j.jmii.2013.10.008", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "55c88853fa3b36291ca3108f7137d1740de8758f", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
229494857
pes2o/s2orc
v3-fos-license
Feasibility of a Combined Mobile-Health Electrocardiographic and Rapid Diagnostic Test Screening for Chagas-Related Cardiac Alterations Background: Chronic Chagas cardiomyopathy (CChC) is the most common cause of death related to Chagas disease (CD). The aim of this study was to assess the feasibility of a combined rapid diagnostic test (RDT) and electrocardiographic (ECG) screening in a remote rural village of the Bolivian Chaco, with a high prevalence of CChC. Methods: Consecutive healthy volunteers > 15 years were enrolled in the community of Palmarito (municipality of Gutierrez, Santa Cruz Department, Bolivia) in February 2019. All patients performed an RDT with Chagas Stat-Pak® (CSP, Chembio Diagnostic System, Medford, NY, USA) and an ECG by D-Heart® technology, a low-cost, user-friendly smartphone-based 8-lead Bluetooth ECG. RDTs were read locally while ECGs were sent to a cardiology clinic which transmitted reports within 24 h from recording. Results: Among 140 people (54 men, median age 38(interquartile range 23–54) years), 98 (70%) were positive for Trypanosoma cruzi infection, with a linear, age-dependent, increasing trend (p < 0.001). Twenty-five (18%) individuals showed ECG abnormalities compatible with CD. Prevalence of ECG abnormalities was higher in infected individuals and was associated with higher systolic blood pressure and smoking. Following screening, 22 (16%) individuals underwent clinical evaluation and chest X-ray and two were referred for further evaluation. At multivariate analysis, positive CSP results (OR = 4.75, 95%CI 1.08–20.96, p = 0.039) and smoking (OR = 4.20, 95%CI 1.18–14.92, p = 0.027) were independent predictors of ECG abnormalities. Overall cost for screening implementation was <10 $. Conclusions: Combined mobile-Health and RDTs was a reliable and effective low-cost strategy to identify patients at high risk of disease needing cardiologic assessment suggesting potential future applications. Introduction Chagas disease (CD), caused by infection with the protozoan parasite Trypanosoma cruzi, is the neglected tropical disease exerting the highest burden in most Latin American countries, with 8 million persons chronically infected and approximately 200,000 new cases each year [1]. It is transmitted to humans through the feces of infected hematophagous triatomine insects in areas in which the disease is endemic and, occasionally, by nonvectorial mechanisms such as blood transfusion, organ transplants, or vertically from mother-to-child [2]. Three clinical stages of CD have been described: the acute phase, typically asymptomatic and short-lasting, followed by a chronic long-acting phase that may span for decades without showing any symptoms associated to the infection (indeterminate stage), and the determinate phase. Approximately 40% of chronically infected individuals progress to either advanced cardiac and/or digestive tract forms characterized by high morbidity and mortality, if left untreated [2]. Despite progress in vector control [3,4], a timely and accurate diagnosis remains a major obstacle to start treatment. Still today, early accessing to presently available drugs is a major issue. It is estimated that current chemotherapies only reach 1% of infected individuals [1,5]. Communities with intense transmission remain, especially in the Bolivian Gran Chaco (estimated infection rate at 4% per year) [3,4]. Cardiac involvement, i.e., Chagas Cardiomyopathy (CChC), is the main cause of death [2,5,6]. 
The early signs of Chagas cardiomyopathy are typically conduction system abnormalities, most commonly right bundle branch block (RBBB), often progressing to bifascicular blocks. Later manifestations include left ventricular systolic dysfunction, apical aneurysms, high-degree atrioventricular block, and sustained and non-sustained ventricular tachycardia [6][7][8]. Of note, sudden cardiac death may occur at any moment, including in early phases. Therefore, early recognition of cardiac involvement through cost-effective screening efforts becomes a priority in areas with a high endemic burden. As patients with CD, compared to non-CD subjects, have almost a threefold higher prevalence of electrocardiogram (ECG) alterations, ECG coupled with rapid diagnostic test (RDT) screening can be a reasonable first-line approach. However, limited resources and a lack of trained personnel and infrastructure in highly endemic areas challenge the implementation of such programs. Smartphone technology, with its computational power, applied to telemedicine may overcome several of these limitations by providing easy and affordable access to accurate diagnostic methods [9][10][11]. The aim of this study was to understand the potential impact and sustainability of an mHealth ECG screening program, coupled with an RDT, in remote rural villages of the Bolivian Chaco with the help of a validated smartphone-based ECG (D-Heart®). The present device allows low-cost ECG screening campaigns by community health workers and offers the possibility of remote ECG interpretation by expert physicians. Study Population and Settings The study was carried out in Palmarito Community (municipality of Gutierrez, Santa Cruz Department; 19°49' S, 63°48' W, Bolivian Chaco Region) in February 2019. In this region, the estimated seroprevalence of Chagas disease is 50% in the general population, but can be as high as 70% in individuals aged >15 years [3,4]. The nearest secondary-level hospital is located 80 km away (Hospital Municipal de Camiri). All individuals ≥ 15 years old were invited to participate in the study. Overall, 653 inhabitants live in Palmarito, of whom 402 are ≥ 15 years old. A representative sample of 140 healthy volunteers was consecutively enrolled, taking into account the age group distribution. Demographic data were recorded, and a brief clinical history, focused on common cardiovascular risk factors and manifestations, was obtained through a standardized questionnaire. Height and weight were recorded and body mass index (BMI) calculated. All participants underwent blood pressure (BP) measurement, by trained personnel, before performing electrocardiographic and serological screening; those with elevated systolic (SBP ≥ 140 mmHg) and/or diastolic blood pressure (DBP ≥ 90 mmHg) had a second measurement. ECG Screening and Referral Path For each participant, an ECG was recorded using the D-Heart® electrocardiograph. D-Heart® is a CE-marked, multiple-lead, smartphone-based ECG device (peripheral leads DI, DII, DIII, aVR, aVL, and aVF; precordial leads V2 and V5) specifically designed for ECG screening in low-income settings by non-medical personnel [10,11]. The device is manufactured by the social-vocation start-up D-Heart, an Italian-based company. The device weighs less than 194 g and is extremely portable. If operated by non-health professionals it can register an 8-lead ECG, whereas in the hands of health professionals a standard 12-lead ECG can be acquired.
The Bluetooth Low Energy module streams the ECG data to the smartphone within a medically certified app that enables in loco reading of the tracings or Telecardiology Reporting via a web-based Telecardiology Platform. The actual components of the device offer a manufacturing price of $90 per unit. ECG tracings were acquired with the D-Heart smartphone ECG device during the on-site screening activities and were sent daily to the Cardiomyopathy Unit, Careggi Hospital, Florence, Italy, where they were read with the D-Heart Telecardiology Platform within 24 h by two staff physicians, N.M. and C.F., blinded to subjects' T. cruzi infection status. Each ECG was recorded with a dedicated smartphone protected by a code known by the Community Health Worker. An abnormal ECG suggestive of CChC was defined as an ECG with (i) ventricular conduction defects: complete right BBB (RBBB), left anterior fascicular block, left posterior fascicular block, left bundle branch block, or bifascicular block; (ii) any degree of atrioventricular block; (iii) rhythm disturbances: atrial fibrillation/flutter, junctional rhythm, sinus bradycardia with heart rate < 50 beats/min, or complex ventricular ectopy; (iv) other: pathologic Q waves, fragmented QRS, low QRS voltage [6,7,12]. Other findings, such as incomplete RBBB, atrial ectopy, nonspecific ST-T wave changes, and right or left ventricular hypertrophy, were considered nonspecific for CChC and were not included in our definition of CChC-related ECG abnormality. Reports were sent back daily to the Community Health Center in Palmarito. T. cruzi Infection Screening Chagas Stat-Pak® Assay After ECG testing, all patients performed, during the on-site screening activities, a Chagas Stat-Pak® (CSP) test (Chembio Diagnostic System, Medford, NY, USA), an immunochromatographic, qualitative rapid diagnostic test (RDT) that uses a combination of antigens for the detection of IgG antibodies to T. cruzi and has been in use as the standard tool for Chagas disease screening by the Chagas National Program since 2005. Blood samples were obtained by finger-prick and the result read after 15 min, according to the manufacturer's instructions. During previous studies, carried out in the same highly endemic area of the Bolivian Chaco, CSP yielded excellent performance in comparison with conventional serology, with sensitivity, specificity, positive predictive value, and negative predictive value up to 100%, 99.3%, 99.5%, and 100%, respectively [13,14]. Sustainability and Cost of an mHealth Screening Campaign Financial feasibility models were built to project the overall cost of our screening campaign. Two analyses were performed: the first model included start-up and operative costs related to human resources, consumables (RDT kits, electrodes, disinfection kits), non-consumable devices (D-Heart®, a compatible smartphone, a blood pressure cuff, and an internet connection), and logistics; the second comprised only consumables and human resources for screening continuation. Statistical Analysis Statistical analysis of the data was performed with STATA 11.0 (StataCorp, College Station, TX, USA). Frequencies and percentages with 95% confidence intervals (CI) were calculated for categorical variables, and means, medians, and interquartile ranges (IQR) for continuous variables. Student's t-test or the Mann-Whitney test was used to compare continuous variables.
The chi-square test, or Fisher's exact test when appropriate, was used to investigate the association of a positive CSP test with ECG abnormalities, individual risk factors, and demographic data. Multivariate logistic regression was performed including age, sex, and all the variables significantly associated with ECG abnormalities at univariate analysis. Results were considered significant when the p-value was ≤ 0.05. Ethics Statement The study was realized in agreement with the Ministry of Health of the Plurinational State of Bolivia (Convenio Ministerio de Salud y Deportes, Estado Plurinacional de Bolivia/Cátedra de Enfermedades Infecciosas, Universidad de Florencia, Italia), the Servicio Departamental de Salud (SEDES) of Santa Cruz, and with the support of the Guaraní political organization (Asamblea del Pueblo Guaraní). The study was approved by a local Ethics Committee, and written informed consent was obtained from each enrolled participant (or from a parent or a legal guardian, if minor). Baseline Characteristics Of the 140 subjects included in the study, 54 (39%) were men, with a median age of 38 (interquartile range 23-54), ranging from 15 to 85 years. Twenty-four (17%) had a family history of cardiovascular disease and 11 (8%) a family history of sudden unexpected death. The cardiovascular risk profile was generally low, with only five (4%) individuals affected by type 2 diabetes mellitus and two (1%) with known dyslipidemia; median BMI was 24 kg/m2 (22-27). Palpitations were reported by 39 (28%) patients, whereas chest pain was the most common complaint, present in 82 (59%) patients. A history of loss of consciousness was present in 14% of patients (Table 1). No one had been screened with an ECG or for T. cruzi before the study enrolment. Outcome of Combined T. cruzi and ECG Screening Community screening was carried out in 6 days. RDTs were read locally, and results recorded, while ECGs were sent to the Florence Cardiomyopathy Unit and analyzed within 24 h (average response time: 9 ± 1 h). No ECG recording was lost, and all patients with combined positive ECG and RDT results were actively referred for further evaluation. Medical Referral, Feasibility of Current Screening Strategy, and Cost Analysis Twenty-two patients with a positive CSP test and possible CD-related ECG abnormalities were recalled from Palmarito Community and referred to the second-level Camiri Hospital, where physical examination and chest X-ray were performed. All 22 patients had the CD diagnosis confirmed by Chagatest Lisado ELISA (Wiener Laboratories, Rosario, Argentina), performed at the "Elvira Wunderlich" Health Center, Santa Cruz, Bolivia. Of these, two patients had cardiomegaly on the chest X-ray and were referred for further third-level examinations. The first person was a 45-year-old man, an active smoker with a history of chest pain; his ECG showed sinus bradycardia with an RBBB. The second person was a 59-year-old woman with a history of palpitations, leg edema, chest pain, and loss of consciousness; at ECG, an RBBB and low voltages were present (Figure 1A-C). People with positive CSP, but normal ECG findings, were referred to the Chagas National Program for serological confirmation and possible benznidazole treatment, and managed according to their guidelines [15]. Two models for cost-effectiveness analysis were developed. The first one, comprising start-up and operative costs, is summarized in Table 4.
For a 6-day screening for a community of 150 inhabitants, the overall start-up amount was projected at 4.82 $/patient, rising to 8.23 $/patient when operative costs (i.e., an on-site nurse and healthcare assistant with a remote physician on call) were included. For the second model, intended to predict the cost of screening continuation, an average of 5.13 $/patient was estimated. Discussion In this study, we evaluated the feasibility of a combined mobile-health electrocardiographic and rapid diagnostic test screening for Chagas-related cardiac alterations in a low-income setting hyperendemic for CD. Subjects screened with ECG were also tested for the presence of T. cruzi antibodies by an easy-to-use RDT. In the surveyed community, seroprevalence for T. cruzi was 70%, and its distribution by age class was consistent with previously reported data from this area [3,4]. More than one in five patients with CSP-positive serology showed ECG abnormalities compatible with CChC (n = 22/98, 22%), in line with the estimate that 20-30% of infected individuals eventually develop heart disease. The most common findings were ventricular conduction defects, including RBBB and left anterior fascicular block. Moreover, we observed a number of ECGs with fragmented QRS, considered a predictor of arrhythmic events in patients with ischemic and non-ischemic cardiomyopathy and previously reported to be highly prevalent among patients with advanced CChC [6,7]. Other abnormalities included AVB and rhythm disturbances, which are typical CChC manifestations, and low QRS voltage, which has been previously identified as a strong predictor of the risk of death from cardiac causes in CD patients [12]. Notably, 2 of the 22 individuals with positive ECG and CSP were referred for further medical evaluation: in both cases, the ECG showed at least two alterations and the chest X-ray was abnormal. Multiple ECG abnormalities have already been described as highly prevalent in patients with signs of dilated cardiomyopathy at echocardiogram [12]. Overall, our observations strongly emphasize the potential application of mHealth technology and telemedicine, together with RDTs, to improve access to diagnosis and treatment for CD and CChC in remote areas of the rural Bolivian Chaco. In fact, although a pilot study, simultaneous screening by CSP and the D-Heart electrocardiograph proved feasible, with a cost per patient of <10 $ to start up.
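The per-patient cost projections quoted above lend themselves to a simple spreadsheet-style model in which campaign-level fixed costs are spread over the number of people screened and consumables are counted per subject. The sketch below shows one way such a model could be structured; the individual cost figures are placeholders, not the study's actual budget.

```python
def cost_per_patient(n_patients, consumables_per_patient, staff_total, devices_total,
                     logistics_total=0.0, include_devices=True):
    """Projected screening cost per patient (USD).

    consumables_per_patient : RDT kit, electrodes, disinfection material per subject
    staff_total             : nurse / health-care assistant / on-call physician for the campaign
    devices_total           : ECG device, smartphone, BP cuff, connectivity (start-up only)
    """
    fixed = staff_total + logistics_total + (devices_total if include_devices else 0.0)
    return consumables_per_patient + fixed / n_patients

# Hypothetical figures for a 6-day campaign in a community of 150 inhabitants.
start_up = cost_per_patient(n_patients=150, consumables_per_patient=2.5,
                            staff_total=450.0, devices_total=250.0, logistics_total=100.0)
continuation = cost_per_patient(n_patients=150, consumables_per_patient=2.5,
                                staff_total=400.0, devices_total=0.0, include_devices=False)
print(f"start-up model: {start_up:.2f} $/patient; continuation model: {continuation:.2f} $/patient")
```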
The combined, on-field use of RDT and ECG in large-scale screening campaigns could play a pivotal role within a more comprehensive strategy against CD. Early diagnosis of CD is of paramount importance to start treatment before symptoms progress. In remote regions, easy-to-use RDTs, which use whole blood from digital puncture as sample, would ease access to CD diagnosis, allowing timely treatment. Recently, the use of combined RDTs was shown to be a reliable and accurate alternative to conventional serological assays in order to achieve a conclusive CD diagnosis, in settings where equipped labs and trained personnel are not available [13]. The role of antitrypanosomal treatment in adult patients with established CChC remains controversial. So far, the only published placebo-controlled trial in adults with advanced CChC concluded that benznidazole treatment did not affect the clinical progression of Chagas cardiomyopathy, but important methodological bias has been raised [16,17]. Ideally, etiological treatment should be offered timely in adult patients with chronic Chagas disease before established cardiac damage requires more aggressive management [18,19]. Furthermore, recently published studies support benznidazole use in standard treatment in addition to new alternative regimens for short-course and combination treatments [20]. Screening campaigns that result in early therapy inception are, however, successful as long as effective vector control activities can be achieved, and intensive care be delivered to individuals in need. As a case in point, in 2013, blanket insecticide application was shown to decrease the force of infection in the Bolivian Chaco, though active transmission remained [3,21,22]. Moreover, several pharmacological and non-pharmacological interventions are currently available and have been increasingly used in CChC patients with the intention of preventing or delaying complications [23]. As part of the study protocol, ECG recordings were sent to Florence for analysis. It is tempting to hypothesize that, should combined (ECG and RDTs) screening programs be further implemented, ECGs could be seamlessly transmitted to local cardiologists or community physicians with the intention to monitor individuals through time and create electrocardiographic and serologic 'profiles' to detect conversion (Figure 2A,B). Finally, pocket echocardiography integrated mHealth device assessments are now under scrutiny for potential applications in resource-limited settings. In a randomized trial enrolling 253 patients at a tertiary care center in Bangalore, India, patients who were randomly allocated to a m-health clinic for valvular and structural heart disease, as opposed to standard of care, were associated with shorter time to definitive therapy [24]. In this scenario, adding such instruments to CD screening would allow to reduce lag from infection to diagnosis, increase access to therapy and improve outcomes in patients with signs compatible with early cardiomyopathy, thus limiting disease progression and morbidity. Ultimately, our effort may focus on bringing high-tech instruments at low-cost for effective remote screening therefore allowing for appropriate and timely diagnosis. The study has limitations. The healthy volunteers were not randomly selected, but consecutively enrolled in the Health Centre. Severely ill community residents may have been unable to report to the health Centre for evaluation. 
Moreover, screening for CD was made with a single RDT, namely the CSP assay, which has been in use as the standard tool for Chagas disease screening by the Chagas National Program since 2005 and showed excellent performance in the same geographical area [13,14]. Only people with potential CD-related ECG abnormalities (n = 22) were referred to a secondary-level hospital for further investigations, including serological confirmation by ELISA testing. People with positive CSP, but normal ECG findings, were referred to the Chagas National Program for serological confirmation and were offered benznidazole treatment, but such data were not collected, being beyond the objective of the study. Figure 2. (A) Referral path used in the present study: patients are screened on site in a community of the Bolivian Chaco; ECGs are sent to the Telecardiology Service at Careggi University Hospital, Italy, and reports are returned on site; in case of need, patients with a pathologic ECG are recalled in the community and referred to the nearest second-level hospital in Camiri (distance 80 km), and patients requiring a higher level of care are further referred to a third-level hospital in Santa Cruz (distance 250 km). (B) Proposed implementation strategy: patients are screened on site and ECGs are sent to a local telecardiology service; in case of need, patients with a pathologic ECG are recalled in the community and referred to a second- or third-level hospital, according to their condition. Conclusions Early diagnosis of CD and CChC is of paramount importance to provide access to targeted therapy (currently reaching <1% of all seropositive subjects) and maximize treatment benefits. Combined mHealth and RDTs may prove reliable and effective low-cost strategies, especially in rural, highly endemic environments like the Bolivian Chaco, to identify patients at high risk of disease and in need of further cardiologic assessment. Further studies are clearly needed to assess whether these theoretical advantages are supported by patient-centered outcomes and a positive cost-benefit analysis. Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by a local Ethics Committee (Colegio Médico de Santa Cruz, TDEM CITE No. 008/2018). Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The data presented in this study are available on request from the corresponding author.
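The multivariable result reported above (odds ratios with 95% confidence intervals for positive CSP and smoking as independent predictors of ECG abnormalities) is the kind of output a standard logistic regression produces. The sketch below, using a synthetic toy dataset, only illustrates how such odds ratios and intervals are obtained; it does not reproduce the study's model or data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic illustration: 140 subjects with binary predictors and a binary ECG outcome.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "csp_positive": rng.integers(0, 2, 140),
    "smoking": rng.integers(0, 2, 140),
    "age": rng.integers(15, 86, 140),
})
# Outcome simulated so that CSP positivity and smoking raise the odds of an abnormal ECG.
linear_predictor = -2.5 + 1.5 * df["csp_positive"] + 1.4 * df["smoking"] + 0.01 * df["age"]
df["ecg_abnormal"] = (rng.random(140) < 1 / (1 + np.exp(-linear_predictor))).astype(int)

# Multivariable logistic regression; exponentiated coefficients are odds ratios.
X = sm.add_constant(df[["csp_positive", "smoking", "age"]])
fit = sm.Logit(df["ecg_abnormal"], X).fit(disp=0)
print(np.exp(fit.params))        # odds ratios
print(np.exp(fit.conf_int()))    # 95% confidence intervals on the OR scale
```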
2020-12-03T09:05:25.844Z
2020-11-01T00:00:00.000
{ "year": 2021, "sha1": "0be831ecce006f98780a2482a1dcde8ce3fa4189", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-2607/9/9/1889/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "dc2f49a4494ad79a2185c9d46b8ac9d012fcddd3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
12241260
pes2o/s2orc
v3-fos-license
Clinical evaluation of commercial nucleic acid amplification tests in patients with suspected sepsis Background Sepsis is a serious medical condition requiring timely administered, appropriate antibiotic therapy. Blood culture is regarded as the gold standard for aetiological diagnosis of sepsis, but it suffers from low sensitivity and long turnaround time. Thus, nucleic acid amplification tests (NAATs) have emerged to shorten the time to identification of causative microbes. The aim of the present study was to evaluate the clinical utility in everyday practice in the emergency department of two commercial NAATs in patients suspected with sepsis. Methods During a six-week period, blood samples were collected consecutively from all adult patients admitted to the general emergency department for suspicion of a community-onset sepsis and treated with intravenous antibiotics. Along with conventional blood cultures, multiplex PCR (Magicplex™) was performed on whole blood specimens whereas portions from blood culture bottles were used for analysis by microarray-based assay (Prove-it™). The aetiological significance of identified organisms was determined by two infectious disease physicians based on clinical presentation and expected pathogenicity. Results Among 382 episodes of suspected sepsis, clinically relevant microbes were detected by blood culture in 42 episodes (11%), by multiplex PCR in 37 episodes (9.7%), and by microarray in 32 episodes (8.4%). Although moderate agreement with blood culture (kappa 0.50), the multiplex PCR added diagnostic value by timely detection of 15 clinically relevant findings in blood culture-negative specimens. Results of the microarray corresponded very well to those of blood culture (kappa 0.90), but were available just marginally prior to blood culture results. Conclusions The use of NAATs on whole blood specimens in adjunct to current culture-based methods provides a clinical add-on value by allowing for detection of organisms missed by blood culture. However, the aetiological significance of findings detected by NAATs should be interpreted with caution as the high analytical sensitivity may add findings that do not necessarily corroborate with the clinical diagnosis. Electronic supplementary material The online version of this article (doi:10.1186/s12879-015-0938-4) contains supplementary material, which is available to authorized users. Background Sepsis is a major cause of morbidity and mortality in all high income as well as middle and low income countries [1][2][3][4][5]. About 20 million cases of sepsis are estimated to occur each year around the world accounting for up to 135,000 deaths in Europe and 215,000 in the United States [1,4]. Recent studies from different countries show that the incidence of sepsis as well as the number of sepsis-related deaths is continuously increasing [3,[6][7][8][9][10][11]. At present, blood culture is considered the gold standard for aetiological diagnosis in sepsis. Although blood culture is associated with high specificity in species identification, it is limited by a substantial time delay and low sensitivity, especially for slow-growing and fastidious organisms [12,13]. The aim of this population-based study was therefore to evaluate the clinical utility of two commercial NAATs, Magicplex™ Sepsis Real-time Test and Prove-it™ Sepsis, in patients with suspected sepsis. 
Magicplex™ Sepsis Real-time Test is a PCR-based test screening for 73 species of Gram positive bacteria, twelve species of Gram negative, and six species of fungi and three resistance genes (mecA, vanA, and vanB) directly in whole blood samples. The microarray-based assay called Prove-it™ Sepsis combines genome amplification by conventional PCR and microarray technology for simultaneous identification of over 60 bacterial pathogens, and 13 fungal pathogens, and three resistance genes (mecA, vanA, and vanB) in positive blood cultures. Patients and specimens From September 2011 to June 2012, a prospective observational study of the incidence of community-onset severe sepsis and septic shock in adults was conducted at Skaraborg Hospital, in the western region of Sweden. During a limited period of the prospective study, from February to April 2012, NAATs were performed as part of routine patient care in addition to blood culture in all patients >18 years admitted to the emergency department for suspicion of community-onset sepsis. All patients received oral and written information about the study. In patients suspected to have sepsis or severe sepsis, it is mandatory to rapidly make appropriate sampling for microbiological diagnosis before antibiotic treatment is instituted. This microbiological testing, using appropriate and approved tests, needed no patient consent according to the Swedish Law. The diagnostic tests were made upon arrival to the emergency department and no additional sampling was made. Some patients were too sick and/or died within 24 hours and could never give an informed consent. Those patients who could not give an informed consent, were evaluated anonymously, only for diagnosis and results of commercially available tests used for diagnostic purposes of the suspected sepsis. The Regional Ethics Committee in Gothenburg (no. 376-11) approved the study and the consent process. Blood culture In this study, an episode was defined as each separate case of clinically suspected community-onset sepsis treated with intravenous antibiotics. Only one episode per patient and admission were included in the final data analysis to avoid bias. For each episode, two sets of blood cultures from two different puncture sites were collected before administration of the first dose of intravenous antibiotics. However, for a few episodes, due to feasibility, only one set of blood culture was collected. Blood cultures were conducted in BacT/ALERT® FN (bioMérieux, France). Typing and definite species identification with MALDI-TOF MS was performed on a Microflex LT mass spectrometer (Bruker Daltonics, United States) with BioTyper software v2.0 using default parameter settings. Spectral scores above 2.0 were used as cut-off for correct identification. Antibiotic susceptibility was determined by accredited laboratory methods according to EUCAST guidelines (www.eucast.org). Multiplex PCR assay The test procedure for Magicplex™ was performed on 1 mL fresh whole blood (EDTA) collected before administration of antibiotics and not older than 24 hours. DNA extraction from whole blood was performed using the SelectNA Blood Pathogen Kit (Molzym, Germany). The first steps of the extraction, lysis of human cells and digestion of human DNA, were manually performed according to the manufacturer's instructions. Pure bacterial/fungal DNA was then extracted automatically from the prelysed samples on the instrument Nordiag Arrow/Liaison IXT (DiaSorin, Italy). 
The first PCR, producing amplicon banks, was performed using the kit Magicplex Sepsis Amplification on a GeneAmp PCR System 9700 (Applied Biosystems, United States). Real-time PCR was then performed with the Magicplex Screening Real-time Detection Kit on a CFX96 (Bio-Rad, United States). The screening revealed the presence of Gram positive or Gram negative bacteria, drug resistance genes or fungi. Species identification was performed with the Magicplex ID 1-ID 9 Real-time Detection Kit on samples that became positive in the screening step. All PCR-reactions were set up in a UVbox according to the recommendations from the manufacturer. The dedicated software Seegene viewer was used to interpret the analysis data, where the result from every sample is presented in a table as Detected or Not detected. A whole process control was included in the assay. If this was valid, the assay result could be interpreted. Microarray-based assay Prove-it™ was performed on aliquots derived from blood culture bottles removed from the automated blood culture system. After removal, the bottles were stored in 4°C between 1-2 days until DNA extraction was performed. For each episode, aliquots were taken from that pair of blood culture bottles, one aerobic and one anaerobic bottle, derived from the first puncture site. A volume of 200 μL was withdrawn from each bottle. Regardless if the blood culture bottles were found positive or negative, sample volumes from each pair of bottles was pooled together prior to DNA extraction. DNA was extracted using 400 μL of sample volume and eluted in a final volume of 200 μL using a MagNA Pure Nucleic Acid Isolation Kit I on a MagNAPure Compact System (Roche Applied Science, Switzerland). The procedure for the microarray assay was performed according to protocols provided by the manufacturer. Briefly, both a bacterial and a fungal PCR master mix were prepared. The PCR reactions were set up in a laminar airflow bench where no amplified PCR products were handled. After PCR amplification, the bacterial PCR product and the fungal PCR product derived from the same episode were added to the same well of the microarray. Subsequently, hybridization and staining procedures were performed. For detection and analysis of the samples, the dedicated software Prove-it™ Advisor was used. Based on the outcomes of several built-in controls, the software evaluated whether the performance of the assay is acceptable. All analysis parameters were adjusted automatically without any manual involvement. Data interpretation and statistical analysis All records of the patients and microbial findings were assessed by two senior physicians in infectious diseases (LL and GJ). The clinical judgment was based on the patient history with special reference to sudden onset of fever, rigors, gastrointestinal symptoms, tachypnea, mental confusion, pain out of proportion, and muscle weakness. Physical examination was done with attention to blood pressure <90 mm Hg, respiratory rate >20/min, and oxygen saturation <90%. Standard biomarkers included serum lactate, leucocyte cell count, neutrophillymphocyte count ratio, and C-reactive protein. Judgment was also based on imaging and microbiological testing of suspected infectious foci including culture, PCR, and antigen test, apart from the NAAT assays described in detail above. It remains challenging to determine the aetiological significance of organisms detected in blood. 
This applies especially to NAATs, but also to conventional blood culture although to a lesser degree. In this study, decisions of aetiological significance of detected organisms were made based on expected pathogenicity (Table 1) and clinical presentation as previously described [25,26]. Detected organisms were interpreted as clinically relevant, of unknown significance or contaminant according to the algorithm described in Figure 1. Organisms detected only by a NAAT were also regarded as clinically relevant, but with more stringent criteria than positive blood culture results. Microorganisms of unknown aetiological significance are findings considered not consistent with the clinical diagnosis having no implications for the medical management of the patients. However, links between the microorganism and the medical condition of the patient cannot be excluded. McNemar's test was conducted for comparing proportions in paired samples, whereas z test was performed for comparing proportions in independent samples. A two-sided p value of <0.05 was considered statistically significant. Concordances between blood culture and NAATs were tested with kappa statistics for inter-rater agreement; cut-off values for the kappa value have been described elsewhere; 0.41-0.60 are considered of moderate agreement, and those of 0.81-1.00 of very good agreement [27]. Statistical analyses were performed using Matlab v. 7.10 (The Mathworks, Inc., United States). Results A total of 375 patients entered this study. Eight patients were admitted twice during the study period, resulting in a total of 383 episodes analysed with blood culture and both NAATs. For the multiplex PCR, the whole process control was invalid or partially invalid in 45 (12%) of the 383 blood samples initially tested. These samples were retested on a ten-fold dilution of the extracted DNA. After the re-run, one sample did still not give a final result and was excluded from further analysis, making a total of 382 episodes. At least one microorganism was detected in 138 episodes by either method. Microorganisms defined as commensals (Table 1) were considered to be contaminants and excluded from analysis if only found on a single occasion regardless of detection method. In total, 89 clinically relevant findings or findings of unknown significance were identified in 77 episodes (20%). A detailed description including clinical comments for these findings can be found in Additional file 1. In eight episodes, a clinically relevant finding was detected by blood culture in a bottle not included in that pair of bottles from which aliquots for the microarray analysis was derived. Thus, these eight episodes were excluded from the analysis of the microarray results. For blood culture, the result of species identification was usually available 6-7 hours after a bottle has flagged positive ( Figure 2). The turnaround time for the multiplex PCR assay was estimated to around seven hours. For the microarray, the turnaround time from positive blood culture bottle to species identification was approximately four hours. The diagnostic performance for blood culture and the NAATs are shown in Table 2. Multiplex PCR assay The rate of episode positivity for multiplex PCR was 14% (53/382), whereas the rate of episode positivity for blood culture was 11% (42/382, p = 0.61). Cohen's kappa coefficient for agreement between the results of multiplex PCR and blood culture was 0.52 (95% CI 0.37-0.66). 
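To make the agreement statistics concrete, the following Python sketch computes Cohen's kappa and an exact McNemar test from a 2x2 episode-level table. The counts, the function names, and the use of Python (rather than the Matlab routines mentioned above) are illustrative assumptions, not the study's actual code or data; the placeholder counts are merely chosen so the kappa lands near the reported 0.5.

from math import comb

def cohens_kappa(a, b, c, d):
    # a = both positive, b = culture+/PCR-, c = culture-/PCR+, d = both negative
    n = a + b + c + d
    po = (a + d) / n                                      # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # agreement expected by chance
    return (po - pe) / (1 - pe)

def mcnemar_exact_p(b, c):
    # exact two-sided McNemar test on the discordant pairs b and c
    n, k = b + c, min(b, c)
    p = sum(comb(n, i) for i in range(k + 1)) / 2**n
    return min(1.0, 2 * p)

a, b, c, d = 22, 20, 15, 325   # placeholder episode-level 2x2 table, not the study data
print(cohens_kappa(a, b, c, d), mcnemar_exact_p(b, c))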
In total, 37 clinically relevant findings in 37 episodes and 23 findings of unknown significance in 21 episodes were detected by multiplex PCR. For blood culture, 44 findings in 42 episodes were clinically relevant and one finding was of unknown significance. The multiplex PCR and blood culture results agreed for 22 clinically relevant findings (Figure 3A). Multiplex PCR missed 22 clinically relevant findings in 20 episodes and one finding of unknown significance. On the other hand, 38 findings identified by multiplex PCR in 35 episodes were not detected by blood culture. Fifteen of these 38 findings were considered clinically relevant; five of these 15 findings were considered proven aetiology of the infections since the same bacteria were found by culture from the site of infection (Table 3). The remaining ten findings made by multiplex PCR were negative in blood culture and other diagnostic tests, but consistent with the clinical diagnosis and therefore regarded as clinically relevant (Table 3). Twenty-three of the 38 findings detected by multiplex PCR but not by blood culture were regarded to be of unknown significance. Ten were findings of Gram positive bacteria, eleven were findings of Gram negative bacteria and two were findings of Candida species (Table 4). Figure 1 Algorithm for deciding on clinical relevance of microbial findings in blood by blood culture [25] or NAAT. Other cultures were made from clinically relevant sites before administration of intravenous antibiotics. On suspicion of pneumonia or sepsis with unknown focus, a pulmonary X-ray was performed. Ultrasound, computed tomography scan, and magnetic resonance imaging were used when deemed necessary for diagnosing the site of infection. BC blood culture; NAAT nucleic acid amplification test. In total, 32 clinically relevant findings in 31 episodes and five findings of unknown significance in five episodes were detected by microarray. The results between microarray and blood culture showed concordance for 30 clinically relevant findings (Figure 3B). The microarray failed to detect four clinically relevant findings detected by blood culture. However, two of these findings belonged to species not covered by the microarray panel (Streptococcus anginosus and Streptococcus group G). The microarray identified one clinically relevant finding and five findings of unknown significance not detected by either blood culture or multiplex PCR (Table 5). For each method, all episodes were classified according to the following criteria: i) true positive - episode positive for at least one clinically relevant finding; ii) false positive - episode positive only for finding(s) of unknown significance; iii) false negative - episode negative with the method, but positive for at least one clinically relevant finding detected by another method; iv) true negative - episode negative with the method and for which no clinically relevant findings were detected by any method. Discussion The focus of this study is on the clinical utility of two commercial NAATs in the aetiological diagnosis of patients with suspected sepsis. We found that the multiplex PCR assay added diagnostic value by timely detection of clinically relevant microbes missed by blood culture. The results of the microarray-based assay corresponded well to those of blood culture, but its clinical utility is reduced by the prerequisite of time-consuming cultivation.
Currently, blood culture is considered the gold standard for aetiological diagnosis of sepsis although it suffers from having a low sensitivity. It can only detect viable microorganisms, and the medium is not optimized for culturing fungi and fastidious bacteria. Thus, blood culture can be considered to be a poor gold standard, which implies difficulties in evaluating novel sepsis tests since no other laboratory reference standard for sepsis diagnosis exists. For that reason, different reference standards have been used in studies assessing sepsis tests. Some have used blood culture results alone as gold standard [20,28], whereas others have considered all pathogenic findings detected by any method [16,29]. We assessed the performances of blood culture and the two NAATs by classifying all findings judged as clinically relevant as "true positives". The diagnostic sensitivity for multiplex PCR (64%, Table 2) was comparable with the rate in a similar study by Carrara et al. (65%, p = 0.91) [16], as well as the specificity (96% vs. 92%, p = 0.05) [16]. However, Loonen et al. have reported significantly lower sensitivity (37%, p < 0.001) and specificity (77%, p = 0.0001) for Magicplex™ [19]. For the microarray assay, we observed a significantly lower diagnostic sensitivity (62% vs. 96%, p < 0.0001, Table 2) whereas the specificity was equal (99% vs. 99%, p = 1.0) compared with a previous study [20]. The use of different gold standards and study populations may explain observed differences in the performance characteristics [16,19,20,30]. Although the multiplex PCR assay could not detect all clinically relevant microbes, it offered added diagnostic value by the detection of several important pathogens not detected by the conventional culture-based method (Table 3). Consequently, the concordance between the results of blood culture and multiplex PCR assay is moderate (kappa 0.50) ( Table 2), whereas the microarray results showed a high degree of agreement with the results of blood culture (kappa 0.90) ( Table 2). These results could be expected since the microarray assay is performed on aliquots derived from blood culture bottles, whereas the multiplex PCR is performed on microbial DNA extracted from a different whole blood sample than the cultured portion. A remarkable finding is that multiplex PCR of only 1 mL whole blood reached a sensitivity of 64% compared to 72% for culture using 32-40 mL blood, despite more stringent criteria for clinical relevance in the PCR case. However, both methods failed to detect a number of sepsis cases. From a strictly quantitative aspect, the detection level for microorganisms in blood samples at a given time, the diagnostic sensitivity, is depending on the concentration of microorganisms and the volume sampled. The extent of variation in the yield of bacteria in time is not known. In bacteraemia, an average concentration of 0.25 CFU/mL blood was reported by Arpi et al. [31]. Jonsson et al. [32] theoretically calculated the probability of detecting bacteria as a function of the concentration in blood and found empirically by blood culture that 29% of all cases with Escherichia coli and 18% of S. aureus bacteraemia had a most probable concentration of only 0.036 CFU/mL. The gain in yield of microorganisms by increasing the volume of cultured blood in more modern automated culture systems was emphasized by Cockerhill et al. [33] and Lee et al. [34]. To obtain a >99% sensitivity with these systems, four blood cultures, each involving 20 mL is needed [34]. 
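The volume argument above can be illustrated with a simple Poisson capture model: if viable organisms are randomly distributed in blood at concentration c (CFU/mL), the probability that a sampled volume V contains at least one organism is 1 - exp(-cV). The sketch below is a deliberate simplification that ignores antibiotic pre-treatment, uneven distribution, and assay efficiency; the 0.036 CFU/mL figure is taken from the Jonsson et al. estimate quoted above, while the volumes are hypothetical.

import math

def p_detect(c_cfu_per_ml, volume_ml):
    # probability that at least one organism is captured, assuming Poisson statistics
    return 1.0 - math.exp(-c_cfu_per_ml * volume_ml)

for v in (1, 10, 20, 40, 80):   # e.g. 1 mL PCR sample vs. one to four 20 mL blood cultures
    print(v, "mL:", round(p_detect(0.036, v), 3))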
The gain is probably due to the detection of the most minute concentrations, either by enhanced culture systems or overcoming the effects of early antibiotic therapy by resins. NAATs introduced to diagnose sepsis by sampling blood may add new information implying higher analytical sensitivity. By definition, the analytical sensitivity of an assay refers to the smallest value of the analyte that can be resolved with a given degree of confidence and is not synonymous with the diagnostic sensitivity. A single bacterial cell is obviously the smallest theoretical unit that can be applied for blood culture or culture in a wider sense. With NAATs, a single bacterial cell, dead or alive, may contain multiples of a certain target sequence, e.g., in bacteria the conserved 16S rRNA gene sequence is often used as PCR template due to its high copy number in each cell. The downside of using detection methods with high analytical sensitivity, such as PCR, is the increased number of findings of unknown significance as well as contaminants. We then have to cope with unexpected microbial findings that do not necessarily corroborate with the clinical picture, as translocation of bacteria and fungi over the mucosal gut barrier in patients with malignancies or on parenteral nutrition is an increasing diagnostic dilemma. Thus, we are urged to tighten the communication between the laboratory and the clinicians to organise, assemble and critically review the unexpected findings. There is also a qualitative aspect. Certain species seem to correlate better with genuine sepsis than others, i.e., Enterobacteriaceae spp. and pneumococci [35]. The detection of identical microorganisms from multiple sampling sites or occasions is another genuine marker for true bacteraemia. This further implies that a single finding of a microbe with one detection system, e.g., culture, may move the interpretation from probable to proven if the same finding is done with e.g., PCR. However, true bacteraemia also occurs in patients void of an inflammatory response. This shall not be regarded as a benign finding until the following factors are ruled out. First, translocation of bacteria from the gut to the bloodstream may occur in patients with malfunctioning mucosal barriers. Well known examples are E. cloacae (occult malignancy) [35], Streptococcus group G, i.e., Streptococcus dysgalactiae subsp. equisimilis (haematological malignancy and solid tumours) [36], and the former Streptococcus bovis group, i.e., S. gallolyticus subsp. gallolyticus and subsp. pasteurianus (colonic cancer), and S. infantarius (cancer of the bile tree or pancreas) [37]. Experimental work on animals showed an increased risk for bacterial translocation for subjects fed exclusively by the parenteral route [38]. Finally, there are a large number of conditions linked to immunodeficiencies where both mucosal barriers and normal inflammatory response are malfunctioning. A significant microbial finding, e.g., single detection of a species with significant pathogenic profile or multiple detection of the same but low-pathogenic species, should alert the clinician independent of clinical signs of sepsis and whether the detection method was culture or molecular. Combining the methods might therefore provide important clinical information concerning not only the acute infection but also underlying conditions. 
In our laboratory, it usually takes around 6-7 hours after a blood culture bottle has flagged positive before the isolate has grown enough on the plates to enable species identification within minutes by MALDI-TOF MS (Figure 2). At the same time a primary reading of the antibiogram is done and helps to disclose resistant strains of Staphylococcus aureus, Enterobacteriaceae spp. and non-fermenters. However, the time needed for species identification by conventional culture-based methods differs between clinical laboratories depending on routines. For the microarray, the turnaround time from positive bottle to microorganism identification was about four hours including preparation of blood culture bottles and DNA extraction. We estimated that the use of the microarray would save 2-3 hours compared to routine methods, but it is more labour intensive. For both blood culture and microarray, the incubation time of typically 1-3 days must also be considered. However, a time saving was obtained using multiplex PCR, with an estimated turnaround time of only seven hours, since this assay was performed directly on the whole blood sample and required no incubation time. For the multiplex PCR assay, the whole process control was invalid or partially invalid in 45 (12%) of the 383 blood samples initially tested. According to the manufacturer, the whole process control indicates whether the process has functioned optimally or not. No further information is given when the whole process control is flagged as invalid. We speculated that a very high DNA concentration in the samples either inhibited the PCR reaction or the purification process. Therefore, these samples were retested on a ten-fold dilution of the extracted DNA and all but one sample then gave a final result. The high rate of invalid whole process controls thus reduces the clinical utility of multiplex PCR since such samples need to be retested. Our study has several limitations. One of the most important is that turnaround times (TATs) were not precisely measured, just roughly estimated, mainly due to handling procedures. In addition, none of the NAATs were run 24/7; multiplex PCR was performed once daily whereas the microarray was run every second day.
2016-05-04T20:20:58.661Z
2015-04-28T00:00:00.000
{ "year": 2015, "sha1": "4702b1e1913a64649ff392c175768e0a1d17df53", "oa_license": "CCBY", "oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/s12879-015-0938-4", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "c3a82cf68990138f6cbbbfe60b5b80d3f6d2e260", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
271482316
pes2o/s2orc
v3-fos-license
Close Cardiovascular Monitoring during the Early Stages of Treatment for Patients Receiving Immune Checkpoint Inhibitors Background: There is an unmet medical need for the early detection of immune checkpoint inhibitor (ICI)-induced cardiovascular (CV) adverse events due to a lack of adequate biomarkers. This study aimed to provide insights on the incidence of troponin elevations and echocardiographic dynamics during ICI treatment in cancer patients and their role as potential biomarkers for submyocardial damage. In addition, it is the first study to compare hs-TnT and hs-TnI in ICI-treated patients and to evaluate their interchangeability in the context of screening. Results: Among 59 patients, the mean patient age was 68 years, and 76% were men. Overall, 25% of patients received combination therapy. Although 10.6% [95% CI: 5.0–22.5] of the patients developed troponin elevations, none experienced a CV event. No significant changes were found in 3D left ventricular (LV) ejection fraction or in global longitudinal strain (56 ± 6% vs. 56 ± 6%, p = 0.903, and −17.8% [−18.5; −14.2] vs. −17.0% [−18.8; −15.1], p = 0.663) at 3 months. There were also no significant changes in diastolic function and right ventricular function. In addition, there was poor agreement between hs-TnT and hs-TnI. Methods: Here, we present a preliminary analysis of the first 59 patients included in our ongoing prospective clinical trial (NCT05699915) during the first three months of treatment. All patients underwent electrocardiography and echocardiography along with blood sampling at standardized time intervals. This study aimed to investigate the incidence of elevated hs-TnT levels within the first three months of ICI treatment. Elevations were defined as hs-TnT above the upper limit of normal (ULN) if the baseline value was normal, or ≥1.5 times baseline if the baseline value was above the ULN. Conclusions: Hs-TnT elevations occurred in 10.6% of the patients. However, no significant changes were found on 3D echocardiography, nor did any of the patients develop a CV event. There were also no changes found in NT-proBNP. The study is still ongoing, but these preliminary findings do not show a promising role for cardiac troponins nor for echocardiographic dynamics in the prediction of CV events during the early stages of ICI treatment. Introduction Over the past decade, immune checkpoint inhibitors (ICIs) have brought significant advantages to the field of oncology as they have demonstrated a survival benefit in various cancer types [1,2]. As survival increases, patients will become more susceptible to cardiac adverse events [3]. ICIs, like other cancer therapies (e.g., chemotherapy, tyrosine kinase inhibitors, vascular endothelial growth factor inhibitors, radiotherapy), can lead to cancer treatment-related adverse events. Cardiovascular (CV) immune-related adverse events (irAEs), in particular, have become a topic of interest, due to the high rates of morbidity and mortality associated with ICI-induced myocarditis [4]. Although myocarditis was the first reported cardiac irAE, many other cardiac irAEs have also been reported [5,6]. Nevertheless, CV irAEs are still poorly understood and have most likely been under-reported during the last few years, due to the lack of routine cardiac screening (echocardiography, electrocardiography, and troponins), wide variation in clinical presentation, and the scarcity of real-world studies in patients receiving ICI therapy [5,[7][8][9][10][11][12].
Certain guidelines (ASCO and SITC) do not recommend systematic cardiac biomarker testing, whereas others (ESC and NCCN) suggest its consideration [13][14][15][16].However, these guidelines are based on limited evidence and expert opinions, due to a paucity of realworld data [16,17].Furthermore, the classification system used for grading adverse events in cancer patients, i.e., the Common Terminology Criteria for Adverse Events (CTCAE, Version 5), suffers from different limitations regarding cardiac adverse events [18].First of all, there is no distinction between high-sensitivity troponin T (hs-TnT) and I (hs-TnI) [19].Second, the threshold for a 'positive' troponin level remains unclear.Third, N-terminal brain natriuretic peptide (NT-proBNP) is also considered to be a potential biomarker for predicting cardiac adverse events [20,21].However, it is not listed in the current CTCAE.Thus, the present classification systems are insufficient for assessing deviations in cardiac biomarkers, which are often determined in the detection of cancer treatment-related adverse events. Here, we present the results of a preliminary analysis of the first cohort of 59 patients after three months follow-up.The main purpose of this preliminary analysis is to provide additional insights on the incidence of troponin elevations upon routine monitoring in ICI-treated patients and to explore TTE values that could possibly identify submyocardial damage in a uniform, Caucasian cohort.In addition, it is the first study to compare hs-TnT and hs-TnI in ICI-treated patients and evaluate their interchangeability in the context of screening. Study Population The mean patient age was 68 ± 12 years, and 76% were male.The most commonly used ICI was pembrolizumab (37%), followed by combination therapy, i.e., nivolumabipilimumab (25%).Bladder cancer, melanoma, and renal cell cancer were among the most frequent cancer types.Arterial hypertension, hypercholesterolemia, and diabetes mellitus type 2 were present in 46%, 66%, and 17% of the patients, respectively.In total, ten out of 54 patients had coronary artery disease at baseline, while 63% were either former or current smokers (Table 1).-For some parameters, data appeared incomplete or unknown.Therefore, we only described the proportion of patients for whom the data were available, along with the corresponding percentages. Cardiac Biomarkers: Hs-TnT, Hs-TnI, and NT-ProBNP Troponin T levels were measured prior to each ICI cycle during the first three months of treatment.The cumulative incidence of hs-TnT elevations was 10.6% [95% CI: 5.0-22.5](Figure 1).Thirty-six patients had normal (<14 ng/L) hs-TnT levels, while 23 patients had elevated (≥14 ng/L) levels at baseline.Of the patients with elevated baseline levels 9, 19, and 3 had coronary artery disease, chronic kidney disease (ranging from grade 2 to 3B) and heart failure, respectively.In total, 5 out of 36 patients developed elevated troponin levels within the first three months.One patient, who had elevated levels at baseline, also met the primary endpoint.Three patients did not have a history of CV disease, whereas two patients did.The other patient had a history of chronic obstructive pulmonary disease.Death was accounted for as a competing risk factor, as six patients died within the first 3 months of treatment, of which four had no elevations and two did.Three patients were followed-up for less than 90 days.Despite hs-TnT elevations, none of the patients experienced a CV event. 
Calcium score (Table 1) was measured using a computed tomography scan in order to estimate the risk of heart disease based on calcium deposits in the coronary arteries. Troponin I and NT-proBNP levels, on the other hand, were only measured at baseline and three months. As opposed to hs-TnT, none of the patients with normal baseline hs-TnI developed hs-TnI elevations (Table 2). One patient had levels above the ULN at three months; however, this was already present at baseline. A total of 21 out of 49 patients had NT-proBNP levels higher than the ULN at baseline and three months. Two patients, who had normal NT-proBNP levels at baseline, developed elevations during the first three months of treatment. The 3-month blood sample was only available for 49 out of 59 patients (Table 2) (Supplemental Figure S1). Echocardiography Parameters at Baseline and Three Months Only 50 out of 59 patients received their 3-month TTE. Two were treated in the best supportive care setting, for which their 3-month cardiology visit was canceled. One patient refused further CV follow-up shortly after treatment initiation, while six other patients died prior to their 3-month visit due to progressive disease (Supplemental Figure S2). Furthermore, due to limited image quality and/or prior valve replacement, it was not possible to measure each TTE variable for all 50 patients (Table 4). There was no change in 3D left ventricular ejection fraction (LVEF) after three months of ICI treatment (56 ± 6% vs. 56 ± 6%, n = 44, p = 0.903). Similar results were found for LV GLS (−17.8% [−18.5;−14.2] vs.
−17.0%[−18.8;−15.1], n = 37, p = 0.663).RV function was assessed using TAPSE and s-wave; however, no significant differences were found (TAPSE 23 ± 5 mm vs. 22 ± 4 mm, n = 48, p = 0.335; s-wave 12 ± 3 cm/s vs. 13 ± 3 cm/s, n = 47, p = 0.578).After ICI initiation, the LA area did not dilate (17 ± 4 cm 2 vs. 18 ± 5 cm 2 , n = 47, p = 0.264).There were also no significant changes in the subgroup with coronary artery disease at baseline (Supplemental Table S1).None of the patients experienced a CV event during the first three months of treatment. Discussion In the present study, we assessed cardiac biomarkers along with routine 3D echocardiography in ICI-treated patients.In our cohort of 59 patients, we found that: (1) 10.6% developed hs-TnT elevations, in the absence of CV events; (2) almost half of the patients had elevated hs-TnT and NT-proBNP levels at baseline; (3) hs-TnT and hs-TnI showed poor agreement; (4) no significant changes were found on 3D echocardiography nor in NT-proBNP levels at three months. Cardiac biomarkers play a key role in the diagnosis of CV disease in non-cancer patients.While troponins I and T are biomarkers of myocardial injury [35,36], (NT-pro)BNP marks increased wall stress upon elevation [37].Previous research has demonstrated the beneficial role of measuring these markers in other cardiotoxic anti-cancer therapies, such as anthracyclines [38].Petricciuolo et al. [24] and Waissengein et al. [32], on the other hand, showed that baseline troponin levels can predict future MACEs.However, as some guidelines have also recommended serial monitoring, we aimed to evaluate the role of these biomarkers during treatment.In our cohort of 59 patients, approximately half already had troponin T or NT-proBNP levels above the ULN at baseline, while elevated hs-TnI levels were only present in one patient.Similar results were reported by Kurzhals et al. [30].Asymptomatic troponin elevations in cancer patients have previously been linked to disease progression, other (cardiac) comorbidities, and/or the deterioration of the patient's clinical status [39].In addition, most patients received prior oncological treatment, which could also have contributed to elevated baseline levels.During treatment, 10.6% developed hs-TnT elevations.Notably, there were no clinical CV events in any of the patients.This finding is in line with the results found in a sub-analysis of the JAVELIN 101 trial, a phase 3 trial of advanced renal cell cancer patients treated with a combination of a tyrosine kinase inhibitor and an anti-PD-L1 antibody, in which the routine monitoring of cardiac biomarkers in asymptomatic patients was not useful for the early detection of CV irAEs [9].Unlike the patients in the JAVELIN 101 trial, we assembled a uniform cohort of patients who received ICIs in the absence of other systemic anti-cancer regimens. 
While hs-TnT and hs-TnI elevations have a good biochemical concordance in patients with acute coronary syndromes, their role in the prediction and screening of CV irAEs remains unclear [40].Hs-TnI is often preferred above hs-TnT as it has been perceived to be more cardio-specific than hs-TnT.The reason for this discrepancy between troponin I and T still remains unclear.As previously mentioned, the majority of studies measure either hs-TnT or hs-TnI, resulting in limited data on measurements of both troponins within the same cohort.This is the first study to prospectively evaluate the agreement between hs-TnT and hs-TnI elevations in cancer patients receiving ICI therapy.We only found a poor agreement between both troponins at baseline and at three months.Our results are similar to the ones reported in a general population cohort [19].However, we did not perform a sub-analysis based on CV risk factors.Furthermore, since none of the patients developed a CV event, we were unable to compare these levels in the context of cardiotoxicity.However, a recent study did show that in patients who were hospitalized for symptomatic ICI myocarditis (n = 60), hs-TnI levels normalized earlier on than hs-TnT, suggesting that hs-TnT could be of superior clinical utility [41].Nevertheless, further data are required to fully understand the role of hs-TnT and hs-TnI in the context of screening for CV events in ICI-treated patients. Echocardiography is currently the preferred imaging technique for the diagnosis and management of myocardial damage and is recommended in moderate-and high-risk patients prior to ICI-treatment initiation.A TTE prior to ICI treatment in each patient, on the other hand, may be considered (level of evidence C in the European Society of Cardiology guidelines).In addition, routine TTEs during ICI treatment are currently not listed.In our study, all patients received a baseline and a 3-month TTE, including 3D LVEF, GLS (class I recommendation, level of evidence C),and an evaluation of LV diastolic function and RV systolic function [16].GLS has previously demonstrated its efficacy in cardiology for identifying subtle left ventricular myocardial dysfunction in CV diseases [42,43].As a result, research shifted towards GLS, since new strategies were needed for the early detection of cancer treatment-related cardiac adverse events to improve prognosis and patient outcomes; LVEF often lacks the sensitivity to detect early LV systolic impairment.Extensive research on the prediction and detection of CV events upon traditional cytotoxic chemotherapies illustrated that a decrease in GLS can serve as an early predictor of CV events and often precedes declines in LVEF [44][45][46].The exact role of GLS in the routine follow-up of ICI-treated patients still remains a topic of controversy.Our results reflect those of Awadalla et al. [47] who also found no significant differences in GLS in their ICI-treated control group (n = 92, both pre-and on-ICI were only available for 14 patients) who did not develop myocarditis.However, it remains unclear at which specific timepoint on-treatment GLS was evaluated.Pohl et al. [48] also found no significant changes in LV GLS, LVEF, LV volumes, diastolic function, and TAPSE (n = 30) in patients with melanoma after one month of treatment (nivolumab or nivolumab/ipilimumab). Contrarily, Mincu et al. 
[49] did find a significant reduction in GLS after only one month of treatment in a subgroup of 22 melanoma patients who developed non-cardiac irAEs.The discrepancy with our cohort could be attributed either to the fact that Mincu et al. [49] excluded patients with CV disease, which in turn complicates the future representativeness of GLS for a real-world ICI-treated population, or due to the fact that we did not take irAEs, other than cardiac, into account yet.Nishikawa et al. [33] also found a decrease in GLS in five out of the ten patients who developed myocardial injury, of which two had concomitant irAEs.Nevertheless, no statistical analyses were performed.In addition, Tamura et al. [11] found significant changes in deformation imaging in patients who developed troponin elevations (18/129).So far, the small sample size of our study has precluded subgroup analyses based on troponin elevations.Moreover, these findings are from a single center in Japan and cannot be extrapolated to our Caucasian patient cohort, as patient characteristics and tumor types differ [11].Xu et al. [34] also reported the significant deterioration of LV GLS, as well as the RV function (RV GLS and TAPSE) within 220 days of treatment.Hence, RV dysfunction might develop earlier on than LV dysfunction.Notably, more than half of the patients did receive ICIs in combination with other systemic cancer treatments which could have also promoted myocardial injury.Furthermore, these values were investigated over an extended period of time.In our study, a longer follow-up is needed to confirm or challenge these results. Our study has several limitations.It is a preliminary analysis of the first 59 patients included in our ongoing prospective trial.The sample size of the complete trial, i.e., a minimum of 276 patients, was not adjusted for this interim analysis, as study-level conclusions will only be made upon completion. Study Population All patients 18 years or older with a solid tumor eligible for and started with anti-PD-1, anti-PD-L1 and/or anti CTLA-4 treatment in mono-or combination therapy, and who signed informed consent, were included.Patients were excluded if they had received prior treatment with immunotherapy (ICIs, T-cell transfer therapy, cancer treatment vaccines or immune modulators).Patients receiving ICIs in combination with other systemic anticancer agents (chemotherapy, tyrosine kinase inhibitors, etc.) were excluded.The full eligibility criteria of the trial protocol are available online [50].Patients were recruited from four different hospitals: Antwerp University Hospital, AZ Maria Middelares, AZ Sint-Elisabeth Zottegem, and AZ Sint-Vincentius Deinze. The study was approved by the central Ethics Committee of the Antwerp University Hospital and follows the standards of the Declaration of Helsinki, in compliance with all national and local regulatory laws, and is consistent with the Good Clinical Practices guidelines.The protocol was also approved by the local Ethic Committees of AZ Maria Middelares, AZ Sint-Elisabeth Zottegem, and AZ Sint-Vincentius Deinze. 
Medical History and Biochemical Parameters Upon enrollment, the following data were collected from electronic medical records: informed consent, demographics, medical history, CV risk factors, oncological disease and stage, prior cancer history, prior/concomitant medication, cardiac biomarkers, and other relevant parameters. Troponins (hs-TnT or hs-TnI according to the site's local practice) were measured for all participants at baseline and prior to each ICI cycle. An additional blood sample (serum) was taken at 3 months and temporarily stored in the biobank for the future determination of hs-TnT, hs-TnI and NT-proBNP [50]. Three-Dimensional Transthoracic Echocardiography Three-dimensional transthoracic echocardiography (TTE) was performed at baseline and at three months using a Vivid E95 ultrasound system (GE Healthcare, Horten, Norway) by a dedicated cardiologist. Systolic function, diastolic function, and ventricular and atrial geometry were assessed according to the American Society of Echocardiography and the European Association of Cardiovascular Imaging guidelines [51]. Full 3D data sets were acquired to evaluate left ventricular volumes and calculate the 3D ejection fraction. Two-dimensional speckle tracking was used to perform the semi-automated deformation imaging of the left ventricular (LV) global longitudinal strain (GLS) using three apical views (4-, 2-, and 3-chamber). The tricuspid annular plane systolic excursion (TAPSE) and the right ventricular (RV) free wall basal segment peak systolic velocity (s'-wave) using color-coded tissue Doppler imaging were measured. The maximal left atrial (LA) area was measured on an apical 4-chamber view. All echocardiographic images were digitally stored on an EchoPac workstation (GE Healthcare, Horten, Norway). Study Endpoints The primary endpoint for this analysis was the incidence of an elevated hs-TnT above the ULN if the baseline value was normal, or ≥1.5 times baseline if the baseline value was above the ULN, within the first three months of treatment. The maximum measured value was taken into account [50]. The secondary key endpoints that were evaluated at baseline and at three months were as follows [50]: 1. The incidence of hs-TnI and NT-proBNP above the ULN; 2. Association between the evolution of troponin/NT-proBNP and TTE and electrocardiography parameters; 4. Agreement between hs-TnT and hs-TnI levels.
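As a minimal sketch, the primary-endpoint rule stated above can be written as a small classification function; the 14 ng/L ULN is taken from the Results section, while the function name and example values are hypothetical and not part of the trial's analysis code.

ULN_HS_TNT = 14.0  # ng/L, upper limit of normal used in the Results section

def hs_tnt_elevation(baseline, follow_up_values):
    # Return True if any on-treatment value meets the elevation definition.
    peak = max(follow_up_values)          # the maximum measured value is taken into account
    if baseline <= ULN_HS_TNT:
        return peak > ULN_HS_TNT          # normal baseline: any value above the ULN
    return peak >= 1.5 * baseline         # elevated baseline: >= 1.5 x baseline

print(hs_tnt_elevation(8.0, [10.0, 16.0]))    # True
print(hs_tnt_elevation(20.0, [24.0, 28.0]))   # False (below 1.5 x baseline)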
Statistical Analysis Study data were collected and managed using REDCap electronic data capture tools hosted at AZ Maria Middelares [52,53].Statistical analysis was performed using IBM SPSS statistics 28.0 software (IBM Corporation, Armonk, NY, USA) and R Software version 4.1.3.Frequencies and percentages were reported for categorical variables.Continuous variables were described as mean ± standard deviation for those with a normal distribution.For non-normal distributed parameters, the median and interquartile ranges were noted.Where values were missing, percentages were calculated for the available cases, and the denominator was mentioned.The primary endpoint of hs-TnT elevation was studied in a competing risk framework, treating all-cause mortality as a competing event.Cumulative incidence and 95% confidence intervals were calculated.Cohen's kappa (κ) was used to assess the agreement between hs-TnI and hs-TnT elevations, taking the ULN of each test into account.Only samples taken at baseline and at three months were used for this analysis.TTE parameters and NT-proBNP were compared at baseline and at three months using either a paired sample t-test, for normally distributed variables, or a Wilcoxon Signed Rank test for non-normally distributed variables.The level of statistical significance was set at p < 0.05. Conclusions It remains crucial to provide early evidence-based data on the role of cardiac biomarkers and TTE in the systematic follow-up of patients treated with ICIs to the cardiooncological community, as the recent guidelines are still mainly based on expert opinions and clinical trials that have strict inclusion criteria, which does not reflect the real world cancer population.The early detection of subclinical CV dysfunction is needed to minimize the risks, reduce healthcare costs and keep patients on their life-prolonging therapy.Especially since ICIs are increasingly being administered in an early stage disease setting, where patients often have a better prognosis, side effects can significantly impact the patient's quality of life.However, at present, there is no need for a more stringent follow-up than the current guidelines.Baseline measurements, on the other hand, should be performed in order to have an adequate reference value for each patient.Further enrollment in our study and a future pre-specified analysis will continue to elucidate the role of cardiac biomarkers and TTE, in both a larger group of participants and over an extended period of time. In conclusion, this study provides new insights on the incidence of troponin elevations in ICI-treated patients and explores TTE values that could identify submyocardial damage in a uniform, Caucasian cohort.In addition, it is the first study to compare hs-TnT and hs-TnI in ICI-treated patients and to evaluate their interchangeability in the context of screening.Our preliminary analysis found hs-TnT elevations in 10.6% of cancer patients during the first three months of therapy in the absence of CV events.No significant changes were noted on 3D echocardiography nor in NT-proBNP at three months.The study is still ongoing, but these preliminary findings do not show a promising role for cardiac troponins nor for echocardiographic dynamics in the prediction of CV events during the early stages of ICI treatment. Table 2 . Evolution of cardiac troponin I and NT-proBNP during the first three months of treatment. Table 3 . Agreement in elevation between both cardiac troponins at baseline and three months.
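A rough outline of the paired baseline versus three-month comparison described above might look as follows; the values are invented, scipy is assumed to be available, and the crude Shapiro-Wilk gate is only one of several reasonable ways to choose between the paired t-test and the Wilcoxon signed-rank test (the study itself used SPSS and R, not this code).

import numpy as np
from scipy import stats

baseline = np.array([56, 58, 54, 60, 55, 57], dtype=float)   # e.g. 3D LVEF (%), made-up values
month3 = np.array([55, 59, 53, 61, 54, 56], dtype=float)

diff = month3 - baseline
normal = stats.shapiro(diff).pvalue > 0.05        # crude normality check on the paired differences
if normal:
    res = stats.ttest_rel(baseline, month3)       # paired sample t-test
else:
    res = stats.wilcoxon(baseline, month3)        # Wilcoxon signed-rank test
print(normal, res.pvalue)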
2024-07-27T15:21:24.909Z
2024-07-01T00:00:00.000
{ "year": 2024, "sha1": "7e04b03eb26f342e92e4a4fb1dd269a4c6d1b4ca", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "61312f91b71079524c576acf0250bc9093fdfe1b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119474081
pes2o/s2orc
v3-fos-license
Conductance quantization in etched Si/SiGe quantum point contacts We fabricated strongly confined Schottky-gated quantum point contacts by etching Si/SiGe heterostructures and observed intriguing conductance quantization in units of approximately 1e2/h. Non-linear conductance measurements were performed depleting the quantum point contacts at fixed mode-energy separation. We report evidences of the formation of a half 1e2/h plateau, supporting the speculation that adiabatic transmission occurs through 1D modes with complete removal of valley and spin degeneracies. Since the introduction of compositionally graded buffer layers in the strained silicon modulation-doped quantum well layer structure, continuous improvements in the design and optimization of the heterostructure growth parameters have led to the achievement of high mobility also in Si/SiGe two dimensional electron gases (2DEG). 1 The high quality Si/SiGe 2DEG has come out as a promising system for basic research in the field of 2D electron physics, which was previously mainly restricted to GaAs/AlGaAs heterostructures. Significant studies have been reported as the observation of the 2D metal-insulator transition at zero magnetic field [2][3][4] or the direct measurement of spin and valley splitting of Landau levels in silicon. 5 With mobilities corresponding to mean free paths in the order of the μm, the quality of the Si/SiGe material is adequate to investigate quantum transport phenomena in lower dimensional structures as 1D systems and quantum dots. However, the large majority of 1D conductance investigations have been performed on systems based on GaAs heterostructures. Few works have dealt with the 1D ballistic transport in silicon or Si/SiGe heterostructures. The major reason that has slowed down the progress in strained silicon quantum devices has been the difficulty in obtaining high confinement of charge carriers and an effective gating action. It has been suggested that this is due to leakage currents and parallel conducting path, likely to be caused by dopant segregation at the surface, dislocations and defects inherent in the Si/SiGe heterostructure. 6 Recently, strained-Si has gained considerable interest also for possible applications in the field of quantum information processing. 7 Challenged by the proposal of quantum computing architectures in SiGe quantum dots, 8 different research groups have been exploring alternative fabrication approaches to overcome technological and material related hurdles. Significant progress has been achieved as witnessed by the number of papers published recently that reported a satisfactory gating action on Si/SiGe quantum devices. [9][10][11][12][13] We previously demonstrated that significant quantum confinement can be achieved by introducing geometrical bends on etched Si/SiGe nanowires and reported the observation of single electron 3 charging effect above 4 K in Si/SiGe single electron transistors 14 and electron magnetic focusing in Si/SiGe quantum cavities. 15 In this paper we investigate the ballistic 1D electron transport in highly confined Si/SiGe heterostructure quantum point contacts (QPC). Since the discovery of conductance quantization in GaAs/AlGaAs systems, 16 QPCs were mostly investigated for fundamental studies. Recently, QPCs are attracting more and more interest also for their functional use as charge sensors capacitively coupled to quantum dots (QD). 
Notably, a QPC was successfully used as the electrical read-out channel of an individual electron spin in a QD 17 or as the local electrometer in a recent experiment that demonstrated coherent control of coupled electron spins in double QDs. 18 The QPCs considered in this paper were defined in Si/SiGe heterostructures by etching away the side material and were effectively controlled by a Schottky gate. We report here and discuss the presence at zero magnetic field of a conductance plateau at ~ e 2 /h. In GaAs systems, the removal of spin degeneracy and the resulting splitting of the conductance plateaus is usually observed by adding an in-plane magnetic field, which causes the Zeeman splitting of the 1D energy subbands. Surprisingly, a conductance quantization in units of approximately e 2 /h, which appears to lack spin degeneracy even at zero magnetic field, was reported recently in gated carbon nanotubes. 24 Closely related to these findings could be the additional conductance plateau at 0.5-0.7 G 0 , usually referred to as the "0.7 structure". This is a spin-related phenomenon observed at zero magnetic field in clean 1D GaAs systems, originally evidenced by Thomas et al., 25 that has attracted a great deal of attention recently. [26][27][28][29][30][31] Its presence is assumed to signal the occurrence of non-negligible correlation effects, although it does not seem that a general consensus on its origin has been reached as yet. [32][33][34][35][36][37] The QPC devices were fabricated on samples containing a high mobility Si/SiGe 2DEG. The 2DEGs are located 70 nm below the surface of Si/SiGe modulation-doped heterostructures, grown by chemical vapour deposition. Details of the layer sequence thickness as well as the structural and morphological properties of the 2DEGs are described elsewhere. 38 For the samples considered in this work, a standard analysis of the low-field magnetoresistance at T=300 mK of mesa-etched Hall bars gives an estimate of the 2DEG carrier density n 2D = 9.8x10 11 cm -2 , electronic mobility μ = 4.1x10 4 cm 2 /Vs and a mean free path of ~ 500 nm. The QPCs were obtained by carving the 2DEG in a double-bend like geometry by electron-beam lithography (EBL) and reactive ion etching with fluorinated gases. The heterostructures were etched to a depth of 100 nm from the surface. In panels (a), (b) and (c) of Fig. 1 we report, respectively, a schematic of the QPC geometry prior to gate deposition, a side-view schematic of the gated QPC and, finally, a scanning electron micrograph of a complete device. The QPC is formed by the narrow conducting channel (width w) which originates at the junction between two sections (labelled S and D in Fig. 1(a)) protruding from the outer mesa structure. The S and D sections, 400-nm-wide and 200-nm-long, act as source and drain leads for the QPC. Since the overall dimensions of the constriction are smaller than the mean free path, the electronic transport through the narrow channel is expected to be ballistic. With this approach, on the same 2DEG sample, nanostructures with constrictions of decreasing geometrical width w were obtained by reducing the extent of overlap between the S and D sections. As the constrictions become narrow and their effective width comparable with the Fermi wavelength, which in our 2DEG is estimated to be λ F ~ 50 nm, they act as quantum point contacts connecting the source and drain.
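As a rough consistency check of the quoted 2DEG parameters, the sketch below estimates the Fermi wavevector, Fermi wavelength, and elastic mean free path, assuming a combined spin and valley degeneracy of 4 for a (100) Si quantum well. That degeneracy factor is an assumption of the sketch, not a statement from the paper; with it the mean free path comes out near the quoted ~500 nm, while the Fermi wavelength lands somewhat below the ~50 nm estimate, the exact numbers depending on the degeneracy used.

import math

e = 1.602e-19        # C
hbar = 1.055e-34     # J s
n2d = 9.8e11 * 1e4   # carrier density in m^-2
mu = 4.1e4 * 1e-4    # mobility in m^2/Vs
g = 4                # assumed spin x valley degeneracy

kF = math.sqrt(4 * math.pi * n2d / g)   # Fermi wavevector
lambdaF = 2 * math.pi / kF              # Fermi wavelength
mfp = hbar * kF * mu / e                # elastic mean free path

print(f"k_F = {kF:.2e} 1/m, lambda_F = {lambdaF*1e9:.0f} nm, l = {mfp*1e9:.0f} nm")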
Due to sidewall depletion caused by the surface states generated by the fabrication process, the constrictions have an effective width much smaller than the lithographic one 39 so that the above condition can be easily met even when the lithographic dimension are larger than F λ . In this paper we investigate devices with constrictions that measure a lithographic width w~160 nm (as the one shown in Fig. 1(c)). We found this width small enough for the constriction to show a clear QPC behaviour in the electronic transport characteristics. Recent simulations of etched strained-silicon quantum wires with metal gates predicted a large 1D subband separation and capability of the gates in controlling the wire conductance. 40 Challenged by these promising results we adopted for the etched QPC a gating geometry similar to that considered in Ref. 40. A 5/30-nm-thick titanium/gold gate was patterned by EBL and lift-off in the shape of a 100nm-wide finger gate crossing the etched double-bend. The gate was carefully aligned to within 20 nm with the central constriction. The metal folds along the etched semiconductor surface actually forming a triple Schottky gate for the conducting channel (see Fig. 1(b)). Etched constrictions have strong lateral confining potentials. Also, the surface states completely screen the electric field imposed by the gate on the lateral walls. 40 As a consequence, the gate varies the carrier concentration without affecting the width of the quantum point contact. Therefore, in our devices we can follow the effect of depleting the 1D channel at fixed mode-energy separation. The leakage from the Schottky gate to the 2DEG was tested on several devices fabricated on different 2DEG chips. At T=450 mK, as the gate voltage was swept from -2 V to +1 V the measured leakage current was smaller than 0.2 pA. This large available working range enables a full control of the conduction through the QPC down to pinch-off. As suggested in Ref 10, the low-leakage level achieved could be due to the small size of the gates, whose active area is less than 100 nm x 160 nm for the devices considered in this work. The deep etch of 100 nm that defines the structures might also play a significant role in reducing the leakage current. Electronic transport characterisation of the QPC devices was performed at T=450 mK in a custom designed 3 He refrigerator 41 using standard ac low frequency lock-in techniques. The source-drain excitation (frequency of 17 Hz) was kept as low as 20 μV root mean square to prevent electron heating. The linear-response conductance (i.e. G=dI/dV SD around V SD~0 ) versus the gate voltage V G is reported in Fig. 2. This is a typical curve we measure in QPC devices with similar geometry. The curve was corrected for a series resistance R S = 19.4 kΩ, originating from both the 2DEG leads and the source and drain contacts. The curve exhibit plateau-like structures close to multiple integers of 0.5 G 0 . It is worthy of notice that in no case we would be able to subtract a R S such as to recover plateaus spaced by 1 G 0 or 2 G 0 . The curve was highly reproducible upon cycling V G from positive to negative voltages or the temperature from 450 mK to 4.2 K or to room temperature. While sweeping the gate voltage we did not observe any hysteresis nor switching event. This is a significant improvement with respect to previous reports on gated Si/SiGe nanowires. 6,14 Significant information on the ~0.5 G 0 (i.e. 
~e 2 /h) plateau has been obtained from the non-linear transport measurements, i.e. the curves of the differential conductance G as a function of finite dc source-drain bias V SD for different gate voltages V G . In Fig. 3(a) we report a series of G-V SD curves, measured in sequence, progressively decreasing the gate voltage from -0.4 to -0.2 V in steps of 2.5 mV. This gate bias range covers the region where the linear conductance reported in Fig. 2 develops the ~0.5 G 0 feature. As a preliminary analysis, we point out that for |V SD | >10 mV the conductance value of all the G-V SD curves, irrespective of the gate voltage bias, starts to decrease, tending toward zero, a clear indication of current saturation. The likely origin of this saturation will be discussed later on. In the |V SD | <10 mV bias range we observe clear asymmetries in the curves, even around zero V SD , that we address in terms of a self-gating effect. 27 We correct our data for this electrostatic effect as in Ref. 27, considering only the symmetric combination G*(V SD ) = ½[G(+V SD )+G(-V SD )] of the G(V SD ) traces. Adjacent point averaging was performed to highlight the trend of the data. We report the corrected G*(V SD ) curves in Fig. 3(b). The curves in Fig. 3(b) show an overall evolution very similar to that found in both GaAs quantum point contacts 42 for the 2e 2 /h quantization and carbon nanotubes for the e 2 /h quantization. 24 This evolution can be accounted for by using the single mode contribution of the Landauer theory for each of the plateaus seen in Fig. 2. Finally, we comment on the drastic decrease of conductance for V SD > ~10 mV. For sufficiently large source-drain bias the bottom of the electron band of the high-energy contact will become higher than the mode onset and, eventually, the electrochemical potential of the low-energy contact will drop below the bottom of the electron band of the high-energy contact. In these conditions the current saturates at a value independent of bias voltage and the differential conductance drops to zero. Another possible effect causing a current saturation is the electron drift-velocity saturation due to carrier heating at large bias and the onset of non-ballistic transport. 22 In Fig. 3(c) we report the curves of the conductance G versus V G as measured, in a successive cooldown, at different V SD dc bias that confirm the evolution we have described. The curves at V SD = 0 mV and 8 mV provide clear evidence of the presence in the linear conductance of 0.5 G 0 and 1 G 0 steps evolving at large V SD to 0.25 G 0 and 0.75 G 0 structures, respectively. Arrows are a guide for the eyes. In the curve at V SD = +24 mV no significant structures appear due to current saturation. We estimate the energy spacing ΔE 1,0 between the first two 1D subbands by analyzing the non-linear conductance curves at fixed gate voltage with the Zagoskin method. 44 In a quantum point contact, when μ lies between the edges of two successive subbands, the subband energy spacing is ΔE = e(V 1 + V 2 )/2. Here V 1 and V 2 are the source-drain voltages at which the first two extrema occur in the derivative dG/dV SD , i.e. the position of the inflections of the G(V SD ) curves at fixed V G . Depending on the position of μ below or above the midway between the edges of successive 1D subbands, V 1 is a minimum and V 2 is a maximum or vice versa. In Fig.
In Fig. 4 we report two representative dG/dV SD curves obtained by numerical differentiation of the curves at V G = -0.3375 V and V G = -0.2925 V of Fig. 3(b). As depicted schematically in the insets, at these gate voltages the electrochemical potential μ lies below and above, respectively, the midpoint between the first two 1D subbands. Consistent with the suggested relative position of the chemical potential and the band edges, we find that V 1 is a minimum and V 2 a maximum for the curve at V G = -0.3375 V; the opposite occurs for the curve at V G = -0.2925 V. The subband spacing, calculated according to ΔE = (e/2)(V 1 + V 2 ), is ΔE 1,0 ~ 4.4 meV for both curves. This analysis was repeated for other curves, at different gate biases, in which the positions of well-resolved extrema could be marked unambiguously. We found that the subband spacing does not vary significantly with the gate voltage. This confirms that, in our quantum point contact, changes in the gate voltage result in a variation of the carrier concentration without significantly altering its width. Although the overall behaviour of the linear and non-linear conductance follows the single-mode picture discussed above, more intriguing is the presence of the 0.5 G 0 plateau. While the features are not as well resolved as in the GaAs case due to the much shorter mean free path of electrons in the SiGe heterostructures, we point out the similarity between the present data and those of the "0.7 structure". The "0.7 structure" was originally related to correlation effects involving the electron spin. 25 Since then a great deal of effort has been dedicated to understanding its microscopic origin. One model attributes the effect to a spontaneous spin polarization in the QPC due to exchange interaction. 33,34 Another model 35 claims the formation of a dynamical local moment in the QPC resulting in a spin splitting due to the local Coulomb interaction energy U. This model would account for the observation of many features of Kondo physics in QPCs. 29 Other models suggest electron-phonon coupling 36 or Wigner crystallization 32 as the source of the effect. Our observation of an analogous phenomenon in a completely different system such as the Si/SiGe QPC is relevant to the problem, since any candidate theoretical model must also be valid for the material parameters of the Si 2DEG. Previous investigations of the conductance of Si/SiGe QPCs did not find the half-G 0 quantization. We speculate that a strong confining potential is required for the degeneracy removal and that the techniques adopted in Refs. 20-22 did not provide it. A strong confining potential is present in Ref. 23, and there the conductance curves do show a structure at 0.5-0.7 G 0 , although the authors do not mention it. We are currently investigating the relationship between the strength and shape of the confining potential and the presence of the half-G 0 quantization. This work was partially supported by the FIRB project RBNE01FSWY "Nanoelettronica" and the FISR project "Nanotecnologie per dispositivi di memoria ad altissima densità". G. S. thanks A. R. Hamilton for stimulating discussions.
2019-04-14T02:10:19.857Z
2005-12-16T00:00:00.000
{ "year": 2005, "sha1": "4588f53de0f17d8b9e701ae54aa24077601a5789", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0512412", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "4588f53de0f17d8b9e701ae54aa24077601a5789", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
113434449
pes2o/s2orc
v3-fos-license
Computational Materials Engineering: Recent Applications of VASP in the MedeA® Software Environment Electronic structure calculations have become a powerful foundation for computational materials engineering. Four major factors have enabled this unprecedented evolution, namely (i) the development of density functional theory (DFT), (ii) the creation of highly efficient computer programs to solve the Kohn-Sham equations, (iii) the integration of these programs into productivity-oriented computational environments, and (iv) the phenomenal increase of computing power. In this context, we describe recent applications of the Vienna Ab-initio Simulation Package (VASP) within the MedeA® computational environment, which provides interoperability with a comprehensive range of modeling and simulation tools. The focus is on technological applications including microelectronic materials, Li-ion batteries, high-performance ceramics, silicon carbide, and Zr alloys for nuclear power generation. A discussion of current trends including high-throughput calculations concludes this article. Introduction More than ever before, our society depends on a perplexing multitude of materials to meet needs such as housing, heating/cooling, clean water, production of food, energy, infrastructure, communication, transportation, and health care, as well as allowing recreational and artistic activities. It is fair to say that until now the materials necessary for all these purposes have been developed by experimental methods. Although the fundamental physical and chemical laws that govern the properties of materials are known, the goal of designing materials with specific properties by a purely theoretical/computational approach remains elusive. The reasons for this are: (i) practical materials such as stainless steel, ceramic thermal barrier coatings of turbine blades, or carbon-fiber reinforced composites contain dozens of chemical elements; (ii) their functional properties depend on the microstructure and complex interfaces as well as on the properties of individual phases; (iii) the length- and timescales underlying engineering properties span more than fifteen orders of magnitude; and (iv) all of the above properties are a function of the processing history such as heat treatment of alloys, sintering of ceramics, or curing of composite materials. For these reasons, materials development will depend on experimentation for a long time to come, yet as we show here, computational methods will make increasingly important contributions to materials engineering going forward. The experimental development of novel materials remains an expensive and time-consuming activity and will become even more so as the requirements for high performance, low cost, and environmental compatibility become more stringent. Hence, any technology which can focus and accelerate the improvement of existing and the development of new materials is highly valuable. This is indeed the role and the challenge for computational materials engineering, as has been expressed in the concept of "Integrated Computational Materials Engineering" (ICME). 1) Simulations on the macroscopic scale are a well-established engineering practice, especially in structural analysis, in computational fluid dynamics (CFD), and in electrical circuit design. In these highly successful macroscopic simulations, materials properties data have traditionally been taken from experiment. 
As the predictive power of atomistic simulations increases, computed materials property data can become the input for these macroscopic simulations. This opens the exciting and unprecedented opportunity to close the loop from system design to materials design as illustrated in Fig. 1. However, as explained above, it is unrealistic to expect that one can completely replace experiment by simulations. Rather, one needs to create the synergy between atomistic simulations and experiments in the positive spirit of yin-yang or taegeuk. In the following sections, recent applications will be presented, which demonstrate current capabilities of predicting materials properties from ab initio computations as part of materials engineering. These applications include published as well as unpublished work. In all cases the Vienna Ab-initio Simulation Package (VASP) [2][3][4] within the MedeA ® software environment with its various tools 5) has been used. The TiN/HfO 2 interface in high-k dielectrics In complementary metal oxide semiconductor (CMOS) technology, steadily diminishing device sizes have mandated the introduction of high-k dielectrics such as hafnium dioxide, which are replacing pure silicon dioxide dielectric layers. As a consequence, to maintain a low threshold voltage for switching, the material for the gate metal has had to be changed. Titanium nitride has emerged as a suitable choice in this role. A key requirement for energy efficient switching of CMOS devices is the alignment of the Fermi level (i.e. the energy of the highest occupied states) of the metallic gate with the band edges of the semiconducting channel of the device, as illustrated in Fig. 2. Empirically it was found that annealing of the as-deposited TiN in an oxygen atmosphere increased the work function, as desired. Secondary ion mass spectroscopy (SIMS) measurements showed that oxygen atoms had penetrated into the layer of TiN. It was thus concluded that the replacement of N by O in TiN causes the increase of the work function. Using MedeA ® -VASP, detailed electronic structure calculations of models of the HfO 2 /TiN interface revealed, however, that replacement of N by O inside the TiN layer did not change the work function (cf. Fig. 2(a) and (c)). One could have concluded that computed results and experiment are in contradiction. Actually, this is not the case. While annealing of the stack in an oxygen-containing atmosphere leads to ingress of oxygen into the TiN layer, the O atoms inside the TiN layer are not the cause of the work function increase. Rather, calculations revealed that, driven by the ingress of O atoms, the diffusion of N atoms, and the filling of O vacancies in the HfO 2 layer, the replacement of O atoms by N atoms exactly at the interface between HfO 2 and TiN caused a dramatic increase of the work function, 6) thus reconciling computations and experiment. The origin of this behavior is the different chemical interaction of oxygen vs. nitrogen with the transition metal atoms Hf and Ti. Changes in the distribution of electronic charges between the HfO 2 and TiN layers at the interface determine the effective work function. This detailed understanding and control of the chemistry at the interface is thus critical to fabrication processes of energy efficient transistors. 
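As a rough illustration of how an effective work function is commonly extracted from a slab calculation, namely as the difference between the vacuum level of the planar-averaged electrostatic potential and the Fermi energy, the following Python sketch may be helpful. It is a generic post-processing outline, not the MedeA®/VASP implementation used in the study; the grid file, Fermi energy, and the vacuum-detection heuristic are assumptions made only for illustration.

import numpy as np

def planar_average(potential_3d, axis=2):
    """Average a 3D electrostatic potential grid over the two in-plane directions,
    leaving the profile along the surface normal (default: z)."""
    axes = tuple(i for i in range(3) if i != axis)
    return potential_3d.mean(axis=axes)

def work_function(potential_3d, e_fermi, vacuum_fraction=0.1, axis=2):
    """Work function = vacuum level minus Fermi energy (both in eV).
    The vacuum level is taken as the plateau of the planar-averaged potential,
    estimated here from the top `vacuum_fraction` of the cell along `axis`
    (this assumes the slab is centred and the vacuum sits at the cell edge)."""
    profile = planar_average(potential_3d, axis=axis)
    n_vac = max(1, int(vacuum_fraction * profile.size))
    v_vacuum = profile[-n_vac:].mean()
    return v_vacuum - e_fermi

# Hypothetical usage: `pot` is the local potential on an (nx, ny, nz) grid in eV,
# e.g. parsed from a VASP LOCPOT file, and `ef` is the Fermi energy in eV.
# pot = np.load("locpot_grid.npy"); ef = 2.31
# print(f"Effective work function: {work_function(pot, ef):.2f} eV")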
2.2 Strength of metal/ceramic interfaces and thermal expansion: Al and Si 3 N 4 Silicon nitride is a fascinating ceramic material with a wide range of applications including engine parts and ignition systems in cars, ball bearings, for example in wind turbines, in rocket thrusters due to its resistance to thermal shocks, but also in medical orthopedic devices and in the semiconductor industry as insulator and diffusion barrier. As in many practical applications, interfaces play a critical role. An illustrative example in this context is the strength of an interface between aluminum and silicon nitride. Figure 3 shows two models of Al/Si 3 N 4 interfaces, namely one with Si-terminated silicon nitride and the other with Ntermination. The models were constructed using the interface building tools of MedeA ® combined with energy minimization performed with VASP, resulting in the structures shown in Fig. 3. As a measure of the strength of the interface, the work of separation is computed from the energy difference between the interface model and the energies of the corresponding free and relaxed surfaces. The difference between the two models is striking. While the Si-terminated interface results in a rather weak bonding between the two materials, the presence of N atoms between the surface Si atoms and the Al atoms of the metallic layer leads to a very strong cohesion between silicon nitride and aluminum. The ceramic and the metallic phases have significantly different coefficients of thermal expansion, as shown in Fig. 4. If the interface is formed at ambient temperature, heating of the system will create a high compressive stress within the metallic phase, which is likely to lead to misfit dislocations and partial decohesion. Modeling such a complex non-equilibrium process requires a multi-scale approach, as will be discussed in the last sections of this review. Here, the coefficient of thermal expansion is computed on the ab initio level using the so-called quasi-harmonic approximation. From phonon calculations for a range of different lattice parameters one obtains the vibrational entropy. Combined with the electronic energy this in turn gives expressions for the Gibbs free energy as a function of temperature. As the temperature is increased, the minimum of the Gibbs free energy shifts to larger lattice parameters. Analysis of this temperature dependence gives the coefficient of thermal expansion as used, for example, in the case of Mg 2 SiO 4 . 7) Developed by K. Parlinski,8) the integration and automation of this capability within MedeA ® greatly facilitates this task. Design of low-strain cathode materials for Liion batteries The volume change of active materials that accompanies charge and discharge of Li-ion batteries is a major source of degradation which limits the overall lifetime of such a battery. While a zero-strain anode material exists, namely Li 4 Ti 5 O 12 , there have not been any suitable zero-or lowstrain materials for cathodes. By using systematic DFT calculations, three low-strain materials have been found within the class of LiMn x Cr y Mg z O 4 . The most promising materials have been synthesized and characterized by X-ray diffraction and electrochemical techniques. The results are consistent with the ab initio predictions. 9) This work focused on oxides of the composition LiM 1 x M 2 y M 3 z O 4 crystallizing in the spinel structure. Lowstrain compounds were identified by performing systematic calculations exploiting Vegard's law as shown in Fig. 
5 for selected structures. All calculations were carried out using VASP in MedeA ® with the PBEsol exchange-correlation functional. 10,11) The DFT calculations also provide detailed insight into the mechanisms resulting in a near zero-strain behavior. A synergistic compensation mechanism underlies the desired property as illustrated in Fig. 6. With increasing Li concentration, the Mg-O bond lengths tend to decrease, the Mn-O bond lengths remain similar, while the Cr-O bonds tend to increase. As a result, the overall volume of the crystal structure changes little upon charging and discharging with Li ions. This behavior is in close analogy to observation for the zero-strain mechanism for Li 4 Ti 5 O 12 , where local distortions in the crystal structure likewise allow this material to keep the volume nearly unchanged upon lithium insertion. The structure and properties of boron carbide Boron carbide, B 4 C, is one of the hardest materials known, close to cubic boron nitride and diamond. Due to these mechanical properties, it is used in applications such as armor and bulletproof vests. In nuclear power reactors boron carbide is used to control the neutron flux due to the high neutron absorption of 10 B and the radiation hardness and chemical stability of B 4 C. According to the boron-carbon phase diagram, 12) a boron carbide phase exists between approximately 8 at% and 20 at% C with a melting point reaching 2450 o C. The crystal structure of boron carbides consists of icosahedra connected with short linear rods of three atoms. However, the distribution of the C atoms in this structure is far from obvious. Clark and Hoard 13) give a structure for B 4 C where the icosahedra consist only of B atoms connected with C-C-C linear rods. For the boron-rich compound B 13 C 2 the structure given by Larson 14) shows B 12 icosahedra connected with linear rods of the composition C-B-C. The universal cluster expansion (UNCLE) method 15) based on ab initio calculations with VASP as implemented in MedeA ® offers a unique methodology for the investigation of the distribution of C atoms in boron carbides as a function of C concentration. The result for the B-C system is shown in Fig. 7. At a concentration of 20 at% (x B = 0.8) the most stable structure consists of icosahedra of the composition B 11 C with the connecting rods of C-B-C arrangements. This is consistent with the earlier theoretical work by Mauri et al. 16) At the higher boron concentration of the compound B 13 C 2 , all sites of the icosahedra are occupied by boron atoms while the rods maintain the C-B-C motifs. The elastic coefficients, which are readily computed with MedeA ® using a fully automated, symmetry general approach 17) reveal a stiffening of the material with increasing C concentration between B 9 C and B 4 C as shown in the insert in Fig. 7. Furthermore, the computed phonon dispersions for B 4 C reveal an isolated high-frequency mode, which originates from bond-stretching vibrations of B atoms in the C-B-C linear rods as illustrated in Fig. 8 consistent with the analysis of Lazzari et al. 18) 2 The optical properties in the spectral range of visible and ultraviolet light are determined by electronic transitions from occupied to unoccupied states. Quantitative predictions of these states require a level of theory beyond standard density functional calculations. So-called hybrid functionals such as HSE06 19,20) offer a practical approach to compute excitation energies. 
Using this approach in VASP and the optical analysis tools in MedeA ® , the computed refractive index of Y 2 O 3 (yttria) is in good agreement with experimental data, 21) as illustrated in Fig. 9. Important optical properties can be computed with good accuracy for a range of materials including transition metal oxides such as yttria as illustrated here. Hydride formation in Zircaloy The formation of zirconium hydrides is of high concern in the operation of nuclear reactors. Corrosion of zirconium alloys used in the core of nuclear power plants produces hydrogen, and a fraction of the hydrogen diffuses into the zirconium material. When the hydrogen concentration exceeds the terminal solid solubility, the excess hydrogen starts to precipitate as hydrides. This process may lead to embrittlement with crack formation due to lower ductility of the hydrides than that of the Zr matrix. VASP as integrated in the MedeA ® computational environment has been employed to study structural, thermodynamic, and elastic properties of the Zr-H system. 22) The computational accuracy of this method is needed to quantify and determine the behavior of hydrogen in Zr. This becomes clear considering the small energy difference between the octahedral and tetrahedral sites for hydrogen in the Zr lattice. The electronic total energy difference between the sites is computed to be only 5.9 kJ/mol, with the tetrahedral site being energetically favorable. Vibrational effects can readily be added using MedeA ® -Phonon. Inclusion of vibrational effects change the energy difference between the sites to 0.5 kJ/mol at T = 0 K and to 8.6 kJ/mol at 600 K. Using a thermodynamic model of the solution of H 2 into the octahedral and tetrahedral sites, solubility isotherms and terminal solubility of H in Zr can be computed in very good agreement with experimental data. For example, the simulations predict that the γ-hydride phase forms at H-Zr ratios between 1.1 at high temperatures and 1.4 at low temperatures. The reported existence range of the γ-phase is for H-Zr ratios between 1.1 and 1.5. The calculations also show that the hydrogen solubility increases under tensile strain and decreases under compressive strain. This leads to hydrogen migration and accumulation in expanded regions of the Zr lattice resulting in hydride precipitation. Examples of regions under tensile stress can be at a Zr/ZrO 2 interface, at the front of a crack tip, or even in regions around Zr self-interstitial atoms. Furthermore, hydrogen is attracted to Zr vacancies and voids. The simulations show that up to six hydrogen atoms are strongly bound inside a single Zr vacancy. Clustering of vacancies into dislocation loops can lead to regions with very high local hydrogen concentration. The simulations show that hydrogen inside the vacancy loops can delay or in some cases even prevent collapse of the loops. Each of these situations lead to regions highly supersaturated with hydrogen and could be potential nucleation sites of zirconium hydrides. A systematic study of the zirconium hydrides has been performed by successively filling tetrahedral sites in the zirconium lattice by hydrogen, probing a large number of configurations for H-Zr ratios between zero (pure α-Zr) up to complete filling of the sites at a ratio of 2.0 (ε-ZrH 2 ). Computation of the elastic properties of the hydrides is conveniently carried out using the automated approach. 17) in MedeA ® . 
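Whether a computed set of elastic constants corresponds to a mechanically stable structure can be judged from the Born criteria; for a cubic phase such as the δ-hydride these reduce to three simple inequalities. The short Python sketch below illustrates this check together with the Voigt-averaged moduli. The numerical values are hypothetical placeholders, not the constants computed in Ref. 22, and the routine is a generic illustration rather than the automated approach of Ref. 17.

def born_stable_cubic(c11, c12, c44):
    """Born mechanical-stability criteria for a cubic crystal (constants in GPa):
    C11 - C12 > 0, C11 + 2*C12 > 0, and C44 > 0."""
    return (c11 - c12 > 0) and (c11 + 2 * c12 > 0) and (c44 > 0)

def cubic_moduli(c11, c12, c44):
    """Bulk modulus (exact for cubic symmetry) and Voigt-average shear modulus, in GPa."""
    bulk = (c11 + 2 * c12) / 3.0
    shear_voigt = (c11 - c12 + 3 * c44) / 5.0
    return bulk, shear_voigt

# Hypothetical elastic constants (GPa), for illustration only:
candidates = {
    "hydride A (fully occupied H sublattice)": (105.0, 130.0, 35.0),   # C11 < C12 -> unstable
    "hydride B (H vacancies introduced)": (155.0, 110.0, 40.0),
}
for label, (c11, c12, c44) in candidates.items():
    stable = born_stable_cubic(c11, c12, c44)
    bulk, shear = cubic_moduli(c11, c12, c44)
    print(f"{label}: Born-stable={stable}, B={bulk:.0f} GPa, G_V={shear:.0f} GPa")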
Some of the hydride structures display elastic instability, such as cubic δ-hydride with full hydrogen occu- pancy which can be stabilized by introducing vacancies on the hydrogen sites or by a tetragonal distortion into ε-ZrH 2 . The elastic moduli of the most stable hydrides at each stoichiometry are shown in Fig. 10. The bulk modulus increases almost linearly with hydrogen concentration from pure α-Zr to ZrH 2 . The shear moduli of the hydrides are similar to that of α-Zr while Young's moduli of the hydrides typically are lower than for α-Zr. The clear exception is ZrH 1.25 which has high elastic moduli. This is identified as a γ-hydride of P4 2 /mmc symmetry. H-induced formation of nanotunnels on SiC surfaces The final example is the formation of nanotunnels on surfaces of silicon carbide. 23) The discovery of this type of surface features is the result of a close interaction between precise experimentation and systematic density functional calculations as detailed below. Silicon carbide is a fascinating ceramic material with a range of practical applications. It is a wide band gap semiconductor of interest for power electronics and high-frequency applications. The material remains operational at high temperatures and it is resistant to radiation. In nuclear energy technology, silicon carbide is used as fuel cladding, for example in the TRISO fuel. Furthermore, silicon carbide is a bio-compatible ceramic of interest to medical applications. While the ingredients, silicon and carbon, are readily available, the synthesis of high-purity SiC wafers requires highly sophisticated approaches 24) and the hardness of SiC makes processing difficult. The richness of phenomena on silicon carbide surfaces has stimulated a number of detailed investigations of this material. For example, C-rich surfaces exhibit formation of carbon chains; 25) hydrogen, which usually passivates surfaces, can induce metallization; 26) Si atomic lines can form on Sirich surfaces; 27) and complex surface reconstructions have been characterized. 28) One of the origins of this richness of silicon carbide surfaces is the fact that in crystalline silicon carbide, both Si and C atoms are tetrahedrally coordinated as in crystalline silicon and in diamond. The second nearest neighbors are arranged either in a cubic arrangement or in a hexagonal lattice. The energy difference between these two arrangements is small and hence SiC exists in a multitude of polymorphs with cubic and hexagonal SiC forming the end members. The Si-C bond length of 1.89 Å in SiC is close to the geometric mean of the Si-Si bond length of 2.35 Å and that of the C-C bond length of 1.55 Å in diamond. In other words, the lattice of bulk SiC is compressively strained in comparison with the lattice of pure silicon while, conversely, it is in a state of tensile strain in comparison with the diamond lattice. These opposing strains are equilibrated in bulk silicon carbide, but the situation at surfaces is different, especially if these surfaces are either rich in carbon or in silicon. On a Si-rich surface of cubic SiC the release of the surface stress gives rise to a remarkable behavior, when this surface is exposed to atomic hydrogen, namely the formation of nanotunnels. When a clean SiC(001) Si-rich 3 × 2 reconstructed surface is exposed to atomic hydrogen, the most reactive Si atoms in the top layer react strongly exothermally by forming the structure 2H shown in Fig. 11. 
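As a quick numerical aside, the geometric-mean relation for the bond lengths quoted above can be checked directly with a few lines of Python (the bond lengths are the values given in the text):

import math
d_si_si = 2.35   # Si-Si bond length in crystalline silicon, in Angstrom
d_c_c = 1.55     # C-C bond length in diamond, in Angstrom
print(math.sqrt(d_si_si * d_c_c))   # ~1.91 Angstrom, close to the 1.89 Angstrom Si-C bond in SiC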
A plausible binding site for the subsequent H atoms seems to be the bridge position between the Si atoms in the third layer. In fact, this has been assumed in earlier investigations, but it leads to inconsistencies with experimental vibrational data. In particular, the vibrational modes of H-atoms in the bridge position are far too low to explain the presence of a high-frequency Si-H stretching mode which would be expected for Si atoms bonded to C atoms in the form C-Si-H. Using systematic ab initio calculations with VASP in MedeA ® , another reaction scheme has been identified as illustrated in Fig. 11 Rather than binding in a bridge position in the third layer of Si atoms, H atoms can bind exothermally to Si atoms in the second layer forming a structure denoted 6H. This reaction is remarkable, because it breaks Si-Si bonds between the second and third layer leading to an outward movement or "puckering" of the Si atoms in the second layer. The reaction from structure 2H to 6H can be interpreted as a H-induced relief of the stress in the Si-rich surface of SiC(001). An illustration of the nanotunnel thus created is shown in Fig. 12. The computed vibrational frequencies (cf. Fig. 13), obtained from ab initio phonon calculations using MedeA ® -Phonon and VASP for the nanotunnel structure, agree very well with the experimental data obtained from high-resolution electron energy loss spectroscopy for SiC(001) 3 × 2 surface exposed to hydrogen and deuterium. 23) Furthermore, the computed frequencies are also consistent with the notion that Si-H stretch frequencies are shifted to higher values if the Si atoms are bonded to C atoms. Earlier experiments using infrared spectroscopy showed absorption at 2118 and 2140 cm −1 (Δν = 22 cm −1 ). 26) Computations using the nanotunnel model result in frequencies for the stretch modes Si1a-H and Si3a-H of 2087 cm −1 (not marked explicitly in Fig. 13) and 2120 cm −1 (Δν = 33 cm −1 ) as discussed in Ref. 23. The earlier bridge-bonded model is inconsistent with these experimental data. Thus, ab initio calculations have been essential in the clarification of the remarkable nanotunnel structure of a silicon carbide surface. Trends and Perspectives During the past decades we have witnessed steady progress in computational materials engineering of growing industrial value. It is probably fair to say that the ability to compute total energies of ensembles of any types of atoms using density functional theory is a cornerstone for this remarkable development. This fundamental capability has enabled in-depth understanding of rather complex systems and the prediction of a range of materials properties as illustrated in the previous section for selected cases. As example, for electronic applications we have presented a computational analysis of the interface between HfO 2 and TiN in the context of enhancing the efficiency of transistors with high-k dielectrics. The strength of metal/ceramic interfaces as a function of the interface chemistry has been illustrated here for the case of Al/Si 3 N 4 together with the ab initio calculation of thermal expansion coefficients. The accurate prediction of lattice parameters of compounds such as transition metal oxides in the spinel structure has enabled the design of low-strain cathode materials for Li-ion batteries. An application of this methodology to boron carbides using the cluster expansion method has helped to clarify the energetically preferred arrangement of boron and carbon atoms. 
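The quantity underlying such a ground-state search is the formation energy of each candidate ordering relative to the end members; orderings that lie on the convex hull of formation energy versus composition are the predicted stable phases. The short Python sketch below illustrates this bookkeeping with purely hypothetical numbers; it is not the UNCLE cluster-expansion code, and the structure labels, total energies, and reference energies are placeholders.

def formation_energy(e_total, n_b, n_c, e_b_ref, e_c_ref):
    """Formation energy per atom (eV/atom) of a cell with n_b boron and n_c carbon atoms,
    referenced to the per-atom energies of the end members."""
    n_atoms = n_b + n_c
    return (e_total - n_b * e_b_ref - n_c * e_c_ref) / n_atoms

# Hypothetical candidate orderings at 20 at% C: (label, total energy in eV, #B, #C)
candidates = [
    ("B11C icosahedra + C-B-C rods", -110.5, 12, 3),
    ("B12 icosahedra + C-C-C rods", -109.3, 12, 3),
]
e_b_ref, e_c_ref = -6.7, -9.1   # placeholder reference energies per atom (eV)

for label, e_tot, n_b, n_c in candidates:
    e_f = formation_energy(e_tot, n_b, n_c, e_b_ref, e_c_ref)
    print(f"{label:32s} x_B = {n_b / (n_b + n_c):.2f}  E_f = {e_f:+.3f} eV/atom")
# At fixed composition, the ordering with the lowest formation energy is the predicted
# ground state; repeating this over all compositions traces out the convex hull.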
Furthermore, the computation of the phonon dispersions of boron carbide has allowed the assignment of vibrational frequencies to specific atomic motions, such as the oscillation of boron atoms in linear C-B-C rods in boron carbides. Building on the results of DFT calculations, it is possible to predict optical properties such as the refractive index as a function of energy, which has been shown here for yttria. In the case of metal alloys, this ab initio approach provides detailed insight into the interaction of hydrogen atoms with a metal leading to the formation of hydrides and embrittlement, as illustrated here for the case of hydrogen in zirconium. In surface science, ab initio calculations have proven to be an invaluable tool to unravel complex surface reconstructions and to investigate the interaction of atoms and molecules with surfaces. In fact, an important industrial application of ab initio solid state calculations is related to heterogeneous catalysis. Here, the application to a surface science problem has been demonstrated for the case of H atoms inducing the formation of nanotunnels on Si-rich 3C-SiC(001) 3 × 2 reconstructed surfaces. In combination with precise experimental determinations of the vibrational properties of this system, ab initio calculations have revealed energetically favorable structures and have allowed the characterization of these structures by aligning experimental and computed vibrational frequencies, thus leading to the discovery of novel nanotunnel features. These examples are only a small selection of all the ab initio calculations which are currently being performed worldwide for a vast number of different materials. In fact, it is realistic to estimate that several million DFT calculations are currently being performed per year accumulating an unprecedented volume of data. This signifies a paradigm shift. Electronic structure calculations on solid state materials have a long history dating back into the 1930's, when the first augmented plane wave (APW) calculations were performed in the group of John Slater at the Massachusetts Institute of Technology. Throughout the second half of the past century, electronic structure methods have become increasingly sophisticated, but for the most part remained an academic discipline. Researchers typically spent months and sometimes years to investigate one or a handful of sys- tems. For example, in the early 1980's the all-electron ab initio calculation of the equilibrium geometry of a graphite monolayer (at the time not yet called graphene) took many hours of precious computing resources on supercomputers. 29) Today, such a calculation is completed in minutes on a laptop. However, the level of theory has not changed dramatically in the last 30 years. Indeed, we still rely on density functional theory. In the 1980's the common level of theory was the local density approximation (LDA) and the Kohn-Sham equation in the above case was solved with the then newly developed full-potential linearized-augmented-planewave (FLAPW) method. 30) Now, more than 30 years later the most common approach is the generalized gradient approximation (GGA) using the projector augmented wave (PAW) method, 31) which in essence is not all that different from the original APW method: the wave functions are still a combination of plane waves and localized atomic orbitals with numerical radial functions multiplied by spherical harmonics. 
The resulting C-C equilibrium distance given in 1982 was 2.450 Å or about 0.4% smaller than the experimental value in graphite. Today's ab initio values are essentially the same. One can say that during the past four or five decades, density functional calculations have evolved to a mature level. In fact, a recent systematic comparison of the major DFT codes resulted in a remarkable consistency of the computed results despite quite different algorithmic implementations. 32) What is novel and perhaps revolutionary at present is the ease and rate new data can be computed for increasingly complex systems. Stimulated by the Materials Genome Initiative in the USA, high-throughput calculations for hundreds of thousands of compounds have become possible and are driving research in a number of academic groups in the world. This is truly exciting and one can expect new algorithms, new computational procedures, and new forms of data analytics to be developed. Large databases of computed results are being created for existing as well as hypothetical structures. Mining this richness of information will undoubtedly reveal new insights and novel materials. While this large volume of data is valuable, it will not resolve other major challenges of computational materials engineering, namely (i) the multi-scale and multi-physics aspects, (ii) the inherently non-equilibrium character of materials, and -last but not least -(iii) the accuracy of the ab initio calculations. Let us consider these aspects as they provide a perspective on necessary future developments. In the overwhelming number of cases, materials engineering has a multi-scale and multi-physics character. For example, if one is interested in designing fracture resistant high-performance ceramics, one needs models of a polycrystalline material which are the result of a sintering process. Such models need to incorporate information about grain size, porosity, composition and properties of the crystalline grains with their defects, the structure of the intergranular interfaces, chemical segregation effects, residual stresses and perhaps charges trapped in defects. The fracture pro-cess involves long-range strain fields, elastic and plastic deformations on the mesoscale, grain boundary sliding, crack propagation, bond breaking at crack tips, and diffusion processes. Bond breaking and chemical rearrangements may cause local changes in vibrational energy which may entail thermal transport phenomena. Quantitative and statistically relevant modeling and simulations of such a system are all but routine with current simulation technology and yet the above case is "simple" from an engineering point of view. A more complicated case would be stress corrosion cracking involving an aqueous phase, which adds electrochemical aspects. While a decomposition of such materials problems into coupled discrete and continuum models is tempting, the dynamic fluctuations of atomistic and continuum domains, the vast configurational space, and the many orders of time scales involved make such a direct approach challenging. Such examples point to the need for novel and innovative theoretical and computational approaches combining coarse-graining in length and time scales in moving from electronic structure calculations to a continuum description with "fine-graining" in regions such as crack tips, where atomic-scale phenomena are decisive. The non-equilibrium character of materials represents a major and fundamental challenge for computational materials engineering. 
This means that the processing history needs to be included in the modeling. Because of the inherent uncertainties, a statistical approach is needed to establish probabilities and to estimate boundaries. Possibly techniques from manufacturing and quality control may have to be combined with the methods of computational materials science to capture this aspect. Finally, there remains the question of accuracy of current ab initio methods. While quantum chemists have developed approaches, which -at least in principle -can be converged to any desired degree of accuracy, this is not the case for the current form of density functional calculations. No systematic and practical way is known today to converge to the exact density functional. The limitations of DFT-GGA calculations are fairly well known, but despite intense efforts by a number of leading research groups in the world, there is no systematic and practical ab initio many-body approach which would allow one to compute, for example, the temperature of solid-liquid phase transitions to within a few degrees even for systems such as pure silicon. This present situation is by no means unusual in the evolution of science and technology. Rather, it should be taken as stimulus for pushing the frontiers of science. One also has to keep in mind that engineering in all its forms is not and never will be perfect in all respects, but good engineering implies reliable control of the limits. It is not necessary to predict the exact yield stress of a particular sample of a material. What is required is the knowledge of the upper bound where this sample will not fail. This general engineering principle is a good guide in computational materials engineering. Rather than seeking the ultimate accuracy, one needs computational protocols and approaches, which give reliable boundaries. This means a clear understanding of the key physical and chemical mechanisms which determine the properties of a material. The true art of computational materials engineering is knowing which aspects can be neglected while keeping the key characteristics of the problem. Sophisticated computational software environments with all their tools and capabilities facilitate this task, but in final analysis the best multi-scale and multi-physics tool remains the creative human mind abetted by the finest tools developed by the combined effort of the scientific and engineering disciplines.
2019-04-15T13:05:33.392Z
2016-05-31T00:00:00.000
{ "year": 2016, "sha1": "f66ddd3c652c1135ca8e53d7fec957c2af844e16", "oa_license": "CCBYNC", "oa_url": "http://www.jkcs.or.kr/upload/pdf/jkcs-53-3-263.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "f66ddd3c652c1135ca8e53d7fec957c2af844e16", "s2fieldsofstudy": [ "Materials Science", "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
231579899
pes2o/s2orc
v3-fos-license
Non-immunogenic Induced Pluripotent Stem Cells, a Promising Way Forward for Allogenic Transplantations for Neurological Disorders Neurological disorder is a general term used for diseases affecting the function of the brain and nervous system. Those include a broad range of diseases from developmental disorders (e.g., Autism) over injury related disorders (e.g., stroke and brain tumors) to age related neurodegeneration (e.g., Alzheimer's disease), affecting up to 1 billion people worldwide. For most of those disorders, no curative treatment exists leaving symptomatic treatment as the primary mean of alleviation. Human induced pluripotent stem cells (hiPSC) in combination with animal models have been instrumental to foster our understanding of underlying disease mechanisms in the brain. Of specific interest are patient derived hiPSC which allow for targeted gene editing in the cases of known mutations. Such personalized treatment would include (1) acquisition of primary cells from the patient, (2) reprogramming of those into hiPSC via non-integrative methods, (3) corrective intervention via CRISPR-Cas9 gene editing of mutations, (4) quality control to ensure successful correction and absence of off-target effects, and (5) subsequent transplantation of hiPSC or pre-differentiated precursor cells for cell replacement therapies. This would be the ideal scenario but it is time consuming and expensive. Therefore, it would be of great benefit if transplanted hiPSC could be modulated to become invisible to the recipient's immune system, avoiding graft rejection and allowing for allogenic transplantations. This review will focus on the current status of gene editing to generate non-immunogenic hiPSC and how these cells can be used to treat neurological disorders by using cell replacement therapy. By providing an overview of current limitations and challenges in stem cell replacement therapies and the treatment of neurological disorders, this review outlines how gene editing and non-immunogenic hiPSC can contribute and pave the road for new therapeutic advances. Finally, the combination of using non-immunogenic hiPSC and in vivo animal modeling will highlight the importance of models with translational value for safety efficacy testing; before embarking on human trials.
INTRODUCTION Neurological disorders affect over a billion people worldwide (WHO, 2006). Amongst those the most frequent ones are strokes, epilepsy, migraine, Alzheimer's disease (AD) and Parkinson's disease (PD), which have an enormous economic and societal impact as well as diminishing patients' quality of life. At least 12% of all deaths worldwide can be attributed to neurological disorders (WHO, 2006), with the majority still lacking appropriate and curative treatment options. For this reason, attempts of cell replacement therapies have intensified due to their potential to replenish dead and damaged tissues with healthy, and for monogenic neurological disorders, genetically corrected cells. The idea of replacing damaged or diseased components of our body with healthy ones, has been pursued since the late sixteenth century when the Italian surgeon Gaspare Tagliacozzi was the first to perform a skin transplant (Tomba et al., 2014). He observed that transplants from donor individuals very often resulted in graft rejections. This failure was coined as "The force and power of individuality, " which we nowadays know is the immune system (Siemionow, 2018). Since then, our understanding of the immune system has greatly improved, leading to the development of new strategies for successful transplants. The use of cells for transplantation has become increasingly popular due to their accessibility, and less invasive transplant procedures. Despite improvements, the biggest challenge for a successful transplant still lies in the problem Tagliacozzi encountered 500 years ago, namely our individual immune system limiting comparability e.g., in human leukocyte antigens (HLA) and reducing the chances of finding a matching donor. If we are to succeed in efficiently applying transplants and cell therapy treatments, a different and more effective approach to resolve graft rejection is required. In this regard, the recent advances in precise genome engineering has launched new possibilities of designing cells, as it enables correction of pathogenic mutations and insertions of new genetic information. This narrative review updates the reader on the application of cell replacement in the treatment of neurological disorders, with a focus on PD where techniques are currently most advanced. Furthermore, a presentation and discussion of potential strategies to implement CRISPR-Cas9 for generating non-immunogenic human induced pluripotent stem cells (hiPSC), which have shown promising results compared to the currently applied strategies, will be made. 
NEUROLOGICAL DISORDERS The term neurological disorder spans a wide variety of disorders, as it includes all disorders caused by malfunction of the central and/or peripheral nervous system. Most neurological disorders, such as stroke, sporadic PD and ALS, do not have a clear genetic background, even though they have genetic risk factors (Klein and Westenberger, 2012;Boehme et al., 2017;Mejzini et al., 2019). Other disorders such as Huntington's disease, familial AD, and muscular dystrophy have well-known pathogenic mutations. Despite their huge differences, the majority of these disorders share one common trait, which is the vulnerability of specific neurons. This vulnerability manifests in symptoms such as seizures, muscle weakness, cognitive decline, and partial to complete paralysis. Even though diverse neurons and cell types are affected in the various disorders, it is generally accepted that common pathological events lead to degeneration and cell death (Chen et al., 2016). For this reason, beneficial effects of shared treatment targets, such as improving mitochondrial function (Mattson et al., 2008) and dampening abnormal inflammatory responses (Skaper et al., 2018), should be investigated. Treatments that have shown benefits in more than one neurological disorder include electrical deep brain stimulation (Kocabicak et al., 2015), anti-inflammatory drugs (Terzi et al., 2018) and anti-epileptic drugs (Bialer, 2012). Pathological commonalities could be exploited to generate treatment options targeting a broad spectrum of neurodegenerative diseases and to elucidate early disease hallmarks, which is central to preventing severe and irreversible damage at later disease stages. Those late disease stages are currently the time points at which the majority of treatment is attempted. Podder et al. showed that besides an overlap in symptoms, several neurological disorders including PD, AD, schizophrenia, autism and migraine have overlapping genes such as BDNF, DRD2, GAD1, GRIN2A, MAOA, and MTHFR, which affect the functionality of dopamine receptors within a connected protein-protein interaction network (Podder and Latha, 2017). Such genetic studies have the potential to help elucidate which pathways and genes are common between various disorders, allowing for more generalized treatment. However, more detailed knowledge of the individuality of neurological disorders may provide crucial information on specific variations in response, including why a generalized treatment, such as cell replacement therapy, might not show equal efficiency for different neurological disorders. The search for treatment and a cure for various neurological disorders is limited by the difficulty of studying the human nervous system and the complex interplay between disease cause, pathology and phenotype. Supplementary Table 1 depicts an overview of neurological disorders, all currently with ongoing clinical trials assessing stem cells as treatment. Additionally, the table lists an overview of the specific pathologies, cell types, areas of the nervous system affected and current treatment options. To provide an overview, the number of current clinical trials (searchable through www.clinicaltrials.gov) is listed together with a reference to the most recent review paper on stem cell treatment for the specific disorder. CELL REPLACEMENT THERAPY Cell replacement therapy has increased in popularity since it was shown in 1957 that allogenic bone marrow transplants were successful for treating leukemia (Thomas et al., 1957). 
Since then, hematopoietic stem cells have been tested as a potential treatment for the majority of neurological disorders (Sun and Kurtzberg, 2018). Especially for neurodegenerative diseases such as PD, great progress has been made since the first allogenic study in humans, which successfully injected fetal mesencephalic tissue containing dopaminergic neurons into the striatum of two PD patients in 1989 (Lindvall et al., 1989). FIGURE 1 | Overview of sources for cell replacement therapy. The source for cell replacement therapy can be either autologous or allogenic. Allogenic cells are associated with an immune response and increase risk of rejection. Therefore, donor cells need to be matched to the patient's immune system. Various allogenic sources such as fetal cells, somatic stem cells, differentiated cells or derivatives from hiPSC can be used for cell replacement therapy. Autologous cells do not cause an immune response. Autologous cells are extracted from the patient either as somatic stem cells or somatic cells for reprogramming. These cells can be injected back into the patient as multipotent stem cells or as differentiated cells. The great potential of stem cells for treatment lies in the nature of stem cells. Stem cells are able to differentiate into most cell types found in the body and their continuous proliferation capacity allows for large scale treatment. Figure 1 shows the various sources that can be used for stem cell-based treatment, which can be either autologous or allogenic. Allogenic transplants are generally associated with an immune response, whilst most autologous transplants cause no immune response (Champlin, 2003). In general, pluripotent stem cells are not considered for transplantation, due to their difficult-to-control proliferation and oncogenic properties (Mamelak et al., 1998;Keene et al., 2009;Mousavinejad et al., 2016). More commonly, precursor cells (Strnadel et al., 2018) or multipotent stem cells, such as mesenchymal stem cells (MSC) (Saeedi et al., 2019), are implemented. Even though autologous transplants are not associated with rejection, studies have shown varying improvements of treatment depending on the disorder. Little improvement has been reported for neurological disorders affecting motor neurons, such as amyotrophic lateral sclerosis (ALS) (Goutman et al., 2019) and spinal muscular atrophy (Carrozzi et al., 2012), whereas stabilization and minor improvement are reported in PD (Brazzini et al., 2010;Canesi et al., 2016). Several reasons could underlie these discrepancies in effectiveness, such as the source of the stem cells and the cell type transplanted. For treatment of PD, a substantial difference in results measured by motor scores, non-motor function and cognitive function is seen between transplantation of multipotent stem cells and more differentiated precursor cells. Transplantation of multipotent stem cells gave varying results, ranging from no effect (Venkataramana et al., 2010) over stabilization (Canesi et al., 2016) to small improvement (Brazzini et al., 2010). In contrast, transplantation of neuronal precursor cells for dopaminergic neurons has shown consistent improvement in several studies (Piccini et al., 1999;Stoker, 2018). Apart from the various cell types and their sources, an explanation of the different stem cell-based transplantation efficacies could simply lie in the nature of the disorders. 
Stem cell-based therapies for disorders such as PD and ALS might be more efficient compared to disorders where multiple cell types are lost such as seen in AD. The difference in efficiency is to some extent caused by the current ability to differentiate certain cell types from stem cells. For PD treatment where dopaminergic neurons are replaced, more than 70 differentiation protocols have been published, resulting in high efficiency (Marton and Ioannidis, 2019). On the contrary, differentiation protocols for motor neurons used for replacement in ALS are not as developed (Gowing and Svendsen, 2011). For treatment of disorders such as AD, transplantation of a single neuron sub type will most likely not be sufficient and co-culture protocols with several cell types still need optimization before being used for stem cell therapy (Goshi et al., 2020). Allogenic transplants have superior treatment outcomes in disorders such as cancer compared to autologous transplants (Champlin, 2003). It may therefore be likely that allogenic transplants will also be more favorable for stem cell-based treatment of neurological disorders, and allogenic transplants have already been widely used in clinical trials for PD treatment where all showed improvement in "on" and "off " -states and a decrease in Levodopa dose for at least 12 months after treatment (Henderson et al., 1991;Kordower et al., 1995;López-Lozano et al., 1997;Piccini et al., 1999;Brundin, 2000;Venkataramana et al., 2010Venkataramana et al., , 2012. Particularly, autologous and allogenic MSCs (Venkataramana et al., 2010(Venkataramana et al., , 2012 have been used as they readily differentiate into neurons (Scuteri et al., 2011) and display protective anti-inflammatory effects on dopaminergic neurons (Kim et al., 2009). A disadvantage of MSCs is the difficulty to grow them in vitro, severely hampering the expansion capability and limiting the use of one donor to treat several patients. Another even more popular source for cell mediated PD treatment has been neuronal tissue from aborted fetuses (Henderson et al., 1991;Kordower et al., 1995;López-Lozano et al., 1997;Piccini et al., 1999;Brundin, 2000). This type of treatment possesses a number of disadvantages. Besides the need for immunosuppressive medication, the use of fetal donors presents serious ethical issues. Moreover, the small number of cells available from aborted fetuses is not sufficient to offer generalized treatment. Cell therapy has a great potential as treatment of a broad variety of neurological disorders, such as the ones listed in Supplementary Table 1, it is of high interest to find the cell type that provides the best platform to initiate personalized treatment. One very promising stem cell type is hiPSCs. They can be generated from various tissues, including nucleated blood cells, which allows easy and pain free access to cellular material. Moreover, hiPSC are widely used to conduct genetic modifications using the CRISPR-Cas9 gene editing tool. If genetic defects can be repaired prior to transplantation in cells which would not be rejected by the host immune system, this would take personalized medicine to an even higher level. Consequently, hiPSC may provide a universal platform for cell therapy especially in combination with gene editing to obtain non-immunogenic cells. IMMUNE REJECTION The greatest obstacles for transplantation of hiPSC is the immune response causing graft rejections. 
The immune response is triggered when the host's immune cells recognize antigens presented by the major histocompatibility complex (MHC) on the surface of the foreign cells as being different to the hosts. This recognition initiates a cascade of signaling pathways releasing cytokines that varies depending on the type of recognition. There are two classes of MHC. MHCI is expressed on all nucleated cells where they present antigens from the interior of the cell and are required for the activation of CD8+ cytotoxic T-cells (Abbas and Lichtman, 2009). For humans, the MHCI is separated into three major classes called HLA A, B, and C and three minor classes called HLA E, F, and G. The MHCII is expressed on antigen-presenting cells such as dendritic cells where they express antigens from extracellular proteins and are required for activation of CD4+ helper T-cells. The MHCII corresponding HLAs are HLA-DM, HLA-DOA, HLA-DOB, HLA-DP, HLA-DQ, and HLA-DR (Ting and Trowsdale, 2002). Immune rejection can generally be divided into two categories depending on whether it is triggered by the immune cells of the host or by the immune cells present in the graft. An immune rejection triggered by the immune cells of the host is caused by host T-cells recognizing MHCI from the recipient or by host CD4+ cells recognizing peptides from the antigen presenting cells of the graft (Figure 2A). Immune rejection can also be caused by the immune cells of the graft recognizing the MHC of their new host (Same response mechanism as Figure 2A). In order to use cell therapy for treating neurological disorders it is necessary to elucidate how the problem of immune rejection can be avoided. Recent approaches to avoid rejection in general include filtering the recipient's blood, such that only regulatory T-cells that aren't able to recognize the antigens of the donor are kept (Sánchez-Fueyo et al., 2020). A different strategy is to desensitize the recipients immune system by removing antibodies and replacing them with antibodies from the donor (Leventhal et al., 2012;Kawai et al., 2014). These approaches have already showed great promise, with several patients being able to stop immune suppressing medication 4-12 month after transplant, even for allogenic donors (Kawai et al., 2008). Other strategies have used quiescent donor dendritic cells, which induced regulatory T-cells, resulting in tolerance of the transplant (Yates et al., 2007) or using co-stimulation blockade of various antigens (Grinnemo et al., 2008). Despite these new approaches, the main strategy to avoid rejection is still the use of HLA matching, as a better match reduces the risk of hyper acute rejection, acute rejection and host vs. graft rejection (Morishima et al., 2002). However, donor HLA matching which is implemented in combination with immunosuppressive drugs can be very challenging for patients with rare HLA types. Furthermore, HLA matching has not been shown to have an effect to prevent chronic rejection (Aron Badin et al., 2019) and graft vs. host disease can still occur if the minor histocompatibility complexes mismatch (Wood et al., 2016). Immune Rejection in the Brain Until recently it was considered easier to perform allogenic transplants in the brain, since the brain was considered immune privileged (Louveau et al., 2015). However, several studies have shown that this is not the case as allogenic brain transplants can cause immune response from neural transplants (Lawrence et al., 1990;Krystkowiak et al., 2007;Fainstein and Ben-Hur, 2018). 
Other studies show that allogenic transplants do not result in rejection or cause life-threatening or severe symptoms, even though an immune response can be measured (Henderson et al., 1991; Kordower et al., 1995; López-Lozano et al., 1997; Venkataramana et al., 2012). For instance, a post-mortem study of a PD patient receiving a transplant of fetal allogenic neurons showed only a mild immune response 4 years after transplantation, despite the fact that the patient had only received immunosuppressive treatment for 6 months (Mendez et al., 2005). An explanation for this lack of immune rejection has been the low expression of MHCI and MHCII in various cell types of the brain, such as non-activated microglia and astrocytes (Adelson et al., 2012). A study in non-human primates confirmed low expression of HLA-I in dopaminergic neurons, causing only a mild immune response and no rejection (Morizane et al., 2013). MHCII expression is not only found in microglia, as initially expected, but also in a subpopulation of neural progenitor cells during development (Vagaska et al., 2016). Both MHCI and MHCII are involved in the recognition process of the immune system. A low expression of these is associated with a lower immune response, as there will be less MHCI and MHCII present at the cell surface to present antigens causing T-cell activation (Figure 2). In this respect, stem cells display a very low MHCI expression and therefore lower immunogenicity (Drukker et al., 2006). They will, however, begin to express MHCI and MHCII during differentiation (Lawrence et al., 1990; Liu et al., 2017). Surprisingly, derivatives from stem cells have shown very different results in regard to immune response (Zhao et al., 2015; Wood et al., 2016). Studies in mice with autologous iPSC derivatives show no immune response when injected into the renal space (Guha et al., 2013), the dorsa (De Almeida et al., 2014), or the tail vein (Araki et al., 2013). Another study in a humanized mouse model shows varying immune responses depending on the iPSC derivative (Zhao et al., 2015). This difference is believed to be partially caused by the minor histocompatibility complexes, which might have different levels of influence depending on the cell type and differentiation state (Goulmy, 1997; Robertson et al., 2007). In general, all studies that showed a low immune response differentiated the mouse iPSC in vitro, whereas the other study looked at cell types in a formed teratoma (Zhao et al., 2015). Even though the majority of treatments using stem cell derivatives for neurological disorders only caused mild immune responses, all allogenic stem cell treatments have to be given in combination with immunosuppressive drugs. Stem cell treatment without simultaneous immunosuppressive drugs has been shown to cause a life-threatening inflammatory state in a patient, and a concise review presents several cases of adverse events (Alderazi et al., 2012; Bauer et al., 2018). Furthermore, treatment with immunosuppressants has the serious disadvantage of significantly increasing the susceptibility to infections, and can cause cell death (Inglese et al., 2004; Rocca et al., 2007). Despite the need for immunosuppression, it is being investigated whether the treatment can be terminated after a period to avoid some of the negative side effects of lifelong immune suppression. Two different methods have already proved successful for renal transplants.
One works by "re-setting" the circulating immune cells through a drug targeting white blood cells for destruction, followed by blocking the co-stimulatory pathway to ensure that newly formed blood cells will not recognize the graft (Kirk et al., 2014). Alternatively, blood cells from the donor can be injected into the recipient, resulting in chimerism of the immune system, which has been shown to cause no rejection in several patients 18 months after transplantation in an ongoing phase 2 clinical trial (Leventhal et al., 2015). Even though these studies were conducted with renal transplants, they provide evidence of feasibility and underline the great potential for transplants of other cell types into the brain.

GENERATING NON-IMMUNOGENIC IPSC

hiPSC

By developing non-immunogenic hiPSC, one donor can potentially help numerous patients, as a single biopsy reprogrammed into hiPSC can be grown and expanded indefinitely in culture. hiPSCs are cells that have been reprogrammed from differentiated somatic cells into a pluripotent state by expressing four transcription factors expressed in the inner cell mass of early blastocysts (Takahashi and Yamanaka, 2006). The technique was discovered in 2006 by Yamanaka and has since been widely used in research because of its convenience, availability, and reduced ethical constraints compared to embryonic stem cells. The main use for hiPSCs is in the field of research, serving as human in vitro disease models, mainly from patients with genetic mutations found in the rare familial forms of AD, PD, and ALS (Imaizumi and Okano, 2014) (see Figure 3). Differentiation protocols to generate specific cell types such as glutamatergic and GABAergic neurons, astrocytes, oligodendrocytes, and microglia from hiPSC have been greatly improved over the past decade, allowing the investigation of cell type-specific disease mechanisms caused by the pathogenic mutations (Ebert et al., 2012). Currently, very few clinical trials are conducted with hiPSCs; however, results from a single clinical trial were published in 2020, showing improvement in one PD patient 24 months post transplantation (Schweitzer et al., 2020). The study injected autologous hiPSC, differentiated into midbrain dopaminergic progenitor cells, into the putamen of the left and right hemispheres, with 6 months between injections; but as only one subject (age 69 with a 10-year PD history) was included, conclusions are severely limited. Currently, several studies are ongoing, including a collaborative study generating dopamine neurons from HLA-matched donor hiPSC, autologous hiPSC, and hESC to treat PD (Barker et al., 2017) and two clinical trials using neural stem cells derived from hiPSC for treatment of PD (trial numbers: NCT03815071 and NCT02452723). Even though clinical trials with hiPSC have only been conducted in PD patients, preclinical studies with iPSC have shown benefits in mice with spinal cord injury (Cummings et al., 2005), Huntington's disease, ALS (Kondo et al., 2014), and stroke (Zhang et al., 2011). hiPSCs hold further potential for treatment as they can be used efficiently in combination with gene editing tools such as CRISPR-Cas9. This allows the generation of isogenic controls for in vitro work (Soldner et al., 2011) and the correction of pathogenic mutations in patient cells (Pires et al., 2016), hereby providing autogenic cells with a higher therapeutic potential (see Figure 3).
Disadvantages of hiPSCs are that the reprogramming procedure may produce minor histocompatibility mismatches, causing rejection even for autogenic transplants (Wood et al., 2016), and that reprogramming is, in up to 50% of cases, associated with other genetic and epigenetic modifications (Gore et al., 2011). In order to lower the risk of genetic change, non-integrative reprogramming strategies using episomal plasmids, Sendai virus, or synthetic mRNA are favorable for clinical cell lines. hiPSCs that are generated from skin biopsies may, in addition, contain unwanted mutations caused by their exposure to UV light. These challenges all have to be overcome before hiPSC can be applied for cell therapy. This underlines the necessity for extensive testing of potential cell lines prior to transplantation. Testing should include karyotyping for chromosomal validation, validation of the differentiation protocol to ensure differentiation potential, and whole genome sequencing or targeted sequencing to find unwanted mutations and specific disease genes conferring an additional risk for other pathologies. Using a single cell line makes it possible to perform extensive quality control, which, due to time and money constraints, will not be feasible for personalized cell lines (Smith, 2012). Furthermore, using a single cell line for several patients allows for easier comparison due to the identical genetic background of the transplants. The stem cell lines would have to go through quality control on a regular basis, as in vitro culture is known to introduce genomic changes (Peterson and Loring, 2014). Furthermore, the lines should be screened for other genetic risk factors, which could be corrected using CRISPR-Cas9. Correction of genetic risk factor SNPs, or even exchanging them for preventive ones, could result in the generation of superior cells for transplantation. Generation of such superior cells falls into a new category of gene editing, posing ethical considerations that must be justified prior to proceeding (Mikkelsen et al., 2019).

CRISPR-Cas9

CRISPR is the natural adaptive immune system in bacteria (Mojica et al., 2005), a system in which the Cas9 endonuclease targets and cleaves the genome of bacteriophages via matching of base pairs. In 2012, Doudna et al. designed the now popular and widely used CRISPR-Cas9 system, which can target almost any site in the mammalian genome by designing a 20-nucleotide RNA sequence in the guide RNA, complementary to the DNA target site (Jinek et al., 2012). The guide RNA in complex with Cas9 can only attach to the DNA if the guide sequence is located directly upstream of a protospacer adjacent motif (PAM), which is present in the human genome on average every 42 bp (How Often Are the PAM Sequences Presented in the Mammalian Genome in Average?, 2020).

FIGURE 3 | Human induced pluripotent stem cells (hiPSC) for treatment and modeling of genetic diseases. This figure illustrates the various applications of hiPSC, which can be generated from a healthy donor and afterwards differentiated into specific cell types for cell-based therapy or downstream drug screening in in vitro based cell systems. The patient-derived hiPSC can additionally be genetically corrected using CRISPR-Cas9. This isogenic cell line can be differentiated and compared to cells from the patient to obtain knowledge of disease- and mutation-specific mechanisms. Differentiated gene-corrected hiPSC can be injected back into the patient for autologous cell therapy.
After attachment, Cas9 will make a double-stranded cut in the DNA 3 bp upstream from the protospacer adjacent motif site, allowing for either knock-out (KO) of genes or insertion of new genetic material. CRISPR-Cas9, facilitated by short RNA guide molecules, has become one of the most used gene editing tools. CRISPR-Cas9 is superior in regard to efficiency and simplicity of design compared to other, protein-based gene editing tools such as TALENs and zinc finger nucleases. CRISPR-Cas9 has mainly been used to generate KO of various genes, which has an efficiency of up to 100% for hiPSC (Li et al., 2018) and over 80% for human embryonic stem cells (Bohaciakova et al., 2017), varying from cell line to cell line. Another popular application for CRISPR-Cas9 is to edit genomic information (Xu et al., 2018), although this is less efficient because the editing depends on the less prevalent repair mechanism, homology-directed repair, which relies on a template providing the new genetic information. Insertions of various sizes, from 1 bp (Okamoto et al., 2019) to several kbp (He et al., 2016), have been successful, allowing the use of CRISPR for a wide range of studies. In hiPSC, CRISPR-Cas9 has generally been used to generate cell models to investigate the cellular pathologies of neurological disorders with a defined genetic background. In these models, a known pathogenic mutation, such as the A53T mutation in the SNCA gene in PD cell lines, can be corrected with the healthy nucleotide to obtain a gene-corrected cell line. If the disease phenotype is mutation dependent, all cellular disease phenotypes should be absent and thereby rescued via CRISPR-Cas9 gene editing. Another option is to introduce pathogenic mutations into a "healthy" hiPSC line to generate a cell line that should show a phenotype similar to the patient's cell lines. These corrections or insertions allow for comparative studies (Zhang et al., 2017) (see Figure 3) and furthermore open future opportunities for patient-specific therapies (Safari et al., 2020). Gene editing has not only been applied in disease modeling. Cell therapy implementing gene editing has already been tried in rodents, where adenovirus was used to therapeutically insert the gene for tyrosine hydroxylase in MSC with subsequent transplantation into brains of PD mouse models, successfully increasing dopamine levels (Lu et al., 2005). In humans, the potential of using gene editing for treatment has already been shown for other disorders, such as leukemia, where gene editing has been used to save the lives of two infants. This was done by gene editing T-cells to express a chimeric antigen receptor against the B cell antigen CD19 (Qasim et al., 2017). These genetically engineered T-cells seek out CD19+ B cell acute lymphoblastic cancer cells and eliminate them. These pioneering works underline that gene editing in stem cells has the potential to enhance a particular cell type, allowing for more efficient stem cell treatment. Current challenges in using CRISPR-Cas9 for gene editing include off-target effects and varying on-target efficiencies, although on-target efficiency is even lower when using other tools such as TALENs. Off-target effects caused by CRISPR-Cas9 have been highly debated as one of the greatest problems with CRISPR-Cas9, and several researchers have reported genomic changes caused by off-target events (Cho et al., 2014). Variation of on-target efficiencies can be attributed to cell type differences and target sites, which makes it necessary to design and test several guides, as illustrated in the sketch below.
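As a minimal illustration of the guide-design rules outlined above, the following Python sketch scans a DNA sequence for canonical NGG protospacer adjacent motifs, extracts the 20-nucleotide protospacer immediately upstream of each, and reports the position roughly 3 bp upstream of the PAM where the blunt cut is expected. The example sequence, function name, and output format are hypothetical; only the forward strand is scanned, and no off-target or efficiency scoring is attempted.

```python
# Minimal, illustrative sketch: enumerating candidate SpCas9 guides on the
# forward strand of a DNA sequence (hypothetical example sequence; canonical
# NGG PAM only; no off-target or efficiency scoring).

def find_candidate_guides(sequence: str, guide_len: int = 20):
    """Return (protospacer, pam, predicted_cut_position) tuples.

    The protospacer is the guide_len-nucleotide stretch immediately 5' of an
    NGG PAM; the blunt double-strand cut is expected ~3 bp upstream of the PAM.
    """
    seq = sequence.upper()
    candidates = []
    for pam_start in range(guide_len, len(seq) - 2):
        pam = seq[pam_start:pam_start + 3]
        if pam[1:] == "GG":                        # canonical NGG PAM
            protospacer = seq[pam_start - guide_len:pam_start]
            cut_position = pam_start - 3           # ~3 bp upstream of the PAM
            candidates.append((protospacer, pam, cut_position))
    return candidates


if __name__ == "__main__":
    # Hypothetical target region, not a real genomic locus.
    target = "ATGCCGTAGGCTTACGGATCCAGGTTGCAGGAATCCTGGTACGG"
    for protospacer, pam, cut in find_candidate_guides(target):
        print(f"guide: {protospacer}  PAM: {pam}  predicted cut near position {cut}")
```

In practice, candidate guides enumerated in this way would still need to be ranked and validated experimentally, since on-target efficiency varies with cell type and target site.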
One explanation for varying on-target efficiency is the accessibility of the DNA. Heterochromatin is epigenetically modified to be hypermethylated and tightly packed, and is therefore predicted to be less accessible than euchromatin (Janssen et al., 2019). This varying efficiency of gene editing can affect personalized treatment with autologous cells compared to implementing a universal allogenic donor, where no gene editing is needed. As previously mentioned, such allografts are subject to rejection responses. Rejection responses could be suppressed by generating KOs of various genes encoding structural components of the immune system.

Current Non-immunogenic Cell Lines

Creation of a non-immunogenic cell line will allow for transplantation to multiple recipients. This will lower the cost of treatment and reduce the time, as cells can be banked and are readily available in clinical settings. The quest for non-immunogenic cells generated by gene editing is relatively new, but it has already led to a multitude of research and various approaches, listed in Table 1. Allogenic teratomas, fibroblasts, and cardiomyocytes were shown to be protected from rejection by continuous expression of immunomodulatory molecules such as CTLA4-Ig and Programmed death-ligand 1 (PD-L1) (Rong et al., 2014). Even though this approach showed low immunogenicity, most approaches are based on knowledge from cells with reduced immune responses, such as stem cells (Liu et al., 2017) and cells at the feto-maternal interface (Tsuda et al., 2019). A commonality for these cell types is that they have low or no expression of MHCI, which decreases the immune response (Figure 2) (Yang et al., 2020). For this reason, several researchers have generated cell lines where either the exons encoding various HLAs or the B2M gene has been knocked out. Generating a B2M KO leads to efficient ablation of all HLA-I types, as B2M encodes one structural part of the MHCI complex. Amongst the studies listed in Table 1, the ones knocking out the exons encoding HLA-A, B, and C all showed a decreased immune response when tested in vitro (Torikai et al., 2013; Hong et al., 2017). The studies generating KO or knock-down of B2M all implemented guided differentiation of the iPSC or ESC prior to transplantation into mice. None of these studies showed complete immune rejection or a significant increase in immune response. Except for one study, which measured graft survival after 42 days, all studies made the assessment after a couple of days. Making the assessment a short time after transplantation excludes the possibility of considering an immune response mediated by CD4+ T-cells and MHCII. The long-term assessment of the mice showed 40% survival of transplanted cells 42 days post injection, which clearly underlines the importance of MHCII for graft recognition and rejection (Deuse et al., 2011). MHCII is expressed on antigen-presenting cells and plays a central role in generating an immune response, justifying why approaches focusing on knocking out only the MHCII in hESC showed no immune response in vitro. By making a KO of MHCII in combination with MHCI, it is possible to generate an efficiently immune-deficient hiPSC line that shows no immune response in vitro, even after differentiation into cardiomyocytes (Mattapally et al., 2018). It is known that cells lacking the MHCI complex are targets for natural killer (NK) cells, which, as expected, has also been shown for cells where B2M has been ablated (Sentman et al., 1995; Lu et al., 2015).
To avoid cells being targeted by NK cells, different approaches have been used. One approach has been to keep HLA-C but knock out HLA-A, HLA-B, and the transcriptional coactivator for MHCII, CIITA, in hiPSC (Xu et al., 2019). This approach showed protection from NK targeting in vitro. The authors argue that by making a cell line expressing only HLA-C, HLA matching to reduce the risk of rejection becomes much simpler. By matching donor and host according to their HLA-C, only 12 cell lines with different HLA-C profiles would be sufficient to serve as donors for 90% of the population (Xu et al., 2019). A different strategy was pursued by Deuse et al., who had previously found that KO of MHCI in embryonic stem cells was sufficient to decrease the immune response, as embryonic stem cells naturally have a very low MHCII expression profile (Deuse et al., 2011). By investigating the gene expression profiles of cells at the interface of the fetus and the mother's blood supply, it was found that MHCI and MHCII, as expected, are highly downregulated, whereas CD47 is strongly upregulated (Deuse et al., 2019). This led to the design of hiPSCs with CRISPR-Cas9-generated KO of MHCI and MHCII via the B2M and CIITA genes and an upregulation of CD47 by lentiviral transduction to avoid NK targeting (Figure 2B). The gene-edited hiPSC did not cause an immune response, even in HLA-mismatched allogenic humanized mouse recipients. Data further showed that the lack of immune response persisted upon differentiation of hiPSC into cardiomyocytes and epithelial cells and that both showed long-term survival of at least 50 days in vivo. The long survival supports the efficiency of the design, in which all HLA class I and class II molecules were knocked out, except for HLA-G. It has previously been shown that KO of CIITA does not affect HLA-G expression, and HLA-G furthermore has a preventive effect on NK targeting (Zhao et al., 2014; Mattapally et al., 2018). As both the presence of HLA-G and the upregulation of CD47 have a preventive effect on NK targeting, the combination of the two, which is the design used by Deuse et al., may potentially decrease the risk for NK recognition further.

Risk Management

Despite the advantages of non-immunogenic hiPSC, a potential problem is their safety. If the immune system is not able to detect the foreign cells, then uncontrolled proliferation of stem cells can potentially lead to even more devastating effects. Even though the risk of tumorigenesis is low for hiPSC-derived cells, the need for the host to be able to target cells if infected or mutated is still an important aspect of the design. One strategy to increase the safety of non-immunogenic cells is to knock the HLA-E complex into the B2M locus (Gornalusse et al., 2017). Expression of HLA-E did not cause an immune response, and the risk of NK-mediated cell death decreased, as HLA-E is involved in NK-cell recognition (Braud et al., 1998). A favorable aspect of this design is that HLA-E can present peptides from bacteria on the cell surface, enabling the host's immune system to recognize the grafted cells in case of an infection (Lampen et al., 2013). Expression of HLA-E, however, does not prevent tumor growth (lo Monaco et al., 2011). The main approach to designing non-tumorigenic hiPSC is the use of a so-called suicide switch. One such example is the enzyme inducible Caspase-9 (iCaspase9), which is critical to the apoptotic pathway (Wu et al., 2014; Ando et al., 2015).
The iCaspase9 transgene has been successfully inserted into hiPSC by lentiviral transfection and leads to apoptosis within 24 h once induced by chemical stimulation, hereby serving as an inducible "suicide switch" (Yagyu et al., 2015). iCaspase9 has furthermore been shown to efficiently induce apoptosis in both hiPSC-derived neurons and astrocytes (Itakura et al., 2017). In 2020, the same design was implemented using TALENs in both hiPSC and macrophages, showing that the system efficiently induces apoptosis in 95-98% of hiPSC and 90% of hiPSC-differentiated macrophages (Lipus et al., 2020). A different attempt to generate non-tumorigenic hiPSC has been made by generating a safe cell system, inserting a transcriptional link between two genes responsible for cell division (CDK1) and cell suicide (HSV-TK) (Liang et al., 2018). In a similar manner as for iCaspase9, division and cell survival can be controlled by giving a specific drug, ganciclovir, which can thereby arrest and/or kill a potential tumor formation. A study in 2019 showed risks associated with knock-in of suicide switches (Kimura et al., 2019). The study knocked in the HSV-TK gene, which confers sensitivity to ganciclovir, in hiPSC. In vitro, ganciclovir exposure caused significant cell death, but the same exposure of teratomas in vivo showed varying resistance to the drug. The same study also raises doubts as to whether presumed safe loci in the human genome are in fact safe. These findings highlight the need for more research and risk assessments of various non-immunogenic cell designs. To make such risk assessments, a solid and translational model is needed. As the immune system and nervous system are both highly complex systems, in vitro modeling lacks the level of detail necessary to get an accurate picture. Most commonly, rodents are used for in vivo research, but they differ greatly from humans in metabolism, brain structure, and immune system (Mestas and Hughes, 2004). Porcine models and non-human primates are highly relevant for studying human diseases, as they both have a long lifespan and great physiological similarities to humans. Even though non-human primates share the greatest genetic similarity to humans, porcine models are preferable in many aspects such as availability, breeding, size, and an 80% overlap of immune parameters (Meurens et al., 2011). Porcine models have already been generated for neurological disorders such as Huntington's disease (Rausova et al., 2017), stroke (Lau et al., 2018), and other neurodegenerative disorders (Perleberg et al., 2018). Such models can provide platforms for testing of non-immunogenic cell lines for treatment prior to human testing.

STEM CELL TREATMENT FOR NEUROLOGICAL DISORDERS

The promising results obtained with stem cell-based treatment for PD show that this form of treatment has great potential in diseases where cells are dying, such as neurodegenerative diseases, or in stroke patients. The two conditions have fundamentally different outcomes, which can lead to profound differences in the success of the engraftment of transplanted cells. In the case of stroke patients, cells would be introduced to the affected site in an otherwise healthy brain environment. For success, the biggest challenges would be graft survival, avoiding tumor formation, and achieving functional connection to the existing brain cells.
Stem cell-based cell replacement therapies for neurodegenerative diseases face the same hurdles as described for stroke, but are additionally challenged by transplantation into an environment where pathogenic mechanisms are in place, causing degeneration and apoptosis of brain cells. Inserting healthy cells into this type of "hostile" environment can either have a positive effect, as healthy neural progenitor cells secrete neuroprotective factors, or the transplanted healthy cells can be negatively affected by the environment, resulting in graft failure (Kelly et al., 2004; Song et al., 2018; Willis et al., 2020). If the healthy cells are able to positively influence the cells of the host, they can potentially delay or even counteract the pathogenic mechanisms, which would result in improvement for the patient [National Institutes of Health (NIH), 2014]. If the host cells affect the healthy cells, the worst case would be no effect of the transplant. However, as most neurodegenerative mechanisms are slow to progress, it is perhaps more likely that the transplant would result in temporary improvement until the pathogenic mechanisms affect the healthy cells as well. This is supported by findings showing that the addition of astrocytes differentiated from MSC to Parkinsonian rats has beneficial effects, likely through the active secretion of neuroprotective factors (Bahat-Stroomza et al., 2009). Another approach showed that injecting non-differentiated MSC into a rat model of neuropathy resulted in decreased levels of pro-inflammatory proteins and an increase in anti-inflammatory proteins, supporting that healthy cells can modulate the inflammatory response of other cells via cell-to-cell interactions (Siniscalco et al., 2011). The properties of donor cells replacing dead cells and shifting host cells toward an anti-inflammatory state make stem cells particularly interesting for cell replacement therapies, as they can be differentiated into various cell types depending on the disorder that needs treatment. For treatment of PD, stem cells have been specifically differentiated into dopaminergic neurons in vitro (Arenas et al., 2015). Figure 4 gives an overview of a potential treatment strategy for PD using risk-optimized non-immunogenic stem cells differentiated into dopaminergic neurons. It should be noted that differentiation protocols for dopaminergic neurons do not produce pure populations (Kriks et al., 2011; Dell'Anno et al., 2014; Gonzalez et al., 2015; Hallett et al., 2015; Kikuchi et al., 2017; Takahashi, 2017). Research in rodents (Kriks et al., 2011) as well as non-human primates (Hallett et al., 2015) has shown that the success of PD treatment is strongly correlated with the purity level of dopaminergic neurons, explaining why differentiation protocols using cell sorting to obtain a purer population (Dell'Anno et al., 2014; Hallett et al., 2015; Kikuchi et al., 2017) provide more efficient transplants compared to research without cell sorting (Kriks et al., 2011; Gonzalez et al., 2015; Hallett et al., 2015; Takahashi, 2017). Such findings point out the need for optimization of cell differentiation protocols in order for stem cell treatment to increase its potential. Interestingly, research has shown upregulation of MHCI in murine dopaminergic neurons upon treatment with IFN-gamma and activated microglia, even though these neurons normally do not express MHCI (Cebrián et al., 2014b).
This indicates that MHCI is involved in neuroinflammation and could play an important role in the induced cell death seen in neurodegeneration. By transplanting cells lacking MHCI, the grafted cells might be less susceptible to induced cell death caused by the pathogenic environment in neurodegenerative diseases. For this reason, non-immunogenic stem cells with MHCI KO might be a better choice for treatment of PD and other neurological disorders causing neuroinflammation, compared to autologous cells. Despite the huge potential for using risk-optimized non-immunogenic iPSC for treatment of neurological disorders, challenges still need consideration. One problem is the variation in recovery and improvement of patients. These variations can be explained by factors such as the amount of tissue transplanted, the age of the donor tissue, or the injection site, which are factors that can be optimized for better treatment. One factor that cannot be changed is the disease stage of the patient. Patients early in their disease have shown greater improvement with stem cell therapy compared to patients with advanced disease (Venkataramana et al., 2012). This difference in response calls for early diagnostics in order for stem cell treatment to be most beneficial. Another issue for using non-immunogenic iPSC is the consequence of a lack of MHCI and MHCII expression. In rats, MHCI has been shown not only to be expressed in neurons (Needleman et al., 2010), but also to play a role in the development of the central nervous system (Cebrián et al., 2014a). Two studies in developing human fetuses have shown expression of MHCI in neurons of the lateral geniculate nucleus and the hippocampus during development, with expression changing as development advances (Zhang et al., 2013a,b). Their findings suggest that MHCI is involved in the maturation of neurons, similar to the findings in rodents. If MHCI is important for neural maturation, it will most likely be problematic to differentiate stem cells with KO of B2M into the mature neurons needed for treatment of disorders such as PD. It is therefore of high interest to test whether full differentiation of non-immunogenic cells, such as the ones generated by Deuse et al., is possible. As highlighted in this review, non-immunogenic hiPSC derivatives have a large potential to treat a variety of disorders and diseases. However, significant advances are required in order to determine if and to what extent this will be applicable for the various neurological disorders.

FIGURE 4 | Treatment of PD using non-immunogenic stem cells. The illustration shows the workflow for generating neural progenitor cells for transplantation into the striatum of a PD patient. A healthy donor provides a biopsy, preferably skin, which is then reprogrammed into iPSC using a non-integrative method. iPSC are then gene edited using CRISPR-Cas9 to generate non-immunogenic stem cells by knocking out MHCI and MHCII and upregulating CD47. Furthermore, a suicide switch is inserted for safety regulation. Once the non-immunogenic cells pass quality control (QC), they can be differentiated into the required cell type, for PD dopaminergic neural progenitor cells. Cells are then injected into the striatum where they can differentiate and integrate. Integrated cells are expected to not only replace dead cells but also to positively influence neighboring cells and decrease neuroinflammation.

AUTHOR CONTRIBUTIONS

HRF and KKF conceptualized and wrote the article. HRF generated the figures.
All authors edited and approved the final version of the article.

FUNDING

This work was supported by the following grants: Innovation Fund Denmark (BrainStem, 4108-00008B; NeuroStem, 4096-00001B). A personal Ph.D. fellowship to HRF was granted by the LifePharm Centre for In Vivo Pharmacology under the University of Copenhagen.
Antioxidant Biomolecules and Their Potential for the Treatment of Difficult-to-Treat Depression and Conventional Treatment-Resistant Depression

Major depression is a devastating disease affecting an increasing number of people from a young age worldwide, a situation that is expected to be worsened by the COVID-19 pandemic. New approaches for the treatment of this disease are urgently needed since available treatments are not effective for all patients, take a long time to produce an effect, and are not well-tolerated in many cases; moreover, they are not safe for all patients. There is solid evidence showing that the antioxidant capacity is lower and the oxidative damage is higher in the brains of depressed patients as compared with healthy controls. Mitochondrial dysfunction is associated with depression and other neuropsychiatric disorders, and this dysfunction can be an important source of oxidative damage. Additionally, neuroinflammation, which is commonly present in the brain of depressive patients, highly contributes to the generation of reactive oxygen species (ROS). There is evidence showing that pro-inflammatory diets can increase depression risk; on the contrary, an anti-inflammatory diet such as the Mediterranean diet can decrease it. Therefore, it is interesting to evaluate the possible role of plant-derived antioxidants in depression treatment and prevention, as well as other biomolecules with high antioxidant and anti-inflammatory potential, such as the molecules paracrinely secreted by mesenchymal stem cells. In this review, we evaluated the preclinical and clinical evidence showing the potential effects of different antioxidant and anti-inflammatory biomolecules as antidepressants, with a focus on difficult-to-treat depression and conventional treatment-resistant depression.

Introduction

From 1990 to 2017, cases of major depressive disorder (MDD) increased worldwide by nearly 50% [1]. Depression is among the three projected leading causes of burden of disease for 2030 [2], together with ischaemic heart disease, a pathology whose risk is itself increased in depressive patients. Not included in the previously mentioned projection is the devastating effect on mental health that the COVID-19 pandemic will probably leave behind: among a sample of COVID-19 survivors, 55% showed psychiatric sequelae, including MDD in 30% of the cases [3]. Pharmacological approaches are the most usual treatments for MDD, with selective serotonin reuptake inhibitors (SSRI) being used in most cases. However, around 50% of MDD patients do not respond to these first-line treatments and require a second-line treatment to achieve remission [4]. In addition, the antidepressant drugs commonly used are associated with several adverse effects that lead to their early discontinuation [5,6]. Furthermore, it is urgent to improve the present capacity for diagnosis and treatment of this disease, since no reliable biomarkers are available, nor is there a guide to predict the response to treatment and the course of the disease [7]. In this sense, a critical limitation is that, despite increasing research efforts, the pathophysiology of depression is still not completely understood. Nevertheless, there are active lines of research devoted to deciphering the cellular and molecular mechanisms involved in the development of depression.
In particular, in recent years there has been an increase in research directed toward unveiling the role of oxidative stress and neuroinflammation in depression, and how those interact to decrease monoamine neurotransmission and neurotrophic factor levels and induce mitochondrial dysfunction, alterations known to be involved in the onset of MDD [8]. A key factor connecting all the previous variables is stress and the concomitant dysregulation of the hypothalamus-pituitary axis stress response system [9,10]. In this review, we will discuss how chronic unpredictable stress is related to the alterations associated with the depressive state. The role of diet in depression risk has been deeply explored, showing that anti-inflammatory and antioxidant diets may prevent and/or help in the treatment of depressive disorders, which has led to the exploration of antioxidant and anti-inflammatory molecules derived from plants for the treatment of MDD. In this review, we will first explore the spectrum of available treatments for depression and their potential action on oxidative stress in the brain of depressed patients. Then, we will briefly discuss oxidative damage in relation to depressive disorder and how oxidative stress is associated with the inflammatory state commonly seen in the depressive condition. Finally, we will review: (i) some plant molecules with antioxidant action that are proposed as potential antidepressants, and (ii) molecules produced by the anti-inflammatory and antioxidant mesenchymal stem cells that could act as new therapeutic options for the treatment of this devastating disease.

Current Treatments for MDD and Treatment Challenges

In most cases, treatment for MDD consists of psychological or pharmacological treatments, or a combination of both interventions. A large majority of patients prefer psychological treatment over drug-based treatment [11]. Nevertheless, patients with more severe forms of depression have been reported to prefer pharmacological treatment [12]. There are different antidepressant agents with different mechanisms of action, but they appear to share some common targets. For example, there is substantial preclinical and clinical evidence showing that different antidepressant drugs, including the inhibitors of monoamine oxidase and selective inhibitors of serotonin reuptake, have an antioxidant action, apparently mediated by their ability to increase the organism's antioxidant capacity [13]. Using these first-line antidepressant drugs, the chance of patients achieving remission on the first attempt is about 36%. After an unsuccessful treatment attempt due to lack of response or intolerable adverse effects, further steps can be taken by changing the drug used or augmenting the dose. With these further steps, the chances of remission are increased, leading to an overall remission rate of approximately 67% [14]. Nevertheless, 30% of patients do not achieve remission with the available antidepressant drugs, and a large percentage of treated patients present adverse reactions, mainly associated with the binding of the drugs to serotoninergic, noradrenergic, adrenergic, cholinergic, or histaminergic receptors [15], thus greatly limiting adherence to the treatments and increasing the social burden of the disease [16].
In this regard, investigating whether the antioxidant effects of current antidepressant treatments are sufficient to treat depression symptoms and searching for new antidepressant drugs directed toward the reduction of oxidative stress may help to avoid adverse outcomes and increase adherence [16]. On the other hand, there are many established psychological treatments for MDD. Some of them have been more extensively used and studied; these include cognitive behaviour therapy, behavioural activation therapy, interpersonal psychotherapy, problem-solving therapy, and non-directive counselling, all of which have comparable efficacy in the treatment of depression [17]. Psychological interventions for MDD have a response rate of nearly 50% [18]. These interventions are believed to work by increasing emotional regulation, consistent with the reported effect of psychotherapy in reducing the activation of specific brain structures such as the anterior cingulate cortex, inferior frontal gyrus, and the insula [19]. It is noteworthy that psychotherapy as well as antidepressant drugs reduce peripheral oxidative stress in MDD patients [20]. Trials comparing psychotherapy treatments with antidepressant drugs show either no difference between them or a moderate advantage of psychotherapy over pharmacological treatments [21]. Additionally, second-generation antidepressants have comparatively more adverse effects than psychotherapy, and it is more likely that patients discontinue their treatment because of these adverse effects. Other evidence-based treatments that can be considered as alternatives to conventional depression treatments are: exercise, acupuncture, yoga, meditation, and selected herbal or omega-3 fatty acid diet supplementation. Physical exercise by itself is effective for the reduction of depression symptoms, but it is not superior to psychological treatments or antidepressant medication [22]. Exercising induces acute increases in brain-derived neurotrophic factor (BDNF) levels immediately after its performance, both in normal subjects and in depressed patients [23]. Depressed individuals have lower basal levels of BDNF, and several effective depression treatments are able to increase BDNF levels. However, in the case of exercise, this is only true for BDNF levels measured immediately after exercising but not for basal levels in individuals who exercise frequently. Yoga has also been shown to be effective in the treatment of depressive symptoms [24]. Naveen et al. reported that three months of yoga practice reduces depressive symptoms and cortisol levels while increasing basal BDNF levels [25]. Furthermore, a twelve-week yoga and meditation intervention diminished oxidative stress in healthy individuals [26] and in patients with major depression, while increasing BDNF [27]. Likewise, yoga practiced for eight weeks reduced inflammatory markers and depressive symptoms in patients with rheumatoid arthritis and comorbid depression [28]. Further work is needed to determine the effectiveness of yoga as a treatment for depression in the long term; thus, more randomized controlled trials with a longer duration as well as more patients are needed. However, in the short term, there is moderate evidence supporting the idea that yoga interventions, as an ancillary treatment, could be superior to usual pharmacological treatments [29].
Manual and electric acupuncture in combination with second-generation antidepressants improve depression symptoms to a greater extent than antidepressant treatment alone [30,31]. Electric acupuncture increases glutathione levels (a potent antioxidant molecule) in the urine of depressed patients receiving antidepressant drugs as compared to patients receiving only antidepressant drugs. Acupuncture treatment also affects tryptophan metabolism and fatty acid biosynthesis, metabolic changes that could be related to the improvement of sleep and cognitive disturbances as well as to an antioxidant effect [32]. Nevertheless, according to a recent meta-analysis, more rigorous primary studies are still needed to confirm the effectiveness and safety of acupuncture as compared to antidepressants [33]. Acupuncture, as opposed to psychological therapy, yoga, or meditation, can be tested in animal models, which are valuable tools for studying its possible mechanisms of action. According to a recent review, acupuncture has been shown in rodent models of depression to significantly reduce the release of corticotrophin-releasing hormone, which is a marker of chronic stress, as well as to increase the expression of BDNF, to positively affect hippocampal plasticity, and to regulate neurotransmitter levels [34]. Acupuncture can reduce oxidative stress in animal models of different pathologies, such as depression in ovariectomized rats, where it was shown to be effective in reducing depression-like symptoms [35], but also in other models such as that of multi-infarct rats, a model of dementia [36], and in a post-stroke depression model [37]. According to a recent meta-analysis of double-blind, randomized, placebo-controlled trials [38], the intake of omega-3 polyunsaturated fatty acids (PUFAs) reduces depressive symptoms. This effect has been observed for eicosapentaenoic acid (EPA) intake, while the intake of pure docosahexaenoic acid (DHA) does not have the same benefits. The amount of omega-3 PUFAs in the membrane of cells competes with omega-6 for the synthesis of anti-inflammatory and pro-inflammatory eicosanoids, respectively; therefore, omega-3 content must be equilibrated with omega-6. Thus, keeping a low omega-6/omega-3 ratio in order to avoid pro-inflammatory conditions [39] and/or omega-3 supplementation can reduce neuroinflammation and oxidative stress [40]. Moreover, telomere length, which is critically associated with inflammation and oxidative stress, is inversely correlated with the omega-6/omega-3 ratio [41].

Proposed Treatments for Difficult-to-Treat Depression (DTD)

Despite all the advances in treatment alternatives for ameliorating MDD, there are still patients who do not respond to them. Difficult-to-treat depression (DTD) is a term that has been adopted by consensus and is meant to replace the term "treatment-resistant depression", with the aim of conveying that it is a form of depression that is challenging but not impossible to treat. This term is used for patients whose depression symptoms continue to cause significant burden despite the usual treatment efforts [42]. Different pharmacological approaches for the treatment of DTD, such as the augmentation of SSRI or tricyclic doses or a combination with lithium or atypical antipsychotic drugs, have shown promising results that still need further confirmation [43].
Similarly, adding psychotherapy to the antidepressant treatment is beneficial for DTD, but evidence as to whether switching to psychotherapy is better than maintaining the antidepressant treatment is still lacking [44]. Ketamine is an antagonist of the N-methyl-D-aspartate type of glutamate receptor, commonly used as an anaesthetic drug. Nevertheless, it has also shown a fast and robust antidepressant effect when administered in subanaesthetic doses to patients with DTD [45]. Preclinical evidence shows that, together with reducing depressive-like symptoms, ketamine reduces oxidative stress and inflammation in the brain [46]. Ketamine is a racemic mixture of esketamine and arketamine. The intranasal administration of esketamine has been approved by the Food and Drug Administration (FDA) for the treatment of DTD; nevertheless, there are concerns regarding its potential for abuse since it also has addictive effects. Preclinical studies in animal models of MDD have pointed out that arketamine could have the advantage of longer-lasting and more potent antidepressant effects and a safer profile as compared to esketamine and ketamine [47]. The same potential has also been observed in DTD patients, showing a superior antidepressant effect that could be longer-lasting and more potent than that of the racemic mixture or esketamine alone [48]. Another fast-acting antidepressant drug that has been successfully tested in DTD patients is psilocybin, a naturally occurring psychedelic alkaloid produced by certain mushrooms. Psilocybin acts as an agonist of serotonin 2A receptors [49]. In a clinical study with 20 DTD patients, psilocybin was able to reduce depressive symptoms in one week, and the effects were maintained for three months after the treatment with low doses of psilocybin administered in a supported environment [50]. These promising effects are supported by a recent meta-analysis in which the authors concluded that psilocybin combined with behavioural support may be a safe and effective alternative treatment for depression [51]. Nevertheless, more placebo-controlled trials are needed to validate this treatment, as well as more clinical trials that include DTD patients to validate its effectiveness in this pathology. Interestingly, psilocybin also has potential as an antioxidant [52], but it has not yet been proven to effectively reduce oxidative stress in the brain of depressed patients. Electroconvulsive therapy (ECT) has a bad reputation, but when administered with anaesthesia and muscle relaxants using new technologies that deliver ultra-brief pulses, it is a relatively safe and highly effective alternative treatment for severe depression and for patients who do not respond to first-choice treatments such as pharmacotherapy and psychotherapy. The principal drawback of this treatment is that it can produce cognitive side effects such as impairments in autobiographical memory that are potentially long-lasting [53]. Nevertheless, a cognitive function such as processing speed, which is reduced in depressed patients, is increased by ECT [54]. Moreover, attention and verbal memory, which are impaired by ECT treatment, usually recover to baseline levels six months after the treatment [55]. ECT may be used in geriatric depressed patients as a safe alternative when there is not enough response to conventional drugs or when patients cannot tolerate these drugs [56].
It has been reported that in bipolar depressed patients who respond to ECT, the oxidative stress marker malondialdehyde (MDA) is reduced by the treatment [57], suggesting that the antioxidative effect of ECT could be relevant for its antidepressant effect. Neurodegenerative diseases are bidirectionally related to depression [58], and oxidative stress is part of their shared pathophysiology [59]. Patients with comorbid MDD and neurodegenerative diseases such as Parkinson's, Alzheimer's, and Huntington's diseases respond poorly to standard antidepressant treatments and are at higher risk of side effects [58]. Therefore, the search for alternative treatments for depression in patients with neurodegenerative disease is urgent, and oxidative stress appears to be an interesting therapeutic target. Altogether, the presented evidence shows that many proven treatments for depression and treatment-resistant depression share an antioxidant action. Nevertheless, more preclinical and clinical research is needed to establish whether this shared antioxidative action is critical to their antidepressant efficacy and to determine how the reduction of oxidative stress is related to the amelioration of depressive symptoms; additionally, whether this relation is associated with some or all depressive symptoms and with different types of depressive disorders also needs to be determined. Moreover, these treatments have not been specifically designed to achieve a reduction in oxidative stress. Thus, directing the treatments towards an emphasis on antioxidation might allow for the reduction of some of the unwanted effects while maintaining or even improving their efficacy.

Oxidative Damage in the Brain and Its Association with MDD

Oxidative damage is a consequence of the oxygen dependence of cellular metabolism. The presence of oxygen in the internal milieu is, on the one hand, crucial for survival; on the other hand, it is a menace producing oxidative damage through the generation of free radical species, which have to be counteracted by the presence of potent enzymatic and non-enzymatic antioxidants. These molecules are part of a complex system of structurally diverse functional components comprising endogenous and exogenous antioxidant molecules, as shown in Figure 1. Therefore, there must be an equilibrium between the production of reactive oxygen species (ROS) and the antioxidant defence response. The brain is especially prone to oxidative stress because it has a high amount of transition metals and polyunsaturated fatty acids that provide a substrate for lipid peroxidation, in addition to its high oxygen consumption rate and limited antioxidant defences [60] (Figure 2). Mitochondrial function is tightly related to oxidative stress. The electron transport chain in the mitochondria is coupled with the production of ROS such as superoxide radicals and hydrogen peroxide (H2O2); in addition, enzymes present in the outer mitochondrial membrane, such as monoamine oxidase, also produce ROS. On the other hand, manganese superoxide dismutase (SOD) and glutathione (GSH) molecules inside the mitochondria act as an antioxidant mechanism [61]. Outside the mitochondria, H2O2 is enzymatically degraded by catalase, glutathione peroxidase, and peroxiredoxin. Nevertheless, the overproduction of ROS by mitochondrial dysfunction or the reduction in antioxidant defence may cause oxidative damage, particularly in the brain, where mitochondrial ROS overproduction is involved in many psychiatric and neurodegenerative diseases [62]. It has been consistently reported that the antioxidant capacity, dependent on both enzymatic and non-enzymatic antioxidants, is reduced in depressed patients. For example, levels of glutathione peroxidase [63], vitamin E [64,65], erythrocyte superoxide dismutase, and glutathione reductase [66] are reduced in the blood of depressed patients, but also in their brains [67]. Consistently, the application of chronic unpredictable stress in rats decreases the expression of different antioxidant enzymes in the brain and in the periphery, and these alterations can be reversed by antidepressant treatments [68]. In fact, mitochondrial dysfunction in the brain is associated with depression [69]. Moreover, mitochondria are instrumental in handling the stress response, supplying the increased energy demands during stress and also producing and responding to neuroendocrine and metabolic stress mediators such as glucocorticoids [70]. Thus, brain mitochondria are affected by chronic stress, and alterations in mitochondrial function have been related to stress-induced cognitive and behavioural changes [71].

Figure 1. The figure shows antioxidant molecules organized according to their origin, which can be endogenous, i.e., synthetized by the organism, or exogenous, i.e., consumed in the diet. In addition to their source (endogenous or exogenous), antioxidants may be classified according to their antioxidant action into primary, secondary, and tertiary. Primary antioxidants are chain-breaking antioxidants that accept free radicals, terminating the propagation of oxidative reactions and transforming free radical species into more stable and less reactive products. Secondary antioxidants are radical-scavenging molecules, and they have a preventive role in suppressing chain reaction initiation. Tertiary antioxidants are enzyme systems that can repair biomolecules that have been damaged by oxidation. Additionally, antioxidant action can be direct or indirect. Indirect antioxidants enhance many of the direct primary and secondary antioxidants. Finally, antioxidants can be enzymes or non-enzymatic molecules. SOD: superoxide dismutase, CAT: catalase, GPx: glutathione peroxidase, Trx: thioredoxin, GR: glutathione reductase, GSH: reduced glutathione, G6PD: glucose 6 phosphate dehydrogenase, GST: glutathione S transferase.

Figure 2. Interaction between oxidative stress and neuroinflammation at the onset of major depressive disorder. The figure shows that psychological and/or physical stressors can trigger the pathophysiology associated with major depression. Once the limit of the brain's antioxidant capacity has been exceeded, oxidative stress prevails, inducing neuroinflammation and the deterioration of brain cells, which over time leads to the induction of the main phenotype associated with major depressive disorder. Red rays show the possible targets of plant-derived extracts and acellular products derived from mesenchymal stem cells.
A history of stress increases vulnerability to new stress by exerting epigenetic modification in the risk genes via DNA methylation and miRNA regulation, leading to alterations in the brain, which then increase the vulnerability to developing depressive disorders [73]. Oxidative stress may be involved in this increase in vulnerability, as shown in a rat model of social defeat stress (SDS), a major acute stress that induces a reduction in BDNF levels, but only in animals vulnerable to depression [74]. This reduction in BDNF levels is maintained for weeks, in association with greater oxidative stress as compared to animals not vulnerable to depression. In this model, vulnerability is abolished by antioxidant treatments, suggesting that oxidative stress is involved in generating the vulnerability to depression. It is postulated that vulnerable animals have a prolonged oxidative stress response after experiencing acute major stress because the transcription factor that initiates the response controlling oxidative stress levels, the nuclear factor erythroid-2 related factor 2 (Nrf2), is downregulated [75]. This downregulation is associated with a reduction in BDNF levels, since BDNF induces Nrf2 translocation to the nuclear compartment [75] (Figure 2). Supporting these findings, the induction of Nrf2 translocation to the nuclear compartment by stimulating the BDNF receptor TrkB reverses the vulnerability to depression (Bouvier et al., 2017). It has been reported that chronic mild stress decreases glutathione peroxidase (GSH-Px) activity and glutathione (GSH) and vitamin C levels in the brain [76], which are examples of the enzymatic and non-enzymatic antioxidant mechanisms in the brain, respectively (Figure 2). Furthermore, lipid peroxidation is highly increased in stress-induced depression in different rat tissues, with the brain being the most affected organ [77]. Antidepressant treatment with venlafaxine, an inhibitor of serotonin and norepinephrine reuptake, prevents oxidative stress by potentiating the brain antioxidant defence and reducing stress-induced lipid peroxidation in the brain [76]. Meanwhile, in MDD patients, oxidative stress markers are elevated, and higher baseline levels of F2-isoprostanes, a marker of oxidative stress, are related to a poorer response to antidepressant treatment [78]. On the other hand, patients who respond to the treatments show reductions in the oxidative markers [78,79]. Therefore, the maintenance of a delicate equilibrium between ROS production and antioxidant defences is essential for correct brain functioning and for the ability of the organism to respond to stress. Hence, alteration of oxidative homeostasis is a major player in neuropsychiatric and neurodegenerative diseases, including depression, and represents an interesting target for their treatment. Oxidative Stress and Inflammation Crosstalk in the Brain of Depressive Patients In the last ten years, evidence linking depression to inflammation has accumulated. It is well known that inflammatory mediators are elevated in depressed patients; for example, C-reactive protein (CRP) levels are elevated in MDD patients as compared to healthy controls, and a third of them have CRP levels compatible with low-grade inflammation [80]. Similarly, the levels of various cytokines, including tumour necrosis factor alpha (TNF-α) and the soluble interleukin-2 receptor (sIL-2R), are elevated in depressed patients [81].
This elevation in cytokine levels is not a generalized response but rather a more specific pro-inflammatory regulation, in which some pro-inflammatory cytokines are elevated and some anti-inflammatory cytokines are reduced in the plasma of depressed patients [82]. Further supporting the idea that inflammation may be involved in generating depression are data showing that inflammation in elderly patients is associated with depression, but not with Alzheimer's disease [83]. The association of certain types of diets with depression may be related to the inflammation induced by gut dysbiosis [84] (Figure 2). Likewise, psychological stress such as marital distress is associated with inflammation in the gut, promoting the translocation of gut bacterial products such as lipopolysaccharide (LPS) to the portal blood, inducing an immune response and a general pro-inflammatory state [85]. Furthermore, higher levels of inflammatory markers are associated with a poorer response to treatment in depressed patients, and treatment success is associated with a reduction in inflammatory markers [86], suggesting that inflammation does not merely coexist with depression or serve as a marker for every neuropsychiatric alteration, but rather that it is likely to be related to the manifestation of symptoms of depression. Nevertheless, oxidative stress and neuroinflammation are implicated in many other neuropsychiatric alterations such as Parkinson's disease or posttraumatic cognitive damage. In the same sense, experiencing stress and having a history of major depression are associated with metabolic alterations that promote inflammation, showing an intricate, bidirectional relationship between inflammation and depression that is probably mediated by stress [87] (Figure 2). Treatments with anti-inflammatory molecules may have antidepressive potential. This seems to be the case for omega-3 dietary supplementation, for example, but when the anti-inflammatory effect is associated with a pro-oxidative action, as is the case for inhibitors of cyclooxygenase 2, there is no antidepressive potential [88]. Moreover, treatment with pro-inflammatory agents such as interferon alpha, used to treat chronic hepatitis C, frequently induces depression, an outcome more likely in patients with a history of major depression [89]. Therefore, inflammation is not sufficient to induce depression, but it does favour its development. Preclinical research shows that the chronic unpredictable stress used in animal models to induce depression also induces neuroinflammation and oxidative stress. In the same way, inflammation induced by simulating the presence of bacteria with LPS injection can also induce depressive symptoms [90]. Subclinical systemic inflammation is a risk factor for developing depression [91]. Adults who have been exposed to childhood maltreatment are at risk of depression and have increased markers of inflammation [92]. Additionally, childhood maltreatment is associated with mitochondrial malfunctioning and oxidative stress [93]. Oxidative stress can damage mitochondria, and in turn, damaged mitochondria can induce inflammation since the components of the damaged mitochondria are recognized by the immune system [94]. Consequently, mitochondrial damage promotes oxidative stress and is involved in depressive pathology, as previously discussed, generating a vicious cycle that is maintained for a long time.
As mentioned, oxidative stress is increased in depressed patients [95,96], as measured by the increase in different oxidation markers. Examples are the serum and urinary levels of F2-isoprostane, a derivative of free radical-mediated lipid peroxidation [97], and the plasma levels of the end-product of lipid peroxidation, MDA [64]. The former reflects peripheral oxidation. Interestingly, brain DNA damage induced by oxidative stress is increased in depressed patients as compared to healthy controls, suggesting that oxidative stress-induced damage in oligodendrocytes and the consequent white matter alterations might be involved in the pathogenesis of depressive disorder [98]. Interestingly, the reduction in antioxidant defences in depressed patients is strongly associated with elevated cytokine levels in their blood [66]. In fact, ROS can induce the activation of the inflammasomes. These are protein complexes, including Nod-like receptor proteins (NLRP) 3 and 6, the NLR family CARD domain-containing protein 4 (NLRC4), and absent in melanoma 2 (AIM2), that can activate the caspase and interleukin systems, initiating an inflammatory response [99]. Inflammasome activation can initiate a caspase-1-dependent programmed cell death named pyroptosis, which has been associated with depression [100] (Figure 2). In line with this, it has been reported that melatonin administration can reduce depressive-like behaviour in an animal model of depression by reducing NLRP3 inflammasome activation through the activation of Nrf2 and the silent information regulator 2 homolog 1 (SIRT1), which have antioxidant actions [101]. In sum, oxidative stress and neuroinflammation reciprocally promote each other, perpetuating the conditions for the development of the depressive pathology. Thus, targeting one or both is a promising strategy for the reduction of depressive disorder symptoms. Plant-Derived Antioxidant Molecules in MDD Treatment Diet is clearly associated with depression risk: an antioxidant diet reduces depression risk while a pro-oxidant diet increases it [102]. Therefore, the protective or therapeutic potential of putative diet components is currently being studied. In particular, plant-derived compounds are attracting attention because of their natural origin, in addition to their high therapeutic potential associated with their antioxidant and anti-inflammatory actions. One of the many diet components with antidepressant effects is regular tea (Camellia sinensis). Many different tea compounds can achieve these effects, acting on different depression-related alterations. For example, the anti-inflammatory and antioxidant effects of tea polyphenols might be crucial in the depression risk reduction properties of tea [103] (Figure 2). However, relevant limitations in the potential use of these compounds for MDD treatment are their low bioavailability, instability, and low intestinal absorption [104]. The stability and bioavailability of tea polyphenols could be improved by their incorporation into nanocarriers. To date, however, the efficacy of this strategy has only been demonstrated under in vitro conditions [105]. Similarly, turmeric curcumin has anti-inflammatory and antioxidant actions that may reduce anxiety and depressive symptoms in patients with depressive disorder who are receiving standard care [106].
Accordingly, curcumin was also able to reduce depressive-like symptoms in a stress-induced animal model of depression, an effect mediated by the inhibition of the NLRP3 inflammasome and the regulation of kynurenine and quinolinic acid levels, which are products of tryptophan degradation with neuroprotective and neurotoxic effects, respectively [107]. Another study showed that curcumin reduced depressive-like symptoms as well as diminished stress-induced ROS levels in the same animal model by increasing the antioxidant-promoting transcription factor Nrf2, which upregulates the expression of several antioxidant enzymes [108]. Another common culinary ingredient with a notable antioxidant content is saffron (Crocus sativus). It has been postulated that its use as a diet supplement may reduce oxidative stress, as assessed by the reduction in MDA levels and the increase in the total antioxidant capacity of unhealthy individuals [109]. Saffron administration was more effective than placebo, and it was suggested to be as effective as antidepressant drugs in the treatment of depressive symptoms [110][111][112]. These antidepressant effects are believed to be attributable to saffron's antioxidant and anti-inflammatory actions [110][111][112]. Bioactive compounds in saffron that may exert antidepressant actions include crocins, crocetin, picrocrocin, and safranal. On the other hand, combining saffron with a low-dose curcumin treatment did not enhance treatment efficacy [113], suggesting that these bioactive components act via similar mechanisms and reduce oxidative stress. Two recent meta-analyses showed that the effects of curcumin are better than those of placebo, and it was concluded that curcumin supplementation may be beneficial for depressed patients as an additional intervention to standard treatment; however, larger randomized controlled trials are needed to improve the low-quality evidence that is currently available [106,114]. Interestingly, curcumin is more beneficial in the treatment of depressive symptoms in patients diagnosed with atypical depression [113], a depression subtype more frequently seen in women who have higher suicidal risk [115]. It is worth noting that atypical depression is associated with increased lipid peroxidation as compared to melancholic depression [116]. As in the case of polyphenols from tea, orally administered curcumin has low bioavailability due to its low intestinal absorption and biotransformation [117]. Its bioavailability can be enhanced by different delivery systems, including its combination with piperine to inhibit its biotransformation, or with lecithin to improve gastrointestinal absorption, among others [118]. The efficacy of administering curcumin in different delivery systems has yet to be tested. For now, its combination with piperine has not been shown to be more effective than curcumin alone. However, higher doses (1 g per day) did appear to be more effective than lower doses [119]. Another intensively studied herbal treatment derived from Chinese medicine, commonly used for depression treatment since ancient times, is St. John's wort (Hypericum perforatum L.). It is reported to possess anti-inflammatory, antioxidant, antifatigue, and antidepressant capabilities [120,121]. Its main bioactive components are hyperforin, rutin, and melatonin. St. John's wort has been shown to be non-inferior to standard pharmacological treatment (SSRIs) in its efficacy and safety for patients with mild-to-moderate depression in two studies [122,123].
However, longer randomized controlled trials are needed to establish the long-term effects of St. John's wort, as well as trials including individuals with severe depression, in order to establish its efficacy in this kind of patient. Hypericum triquetrifolium Turra is a plant closely related to Hypericum perforatum, with which it shares many biologically active compounds. It is reported to have potent antioxidant activity related to its methanol-extractable components and high hypericin content [124]. Accordingly, its administration to chronically stressed rats markedly increases hippocampal BDNF levels and reverts the stress-induced cognitive deficit [124]. As discussed for tea polyphenols and curcumin, the Hypericum perforatum extract has a poor pharmacokinetic profile with low bioavailability and reduced penetration of the blood-brain barrier. In spite of this, it induces its antidepressant effect in 4-6 weeks [125], a time period similar to that of standard antidepressant drugs. However, a major problem with all these kinds of medicinal plant extracts is that they are commercially available under different categories in different countries, which implies different regulations [126], as, for example, with herbal supplements. Herbal supplements are not regulated by the FDA; therefore, they may be highly variable in their composition, greatly affecting their possible therapeutic effects [127]. Thus, the efficacy and safety of supplements offered in pharmacies over the counter is questionable. Finally, ascorbic acid or vitamin C is a potent antioxidant synthesised by plants and most animals but not by primates, which is why, for humans, it is an essential dietary vitamin. Nevertheless, its consumption is decreasing worldwide, with a correlated increase in health problems associated with its deficiency. Levels of vitamin C are higher in the brain than in the periphery, and diseases associated with low ascorbic acid levels in the plasma are mainly related to the central nervous system [128]. In fact, there is accumulating evidence showing that ascorbic acid supplementation can reduce the physiological alterations and symptomatology of neuropsychiatric disorders [129]. In rats with stress-induced depressive-like behaviour, vitamin C levels are reduced in the brain, and lipid peroxidation is increased. However, the administration of a single dose of ascorbic acid can reverse depressed behaviour in these animals [130,131] in a way that is comparable to the ketamine effect observed in chronically cortisol-injected depressive-like mice [132]. Moreover, vitamin C intake through the diet is inversely correlated with depressive symptoms in middle-aged women [133]. A recent meta-analysis reported that vitamin C does not further reduce depressive symptoms in patients taking antidepressant drugs, but in individuals with subclinical depression, vitamin C supplementation is effective in inducing mood improvements [134]. The complex pharmacokinetics of vitamin C should be taken into consideration when studying the effects of vitamin C supplementation in MDD patients since, for example, several diseases can affect its turnover and reduce its plasma concentration [135]. There is an increasing amount of evidence showing the potential of plants and plant derivatives in the treatment of neuropsychiatric disorders, including depression and specific depression symptoms such as anxiety or a depressed mood.
This opens possibilities for alternative treatments that may help patients who do not respond to conventional treatment or who experience complications, associated risks, or unpleasant effects with these treatments. Thus, plant antioxidant-based treatment may offer safer alternatives for the treatment of depression, or more effective alternatives for the treatment of atypical depression (Figure 2). Antioxidant and Anti-Inflammatory Potential of Mesenchymal Stem Cells in the Treatment of MDD Mesenchymal stem cells (MSCs) have potential as a future tool in the treatment of depression. They are multipotent stromal cells able to self-renew and to differentiate into various cell lineages, mainly of mesodermal origin, favouring the regeneration of damaged tissues [136]. MSCs can be isolated from different tissues of an adult organism, including bone marrow, adipose tissue, deciduous teeth, and menstrual blood [137]. MSCs have been tested as autologous and heterologous treatments for different pathologies and injuries, many of them associated with an increase in inflammation and oxidative stress [138]. For example, the systemic administration of MSCs produced immunomodulatory actions and reduced neuroinflammation in an animal model of stroke [139], but also in clinical trials [140]. In the same sense, it was recently reported that human MSCs, activated in vitro by supplementing the culture medium with proinflammatory factors in order to increase their anti-inflammatory and antioxidant potential, were able, when administered intracerebroventricularly to ethanol-drinking rats, to dramatically reduce voluntary ethanol drinking and suppress relapse, with the concomitant abolition of ethanol-induced neuroinflammation and oxidative stress [141]. It is well accepted that the main mechanism of action of MSCs is related to the paracrine secretion of several therapeutic molecules with anti-inflammatory and antioxidant activity, known as the secretome [142]. However, compared with the administration of living cells, the MSC secretome has the advantage that it can be administered intranasally to efficiently reach the brain. In fact, a result similar to the intracerebroventricular administration of activated MSCs was obtained with the intranasal administration of secretome derived from activated MSCs in rats that voluntarily consumed ethanol or nicotine, inhibiting their chronic self-administration of the drugs and fully abolishing neuroinflammation and oxidative stress in both models [143]. In the specific case of major depression, it has recently been reported that the administration of MSCs obtained from mouse fat tissue to chronically stressed mice with depressive-like symptoms is able to revert the depressive behavioural phenotype by remediating microglial activation and reducing the expression of inflammatory factors. Additionally, MSC administration also promoted the expression of BDNF, TrkB, and Nrf2 [144], thus increasing the antioxidant defence capacity. Plasma hydrogen sulphide (H2S) levels are reduced in depressed patients and are directly correlated with depression severity [145]. H2S was first known as a toxic gas, but it is now considered to be part of the same endogenous gasotransmitter family as nitric oxide and carbon monoxide and is synthesized by mammalian tissues [146]. It is known to have potent anti-inflammatory and antioxidant capabilities [147].
MSCs produce H2S, and in turn, H2S is essential for the maintenance of MSC function, increasing their survival and proliferation under inflammatory and oxidative conditions [148]. Furthermore, H2S increases the expression of SIRT1 and can revert the depressive-like symptoms induced by sleep deprivation in rats, reducing the levels of the pro-inflammatory cytokines IL-1β, IL-6, and TNF-α and the chemokine CCL2, as well as increasing the levels of anti-inflammatory cytokines in the hippocampus [149]. MSCs have been proposed to reduce oxidative injury via several mechanisms, including: (i) scavenging free radicals, (ii) enhancing host antioxidant defences, (iii) modulating the inflammatory response, (iv) augmenting cellular respiration and mitochondrial functions, or (v) donating their mitochondria to protect damaged cells [150]. Most of these antioxidant actions can be replicated by the administration of the MSC-derived secretome [151], which contains soluble molecules but also small microvesicles called exosomes, which carry a broad set of bioactive molecules, including proteins, lipids, and nucleic acids. In this sense, it has recently been reported that MSC-derived exosomes can putatively reverse LPS-induced mitochondrial dysfunction in astrocytes and reactive astrogliosis in mice by inhibiting the Nrf2-NF-κB signalling pathway [152]. Thus, MSCs produce a broad set of antioxidant and anti-inflammatory actions that can help to improve the dysregulated oxidative/antioxidant equilibrium commonly seen in MDD (Figure 2). These effects can most probably be efficiently achieved by the non-invasive administration of exosomes or secretomes derived from MSCs. Conclusions We have shown here that an antioxidant effect is a common property of several very different therapeutic approaches for the treatment of depression, and that oxidative stress is present in depressed patients and clearly related to their symptomatology. The intake of natural compounds with antioxidant activity is promising as a strategy for avoiding or delaying the appearance of depressive symptoms or as a safer alternative treatment compared to currently available drugs. Nevertheless, more evidence is needed to endorse their antidepressant effects, as well as stricter regulations in the production and characterization of these natural compounds. Likewise, more research is needed to test whether the antioxidant effects of these natural compounds and antidepressant drugs are sufficient to support their antidepressant actions. MSCs and their acellular derivatives (secretome or exosomes) have proven effective in ameliorating depressive symptoms in animal models of depression, making them a promising future tool in depression treatment. The characterization of exosome contents and the antidepressant potential of their cargo is an important further step in the investigation that would lead to their future use as antidepressants. Finally, an understanding of the mechanisms that could regulate exosome cargo destination can help in the final goal of creating customized therapeutic exosomes for the treatment of depression in a more effective and safer way. Author Contributions: Conceptualization, M.E.R., A.Á., K.S. and F.E.; writing-original draft preparation, M.E.R. and F.E.; writing-review and editing, M.E.R., A.Á., K.S. and F.E.; funding acquisition, F.E. All authors have read and agreed to the published version of the manuscript. Funding: This research was funded by the ANID FONDECYT 1200287 grant.
Conflicts of Interest: The authors declare no conflict of interest.
Ionothermal Synthesis of Metal-Organic Framework Ionothermal synthesis employs ionic liquids as both solvent and template for the synthesis of metal-organic frameworks (MOFs). The cations and anions of ionic liquids may be finely adjusted to produce a great variety of reaction environments and thus frameworks. Organising the structures synthesised from related ionic liquid combinations gives rise to provocative chemical trends that may be used to predict future outcomes. Further analysis of their structures is possible by reducing the complex framework to its underlying topology, which by itself brings more precision to prediction. Through reduction, many seemingly different but related classes of structures may be merged into larger groups, providing a better understanding of the nanoscopic structures and the synthesis conditions that gave rise to them. Ionothermal synthesis has promised to let us effectively plan a synthesis ahead for a given purpose. However, for this promise to be kept, several difficult limitations must be overcome, including the cations from the solvent that reside in the framework pores and cannot be separated from them. Introduction Three things need to be considered in the preparation of metal-organic frameworks (MOFs): the metal, the organic ligand, and the solvent. Often neglected is the influence exerted by the solvent on the eventual framework, unlike the metal and the ligand of which the structure always consists. Varying the key characteristics of the solvent, such as hydrophilicity, is often the deciding factor in the reaction yield and the nature of the final compound [1]. Until 2002, when Jin et al. first used ionic liquids to synthesise a metal-organic framework [2], the list of solvents in inorganic synthesis was limited to a few organic solvents and water [3]. This new synthesis method received growing attention in the field of MOFs, opening a new realm of novel structures and provocative findings regarding the very nature of nanoscale synthesis. One aspect of ionothermal synthesis that contributed to this attention must have been its simplicity; the overall process comprises no more than mixing the metal salt and the organic ligand with the ionic solvent and incubating at a high temperature for a long enough time. Unfortunately, however, the growth seems to have ceased in recent years, as shown in Figure 1. Given its distinctive potential, this chapter is dedicated to introducing the field and drawing more effort towards the full realisation of what the methodology has promised. Before we move to the discussion of ionothermal synthesis and its potentials, the chemistry of ionic liquids must first be visited, since it is this distinctive nature that lies behind all the positive aspects, and limitations too, of ionothermal synthesis. Ionic liquids are simply salts in the liquid state, as opposed to the liquids typically used as solvents [4], which are predominantly comprised of electrically neutral molecules. While most salts may be brought to their liquid states by heating, the term 'ionic liquids' is exclusively used for those that stay fluid around or below 100°C to distinguish them from the older phrase 'molten salts' [5]. One reason behind the attention that the ionothermal methodology receives may be directly induced from the term 'ionic liquid' itself. The liquids are held together by ionic interactions that far outcompete most intermolecular interactions in other solvents, including the renowned hydrogen bonds in water.
Such strong interactions are responsible for their low vapour pressure [6], which could resolve the safety and environmental concerns associated with conventional organic solvents [5]. These characteristics function as the exact same advantages in the synthesis of MOFs. Nevertheless, the synthesis has greater potential in that the reaction environment can be finely tuned by modifying the solvent ions [7]. There are only several hundred molecular solvents, whereas a million binary combinations and a million million ternary combinations are possible for ionic liquids [5], hence their nickname 'designer solvents' [8]. Efforts in the field need to be focused not only on collecting outcomes from as many combinations as possible, but also, more importantly, on comprehending the laws of chemistry lying behind the trends observable in those data. This shall, as more than enough possible combinations await, ultimately enable designing the product for a given purpose, rather than vice versa. This chapter will focus on showing the potential of ionothermal synthesis by presenting a set of related syntheses in an organised manner. A series of such ionic liquids (RMI-X) may be prepared with 1-alkyl-3-methylimidazolium (RMI) cations and halide ions (X) [9]. This series of solvents exhibits the finest tunability, in addition to stability, with the variable length of the alkyl side chain of the cation and the anion species variable along the halide column, which places them among the most extensively studied solvents for the ionothermal synthesis of MOFs. To confirm their dominance in structure reports and the focus of our discussion on them, investigations have been made into the number of MOFs synthesised with several popular ionic liquids. The scope of our search, the list of cations and anions comprising the most common ionic liquids, has been illustrated in Figure 2. According to the Cambridge Structural Database (CSD), much of the reported MOFs are synthesised from ionic liquids that contain imidazolium and halide ions. There are no MOF crystals containing the pyridinium cation, and only a few crystals synthesised from tetramethylammonium are reported as MOFs. Synthesis using the pyrrolidinium cation shows about 100 crystals, which correspond to co-crystal forms, showing that no reported MOF crystal includes the pyrrolidinium cation. Extensiveness of data is the foundation of all successful discussions. With the extensiveness of RMI-X now taken for granted, structures synthesised under conditions with piecemeal differences, namely the length of the side chain of the cation, the halide ion, and the core metal atom of the structure, were analysed to explore the effect of such variations on the final product. Gradual difference in the solvent brings about gradual difference in the product An important characteristic of ionothermal synthesis is that the characteristics of the solvents may be gradually varied so that the difference induced in the final product can be investigated. While the solvent can be substituted with a completely different class of cations or anions to provide a completely reshaped environment, more minor changes can be made to the ions so that the change is gradual and quantifiable. Changing the length of the alkyl side chains attached to imidazolium cations, or changing the anion within the halide column to gradually change the size of the solvent ions, is one example that will be mainly discussed in this chapter.
This way, we may gain a better understanding of the relation between the beginning and the end of this nanoscopic synthesis. Actually, organic solvents hold the exact same advantage, seeing that even the size variation of imidazolium cations is an organic one. However, in ionic liquids, this variation is expanded to a two-dimensional table for binary combinations, and possibly to an even four-dimensional construct for ternary combinations, which obviously can provide more organised data. A better understanding of nature is a foundation for a better utilisation of chemistry for many types of benefits. This section will guide you through an exploration that searches for new meaningful correlations in the sea of ionothermally prepared materials as the size of the solvent is varied. Correlation between the solvent and the product is often very simple The correlation between the solvent and the product is perhaps the easiest to perceive in the system of frameworks synthesised with nickel and 1,3,5-benzenetricarboxylic acid (BTC). In Table 1, the structure shifts from the A-topology to the B-topology as we move down the table and increase the cation size [1]. The shift occurs at smaller cations when we move right across the table to increase the anion size. It appears the size of the solvent, with the cation and the anion considered together, is the key factor in determining the topology between A and B. From this trend alone, it may be inferred that the B topology has a larger pore size, that is, more empty space in the framework, than the A topology, which is consistent with the framework analyses by X-ray diffraction. There are certainly more reasons for this, and we will come back to it later in the chapter, but for now, it is enough to just appreciate the simplicity of trend analysis. The shift in the size of the ionic liquid exerted a strong enough pressure to give rise to two totally different topologies, but sometimes the shift may be minor. In the manganese-BTC system presented in Table 2, all three combinations in the [EMI] row gave rise to the exact same structure, α1 [10]. However, in the [PMI] row, only chloride and bromide gave rise to α2, and iodide to a slightly different α3. It is predicted that the [EMI] cation is too small to induce a structure transition in its row, but [PMI] is big enough to do so. Even though all the reported cases in the system belong to the same topology class, when the smaller differences are accounted for, the table again shows a similar stair-shaped pattern that may be explained using the exact same argument. Ionic liquids function both as a solvent and template Similar trends may also be found for other metals, although they are less well pronounced than for nickel. The similarity may not be noticed at first glance, but it is the same stair-shaped pattern as in the nickel system. The topology shift just takes place with smaller ionic species. Again, an increase in the size of the ionic solvent has changed the preferred topology to another class with a larger pore volume to incorporate the ions. As some readers might have noticed by now, here is a good point to introduce another interesting aspect of ionothermal synthesis; ionic liquids function not only as solvents, but also as a template that physically exerts a pressure to determine the final topology by residing in the framework [11] (Table 3).
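One way to picture the stair-shaped pattern discussed above is to treat the combined cation-plus-anion size as a single variable and compare it against a threshold. The sketch below does exactly that; the volumes and the threshold are hypothetical placeholders chosen only so that their ordering mimics the tables, not measured values.

```python
# Minimal sketch of the "stair-shaped" trend: topology chosen by combined solvent-ion size.
# All numbers below are hypothetical placeholders; only their relative ordering matters.
CATION_VOL = {"EMI": 1.0, "PMI": 1.2, "BMI": 1.4, "PEMI": 1.6, "HMI": 1.8}  # arbitrary units
ANION_VOL = {"Cl": 0.3, "Br": 0.4, "I": 0.5}                                # arbitrary units
THRESHOLD = 1.8  # hypothetical cut-off separating the compact A class from the open B class

def predict_topology(cation: str, anion: str) -> str:
    """Predict the compact A class or the channel-type B class from the combined ion size."""
    return "B" if CATION_VOL[cation] + ANION_VOL[anion] > THRESHOLD else "A"

for cation in CATION_VOL:
    print(cation, {anion: predict_topology(cation, anion) for anion in ANION_VOL})
# Moving down a column (longer cation) or right along a row (heavier halide) pushes the
# combined size past the threshold, reproducing the stair-shaped A-to-B boundary.
```

Fitting an actual threshold would, of course, require the measured ionic volumes behind Tables 1 and 2; the point here is only that a single size-like variable reproduces the qualitative pattern.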
Many reported syntheses are yet to fit into an organised system In theory, the many choices of organic linkers available in the field of chemistry may add to the large number of ionic combinations to create nearly infinite possible cases, but it takes time for a theory to become reality. While many valuable efforts are being made to contribute, those with 1,3,5-benzenetricarboxylic acid (BTC) and 1,4-benzenedicarboxylic acid (BDC) have done their part particularly extensively, and the reported structures are organised in Tables 4 and 5. Tables 4 and 5 are useful for appreciating the variety of ionothermally prepared MOFs, and for searching purposes, but they give hardly any information on the chemical reactions that brought about the structures. In order to get a closer grasp on how ionothermal synthesis produced this variety, they must be organised into systems of related syntheses. However, many cases in both tables are rather discrete. Efforts need to be made, starting from what has been reported, to expand the literature by applying gradual variations of the ionic liquid. The correlation between the reaction solvent and the product In the previous section, we explored the chemical trends observable in the nickel, manganese, and cadmium-BTC systems. Some basic explanations have been provided by relating the size of the ionic species to the pore size of each structure. However, many questions still remain to be answered. For example, why does the topology shift have to occur right there, and not anywhere else? If even larger cations were used, will the topology remain unchanged or will a new one appear? In order to answer these questions, we have to gain deeper knowledge about the structures and the ions of the solvent. Qualitative analyses were simple, but it becomes necessary to add numbers to our logic to advance further. Trends to predict future outcomes In the nickel-BTC system we first examined, increasing the cation size caused the topology to shift from the more condensed A-class to the B-class with a larger cavity volume. Table 1 suggests what to expect for even longer cations. In addition, taking a deeper look into the A-class topology might also strengthen our previous reasoning. In the A-class topology, we can see that the cavities are not so well connected with each other [1,7]. On the other hand, the B-class topology has cavities connected to their neighbours to create a linear channel-shaped pore, as illustrated in Figure 3. The maximum length of the cation that can fit into the A-class topology is limited, but this is not so in the B-class topology. After looking at how the structures actually appear, we can now more confidently say that elongating the cation by a carbon is unlikely to exert enough pressure on the framework to give rise to a new topology. The next step is to test whether the predictions are correct. When a completely new material is synthesised and its structure is to be determined, advanced tools like single-crystal X-ray diffraction must be used to resolve all positions of the atoms in the unit cell. However, when we have a reference material with a known structure, simpler techniques like powder X-ray diffraction (PXRD) are enough to tell whether the new material has the same structure as the reference. Therefore, the common step is to obtain the PXRD pattern and compare it to the patterns of some suspected structures. It is only when the new pattern is different enough that the new material is subjected to complete structure analysis.
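Since the test described above reduces to asking whether the major reflections of the new pattern occur at the same angles as those of the reference, the comparison can be sketched as follows; the peak lists and the angular tolerance are illustrative assumptions rather than data taken from Figure 4.

```python
# Minimal sketch of the PXRD comparison: two materials are provisionally assigned the same
# topology when every major peak of the new pattern matches a reference peak in 2-theta.
import numpy as np

def same_topology(peaks_ref, peaks_new, tol_deg=0.15):
    """True if each major peak of the new pattern lies within tol_deg of a reference peak."""
    peaks_ref = np.asarray(peaks_ref, dtype=float)
    return all(np.min(np.abs(peaks_ref - p)) <= tol_deg for p in peaks_new)

reference = [9.6, 13.4, 16.7, 19.2, 26.8]   # hypothetical 2-theta values for the known phase
candidate = [9.6, 13.5, 16.7, 19.1, 26.9]   # hypothetical values for the new material

if same_topology(reference, candidate):
    print("Major peaks coincide: same topology as the reference.")
else:
    print("Patterns differ: full single-crystal structure analysis is needed.")
```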
Since our expectation was that the entries in the [HMI] row would have the same topology as the [BMI] row, we took the PXRD data for the two and compared them, as in Figure 4. To confirm the topological identity, several different combinations were selected. All major peaks occur at the same angles, and our prediction by extrapolation has been proven valid [7]. We have now seen that correlations are valuable in that they may be used to predict future outcomes. This is the moment where prediction is no more than an extrapolation. The effect of the final framework on the solvent properties The trend extrapolation introduced above was a success, but it is by no means a guarantee that similar arguments will always hold true. All scientific explanations are based on reductive models in which many details of reality must have been missed. However, discussions until now have only focused on how the variance in solvent properties, namely size, gives rise to another variance in the product framework, but never vice versa. Chemistry is a study of interactions, and the term 'interactions' implies that there may always be bidirectional influence. It surely is the solvent that was in the reaction system first, and the frameworks were then built under the influence of the solvent; thus, its influence on the product is more pronounced and also more important. However, the ionic species residing in different structures are in fact different, even if they were in the same bottle before being deployed to the reaction. We would now like to guide your attention to an interesting system where the framework exerts a strong pressure on the cation to alter its shape. In such cases, extrapolations may not give accurate results. Table 6 shows three reported entries in the not-so-extensively-studied cobalt-BTC system. Nevertheless, simplicity is not to be confused with incompleteness. While it may be true that the entire trend may not be fully described, deeper analysis may follow in a more complete manner for the part that has been, or at least the burden of describing so many entries in full detail is reduced. The bromide column is never an exception to the systems we have been through. The α-class topology has a pore volume smaller than the β-class, and it is the increase in the cation size that caused the shift.
Figure 4. PXRD data presented in pairs or triplets to illustrate the similarity and difference of the topologies occurring in the Ni-BTC system. (a) Structures synthesised from [PEMI]Cl, [PEMI]Br, and [PEMI]I show that all three in the [PEMI] row belong to the same topology group. (b) Three structures from the [HMI] row show that they belong to the same topology group, similarly to (a). (c) Two structures from the Cl column show that the topology groups of the [HMI] row and the [PEMI] row are identical. (d) The structure synthesised from [HMI]Cl is compared to that from [BMI]Br and shows that the two belong to the same class. The magnitude may be different, but the positions of the main peaks coincide [7].
However, a question that has never been addressed in the previous systems is: is the difference between [EMI] and [PMI] the same as that between [PMI] and [BMI]? In other words, they are gradual, but are they in scale? They both differ by one carbon, and the carbon-carbon bond length is nearly universally conserved. It seems they should differ only by an iota, if they were even different after all. The in situ conformations of the guest cations were taken and subjected to computational analysis [31].
The difference in volume between [EMI] and [PMI] was calculated to be 21.8 ų, which is significantly larger than 14.9 ų, the difference between [PMI] and [BMI]. It is apparent that this difference arose from the bent conformation of the butyl chain of the [BMI] cation; the distance from the terminal carbon to the first carbon in the chain was 2.918 Å in compound β2, exceeding the 2.567 Å of compound β1 by only a small difference. Carbon-carbon bonds are free to rotate about each other, but the β-class framework is stable enough to fix the conformation as severely bent as it appears; a remarkable example of the framework influencing the property of the solvent. Moreover, just because it appears as the same one step on the table does not mean the actual size difference between the ionic species is the same. Even though the β1 and β2 structures belong to the same topology class, they may have minor differences like the ones described in the manganese system. Even by a small bit, [BMI] is still larger than [PMI] and is expected to exert pressure on the framework towards retaining a larger void volume. However, this straightforward prediction is actually far from the truth. The β-topology framework is so rigid that the void volume and the framework volume stay nearly unchanged between [PMI] and [BMI]. It also deserves some attention that the β-topology occurs very rarely in other metal systems, suggesting that it is not so chemically favoured in many other environments [31]. While the rigidity of the framework can also be viewed as a measure of how favoured it is over other possible outcomes, it is interesting that this rare topology is so strongly preferred in this system, and in the cobalt system only. Also, attempts to synthesise crystalline frameworks with [PEMI]Br and [HMI]Br in the system all failed and yielded only amorphous solids. This further supports the absence of any other stable framework possible in the cobalt system. Additional studies must follow to provide explanations for the strikingly different preference of framework in the cobalt system. Reducing topologies can easily deliver deep insights into the structures The structural details of nanoscopic frameworks are often difficult to perceive. Some basic discussion may be made even with the structures completely ignored, but we have already seen many limitations to that. Understanding the structure is necessary to provide more thorough explanations for the chemical trends appearing in the organised systems of ionothermally prepared MOFs, including many unusual cases unexplainable by simple intuition. Just as organic chemistry cannot be approached without molecular formulas, inorganic chemistry cannot be explained without framework structures. We would like to dedicate this chapter to suggesting a method to break down the complications of nanoscopic structures to see the forest beyond the trees, and lastly, to tour around that forest. Metal atoms tend to exist in clusters In order to bring down the structures to simpler diagrams, the patterns, or segments of atoms, that occur frequently throughout the framework must be well noticed. After taking in the knowledge of the building blocks, we will look into a representative building to see how the blocks are assembled into a building. It is obvious that the organic linker will stay as it was before the reaction in most structures, as it is very difficult for the benzene ring to disassemble in our BTC example.
Often there are many atoms, or sites, that are capable of coordinating to metal atoms, but almost always, not all of them do. It is very difficult to predict which coordination mode the ligand will take, since even under the same topology, the ligands are found to adopt many different coordination modes [1,30,31]. Attempts have been made to collectively study coordination modes [34], but for the successful discovery of any laws governing them, the acquisition of more data is necessary. In collaboration with the coordination modes, though it is difficult to distinguish causation from correlation at this level, the reaction environment determines the shape in which the metal atoms exist in the framework. Tables 4, 5 and 7 show the nuclear types the metal atoms take in the framework, but the concept has not been visited yet. This 'nucleus' is a small collection of metal atoms together with the atoms from the organic ligand coordinating to them, and is more commonly called a 'metal cluster' because many metal atoms are found together in most structures. These metal clusters are one of the most important characteristics determining the topology of MOFs, and the frameworks are named binuclear, trinuclear, etc., according to the number of metal atoms present in the metal cluster. If small variations within the same topology are ignored, the framework can be viewed as a collection of simple connections between the unvarying organic ligand and the metal clusters, just like the vertices and edges of a mathematical 3D figure. The simplification illustrated in Figure 5 exemplifies the power of reduction in bringing different structures together. Although it could have been inferred from their identical molecular formulas, a great number of the structures introduced in Tables 4, 5 and 7 actually have the exact same framework. Structure explains the popularity of the [RMI][metal(BTC)] topology Some of the most commonly occurring structures need attention, not only because they will be frequently met in trials of novel conditions, but also because they will provide a valuable starting point for relating to other structures occurring in the same system, to understand correlations like the ones we have visited. The topology [RMI][Metal(BTC)] occurs in most metal systems that have been reported, and with the highest frequency. With this topology as an example, we will show how a complex structure may be simplified. This way, details unnecessary for understanding the topology can be ignored and attention may be more easily focused on the topology itself. The characteristics that may vary within the topology without changing it include coordination modes, bond angles, and bond lengths within certain ranges. The simplification above is itself beautiful, but it is meaningless if a description of the topology does not accompany it. Description gives meaning to the structure and provides explanations for many of the observed phenomena. Based on a face-centred cubic (FCC) lattice, the unit cell of [RMI][Metal(BTC)] is very compact. Its binuclear metal cluster occupies all the FCC sites, and BTC occupies the interstitial sites. There are eight BTC ligands, and the rest of the interstitial sites appear empty in Figure 6. These sites, however, are not actually empty. There are eight metal clusters and eight BTC ligands in the unit cell, but each metal cluster carries a double positive charge while each BTC ligand carries a triple negative charge.
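The charge bookkeeping implied by these counts can be checked in a few lines; the sketch below simply restates the unit-cell composition given above and is not an independent result.

```python
# Charge balance for the [RMI][Metal(BTC)] unit cell as described in the text:
# eight metal clusters (2+ each) and eight BTC ligands (3- each) per cell.
clusters_per_cell, cluster_charge = 8, +2
btc_per_cell, btc_charge = 8, -3

framework_charge = clusters_per_cell * cluster_charge + btc_per_cell * btc_charge
print(f"Net framework charge per unit cell: {framework_charge}")   # -8

# Each imidazolium guest carries +1, so the cations needed to neutralise the cell:
print(f"RMI+ guest cations per unit cell:  {-framework_charge}")   # 8, i.e. one per [Metal(BTC)] unit
```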
The framework is negatively charged, as nearly every ionothermally synthesised framework is, and the charge balance is maintained by the guest cations occupying the rest of the interstitial sites. This leaves no void in the structure, which is thus stable. Nevertheless, the structure may not house longer cations regardless of how preferred it is over other possible options; it is simply impossible. This complies with the observation from Table 5 that the structure is very much preferred with [EMI], but only with [EMI], and the preference drops greatly as we move on to longer cations. Seemingly different structures may belong to the same topology class A large number of syntheses have been reported in the literature, but the number of novel topologies is much smaller. It is very interesting to see so many structures that once appeared all different converging into one topology. In this example, a group of structures with a different formula and a different nuclear type will be merged with the [RMI][Metal(BTC)] class that has been described above. Figure 7 depicts the [RMI][Metal(BTC)] structure. This same structure, however, is shared by the [EMI]2[In2Co(OH)2(BTC)2Br2] structure, which has a remarkably different molecular formula. The formula is the simplest tool to represent frameworks, but it can sometimes be misleading. Figure 8 shows an even more striking example. Although it is very difficult to catch any similarities from the formula or the structure at first glance, they actually fall under the same topology umbrella. This remarkable similarity is possible because some coordination sites of the trinuclear metal cluster are occupied by another molecular moiety, OAc in this case, as shown in Figure 7. This places the trinuclear clusters in the octahedral coordination mode, which is the maximum coordination that binuclear metal clusters can have. Outstanding properties of ionothermally prepared MOFs In previous sections, we have explored the diverse structures prepared by ionothermal synthesis and several perspectives through which groups of structures may be analysed to gain deeper insights. The last step is to find a practical use for those insights. The versatility of ionothermal synthesis, in that its reaction environment may be easily altered and related to the change in products, directly leads us towards the diversity of structures that may be prepared through the methodology. As such, ionothermal synthesis promises a variety of potential uses, although most of them have obstacles yet to be resolved along the way towards practical employment. Ion exchange is the key to making use of the pores Many of the most popularly studied applications of MOFs make use of the frameworks as molecular sieves. The nanoscale pores of MOFs can selectively filter out any chemicals that do not fit into them, and this selectivity can be chosen by industry from among the diversity of reported structures. The first use of ionothermally prepared MOFs is probably the same. In this case, ionothermal synthesis has the advantage that the solvent functions as a template and can be varied in size to modify the pore size. However, it is a double-edged sword that actually limits the practicality of ionothermal synthesis. To make any use of the pores, the templates occupying the pores must be removed. The problem is that they hardly ever can be. The void volumes of the structures synthesised with cations of varied size have been compared in Tables 4, 5 and 7.
Frameworks with the solvent resident in their channels, or cavities, tend to have compact structures with void volumes as low as 0% of the unit cell volume. For reference, MOF-5, a representative framework, has a void cavity as large as 70% of the unit cell volume. This absence of void volume arises because of the large solvent cations stuck in the cavity, rather than the framework itself. When calculated with the resident cations completely removed, the void volume increased to approximately 50% of the unit cell volume. In theory, the large volume occupied by the cations may be decreased by subjecting the framework to ion exchange with smaller cations, so that the rest may be used purposefully. Unfortunately for now, this possibility seems to stay only in theory. Given its important position, the first step in bringing ionothermal synthesis to practicality, tremendous efforts have been put into making this exchange possible, but they have rarely succeeded. In one case that we tested, evacuation of cations was observed in [BMI]2[Co2(BTC)2(H2O)2] crystals upon treatment with water, but only when accompanied by significant destruction of the framework [31]. Nevertheless, Li et al. reported partial but stable ion exchange with [EMI]2[In2Co(OH)2(BTC)2Br2] crystals [32], suggesting a new possibility for the ionothermal synthesis methodology. Placing metal atoms in proximity to yield novel characteristics The limitations posed by the irreplaceable templates have indeed disappointed the researchers, and presumably many of you, too. However, even if the pores of ionothermally prepared MOFs are totally unusable, the materials still have some valuable characteristics. It is very common in the world of nanoscience that a substance acquires characteristics completely different from those of its macroscopic bulk. One of the most frequently reported applications is the detection of chemicals via photoluminescence that changes upon encountering specific chemicals. This includes the photoluminescence of europium ions in [HMI][Eu(DHBDC)2], where DHBDC indicates 2,5-dihydroxyterephthalic acid, capable of detecting Ba2+ ions quantitatively [25], and [RMI][Eu2(BDC)3Cl] for the detection of aniline [18]. In addition, ionothermally prepared [EMI][Dy3(BDC)5], a rod-shaped polymer, has been shown to exhibit slow magnetic relaxation behaviour like single-molecule magnets [22]. It seems that ionothermally prepared structures may be applied for any purpose that exists in the field of nanochemistry.
Analyzing the performances of squash functions in capsnets on complex images Abstract Classical Convolutional Neural Networks (CNNs) have been the benchmark for most object classification and face recognition tasks despite their major shortcomings, including the inability to capture spatial co-location and the preference for invariance over equivariance. In order to overcome CNNs' shortcomings, CapsNets' hierarchical routing layered architecture was developed. The capsule replaces the average or maximum pooling techniques used in CNNs with dynamic routing between lower-level and higher-level neural units. It also introduces regularization mechanisms for dealing with equivariance properties in reconstruction, improving hierarchical data representation. Since capsules can overcome existing limitations, they can serve as potential benchmarks for detecting, segmenting, and reconstructing objects. As a result of analyzing the fundamental MNIST handwritten digit dataset, CapsNets demonstrated state-of-the-art results. Through the use of two fundamental datasets, MNIST and CIFAR-10, we investigated a number of squash functions in order to further enhance this distinction. When compared to Sabour and Edgar's models, the optimized squash function performs marginally better and presents fewer test errors. In comparison to both squash functions, the optimized squash function converges faster as it is more efficient and scalable, and it can be trained on any neural network. Introduction For years, convolutional neural networks (CNNs) have been the tools of choice when it comes to solving computer vision problems. Due to their feature extraction approach, CNNs are the most widely used algorithm for learning meaningful and hierarchical information (Ayidzoe, Yongbin, Kwabena, et al., 2021). When CNN features are applied to images and videos, spatial localization is highly useful; however, these networks have their own limitations. Convolutional layers require kernels to learn how to identify all relevant features in the input data. Their performance, however, is highly dependent upon the availability of a large volume of data in different variations (LaLonde & Bagci, 2018). It is thus important to augment the training dataset by performing transformations such as rotations and occlusions. Nevertheless, the burden of learning visual and modified features on a traditional CNN can be great. One issue with CNNs is that they require pooling to keep translational invariance and regulate the number of parameters. However, they do not explicitly depict the relationship between the positions of the features (Xiong et al., 2019). Because two identical objects in different orientations are not represented identically, as humans represent them, a vast amount of training data, augmentation operations, and network bandwidth are required. CNNs also have the issue that pooling leads to information loss during the forward pass, making it more difficult to locate smaller objects during localization and segmentation tasks.
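The pooling argument can be made concrete with a toy example, which is our own illustration rather than one drawn from the cited works: two feature maps whose activations sit at different positions within each pooling window produce identical max-pooled outputs, so the positional information never reaches the next layer.

```python
# Toy illustration: max pooling keeps "is the feature present?" but discards "where is it?".
import numpy as np

def max_pool_2x2(x: np.ndarray) -> np.ndarray:
    """2x2 max pooling with stride 2 on a square feature map."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

a = np.array([[9, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 7, 0],
              [0, 0, 0, 0]])
b = np.array([[0, 0, 0, 0],
              [0, 9, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 7]])   # same activations, shifted within each 2x2 window

print(max_pool_2x2(a))
print(max_pool_2x2(b))
print(np.array_equal(max_pool_2x2(a), max_pool_2x2(b)))  # True: the position is lost
```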
An innovative class of neural networks was proposed in (Sara et al., 2017) using the concept of a "capsule".According to the definition of capsules given by (Sara et al., 2017), capsules are described as a group of neurons that represent the existence of a feature as well as parameters related to its instantiation.These capsule vectors provide a richer representation of information in the network than the scalar activations of kernels in a traditional CNN.Capsules should therefore be able to encode both a visual feature's existence and its transformation within the application it is used to.Despite the potential of capsule networks, there are still many uncertainties surrounding how they function (Nair et al., 2021).All classes of neural networks can be compared to "black boxes", not only those with capsules.As neural networks have always been difficult to interpret, it is difficult to evaluate the benefits of capsules without comparing them with CNNs.Typical architectures of a CapsNet with the decoder and encoder parts are shown in Figures 1 and 2 respectively. Using a deep visualization technique, this study will generate images that visually represent information contained in capsules in an effort to clarify them.By comparing these images to other CapsNets research works done in a similar fashion, visual evidence can be provided to support the hypothesized benefits of capsule networks.Additionally, the visual impact of modifying a capsule's value is examined for a more accurate assessment.Additionally, a reconstruction network and dynamic routing will be examined as part of the study of the original ca psule network architecture, which was proposed in (Sara et al., 2017). Summarily; • In this paper, two benchmark datasets are examined to see how the squash function affects CapsNets. • Perform extensive analysis on the performance of the squash functions using visualizations. In the remaining sections, we introduce relevant works in Section 2. The methodology and squash functions are described in Section 3, followed by the experimental setup and results in Section 4, and finally the conclusion is presented in Section 5. Related work Hinton's 2017 paper (Sara et al., 2017) presents the capsule vectors as convolutional architectures, based on the concept of a capsule neural network.An alternative to traditional down-sampling methods such as max pooling is proposed that selectively links units within a capsule together.In 2018, Hinton published a follow-up article (Hinton et al., 2018) that extended capsules to matrix form and further developed the routing scheme; however, our study will primarily focus on the architecture discussed in the baseline study (Sara et al., 2017), and we will perform experiments using the dynamic routing algorithm(see Algorithm 1) in parallel with those in (Sara et al., 2017). Several other modifications to the original architecture have also been proposed, such as in (Edgar et al., 2017;Yaw et al., 2022a), where the number of layers and capsule size is increased as well as changes to the activation function is made.Although the dynamic routing procedure recently proposed by (Sara et al., 2017) is effective, there is no standard formalization of the heuristic. 
According to (Wang & Liu, 2018), the routing strategy proposed by (Sara et al., 2017) can be partially expressed as an optimization problem that minimizes a clustering-like loss and a KL regularization term between the current coupling distribution and its last state.In addition, the authors introduce another simple routing method that exhibits a number of interesting features.As described in (Rawlinson et al., 2018), capsules without masking may be more generalizable than those with masking.According to (Martins et al., 2019), multi-lane capsule networks (MLCNs) are a resource-efficient way to organize capsule networks (CapsNets) for parallel processing and high accuracy at low costs.With CapsNet, MLCNs consist of several (distinct) parallel lanes, each contributing to a dimension of the result.In both Fashion-MNIST and CIFAR-10 datasets, their results indicate similar accuracy with reduced parameter costs.In addition, when using a proposed novel configuration for the lanes, the MLCN outperforms the original CapsNet.Furthermore, MLCN has faster training and inference times than CapsNet in the same accelerator, over twofold faster.By combining pairwise inputs with the capsule architecture, the authors in (Neill, 2018) construct a Siamese capsule network.Siamese Capsule Networks outperform strong baselines on two pairwise learning datasets, exhibiting the greatest performance in the few-shot learning setting where pairwise images contain unseen subjects. A wide range of applications have also been found for capsule networks.For instance, CapsNets are well suited for predicting traffic speed because of the spatiotemporal character of traffic data expressed in images (Kim et al., n.d..).The work of (Steur & Schwenker, 2021) contributes to the development of CapsNets for text classification on six datasets selected.Based on empirical results, the authors demonstrate the robustness of CapsNets with routing-by-agreement for a wide variety of net architectures, datasets, and text classification problems.There have been good results with CapsNets in other areas, such as hyperspectral image classification (Using & Training, n.d..) (Ding et al., 2021), where labelled data is harder to obtain.Agricultural (Kwabena et al., 2020) and health (Afriyie, 2021;Ayidzoe, Yongbin, Kwabena, et al., 2021;Yaw et al., 2022c) applications of CapsNets have been widely explored. Though these capsule networks demonstrate great potential, their justification for performing so well is less clear.As (Sara et al., 2017) indicates, capsules have several potential advantages, such as encoding feature transformations and enhancing information aggregation through dynamic routing.While impressive, the results of the experiments cannot prove that these characteristics are present in the capsules.A number of experiments carried out in (Mukhometzianov & Carrillo, n. d..) 
(Lian et al., 2023;Marchisio et al., 2020;Zhang et al., 2017) suggest that certain object features may be controlled via capsule manipulation, but this is not fully explored.This methodology is limited in scope again, with the authors in (Sun et al., 2021) taking a more concerted approach to explainability by varying output capsules in multiple dimensions.To analyze the advantages of a capsule network over a traditional CNN, activation functions must be applied to the capsule network.Due to the lack of thorough exploration of capsule networks at a feature level, understanding capsules is crucial before adopting them in the field.In the next section, we present activation functions employed by various researchers, including a baseline activation function developed by (Sara et al., 2017). A comprehensive review of CapsNet based methods was presented by (Goceri, 2020), followed by the design of a new CapsNet topology, the application of the proposed topology to three types of tumours, and the comparative evaluation of the results obtained by other methods.In the proposed approach, 92.65% accuracy is achieved on tumor classification with efficiency according to the numerical results presented by the Author.According to comparative evaluations, the proposed network is more accurate at classifying images than other approaches.By using the Capsule network, (Tiwari, 2021) proposes a deep learning-based approach for detecting melanoma.Based on a comparison of a multi-layer perceptron and convolution network with a Capsule network model, the author concluded that the classification accuracy was 98.9%.As a result of the study, a CapsNet model with fewer learning parameters was found to be more generalizable and performed better in detecting skin cancer.According to (Tiwari & Jain, 2021) an X-ray diagnostic system can be used to detect the presence of COVID-19 based on a decision support system based on the image.The visual geometry group capsule network (VGG-CapsNet) is described in their paper as a CapsNet-based diagnostic system for COVID-19.VGG-CapsNet performs better for COVID-19 diagnosis than CNN-CapsNet, according to simulation results. Methodology A deep visualization technique of activation maps is applied to trained CapsNets in our study as a first step.In this study, we compare the resulting datasets using some squash functions in order to distinguish different feature representations on the CapsNets and gain insight into their potential.To determine whether capsule vectors represent transformation parameters directly, the second experiment further scrutinizes capsule features.A description of the capsule network architecture and a presentation of various squash functions will be presented in this section.Detailed results and experimental details will follow for the different squash functions. Capsule network architecture As first described by (Sara et al., 2017), whole vectors are used for representing internal properties (also referred to as instantiation parameters, including pose) of entities within an image, and each capsule represents one instance of an entity within the image.Pooling is used as a crude way to route outputs in CNNs, which use single scalar outputs.Subsampling is performed by pooling so that neurons are invariant to viewpoint changes; capsules, on the other hand, seek to preserve the information in order to achieve equivariance.To achieve translation equivariance, the lower-level capsules (such as the nose, ears, etc.) 
are sent as input to parent capsules (such as the face), representing part-whole relationships through linear transformations. Thus, pooling is replaced with dynamic routing. Originally developed in computer graphics, where images are rendered based on their internal hierarchical representations, this theory proposes that the brain solves an inverse graphics problem by deconstructing an image into its latent hierarchical properties when presented with it. The CapsNets proposed by (Sara et al., 2017) use dynamic routing (DR) and a CNN to solve the MNIST dataset (images of 28 × 28 pixels). In the architecture, the first capsule layer uses two convolutional layers as the input representations, which are then routed to the final class capsule layer. The initial convolutional layers make it possible to reuse and replicate learned knowledge from local feature representations in other parts of the receptive field. An iterative dynamic routing algorithm determines the capsule inputs. A transformation matrix Wij is applied to the output vector ui of capsule i. An object's state (e.g. orientation, position, relationship with an upper capsule) is indicated by the direction of the vector ui, whose length represents the probability that the lower-level capsule detected the object. A prediction vector û(j|i) is created from the output vector ui, where û(j|i) = Wij ui. In the next step, the log prior probabilities bij are normalized into coupling coefficients; in the original formulation this is done with a softmax, cij = exp(bij) / Σk exp(bik). When the scalar product between the prediction û(j|i) from layer L and the output of parent capsule j in layer L+1 is large, the coupling coefficient cij is increased, while the coupling coefficients of the remaining potential parent capsules are decreased. Routing by agreement is then carried out via coincidence filtering to find clusters of predictions that are close to each other. A nonlinear normalization (also known as the squash function) uses the lengths of the entities' output vectors to represent the probability of entity presence.

Squash functions In capsule networks, a non-linear activation function called the squashing function is used after the iterative routing procedure. This was proposed by (Sara et al., 2017) in their work on capsule networks. The squashing function transforms the length of the output vector into the probability of the existence of the entity present within the capsule. It shrinks long output vectors to slightly below unit length and short vectors to almost zero. This study therefore analyses the performance of different squash functions on complex dataset images. As a result, the following squash functions (see Table 1) are tested in terms of performance on complex images.

Loss function For the image classification task, for each image capsule we used a separate margin loss (Sara et al., 2017) to identify whether a given image category is present within a capsule. For image capsule s, the margin loss Ls is given by Ls = Ts max(0, m+ − ‖vs‖)² + λ(1 − Ts) max(0, ‖vs‖ − m−)², where vs is the output vector of capsule s. Here, Ts = 1 if the image category exists within the image capsule, otherwise it is set to 0. m+ and m− are set to 0.9 and 0.1, respectively. The down-weighting factor λ is set to 0.5, which gave the optimal performance.

Datasets Experiments were conducted on three benchmark datasets selected specifically for image classification. The details of each dataset are shown in Table 2.

Implementation For the experimental analysis, we utilized Keras for the front-end and TensorFlow for the back-end.
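Before turning to the hardware and hyperparameters, the operations described above (the squash nonlinearity, the margin loss of Equation (1), and routing-by-agreement) can be sketched in NumPy. This is a minimal illustration following the standard Sabour et al. (2017) formulation rather than the authors' actual code; the toy capsule counts and dimensions are assumptions made purely for the example.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Shrink long vectors to just below unit length and short vectors towards zero."""
    sq_norm = np.sum(np.square(s), axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm) / np.sqrt(sq_norm + eps)
    return scale * s

def margin_loss(v_lengths, targets, m_pos=0.9, m_neg=0.1, lam=0.5):
    """Margin loss summed over image capsules.
    v_lengths: (batch, n_classes) capsule output lengths.
    targets:   (batch, n_classes) one-hot labels (T_s)."""
    pos = targets * np.maximum(0.0, m_pos - v_lengths) ** 2
    neg = lam * (1.0 - targets) * np.maximum(0.0, v_lengths - m_neg) ** 2
    return np.sum(pos + neg, axis=-1).mean()

def dynamic_routing(u_hat, n_iter=3):
    """Routing-by-agreement between a lower capsule layer and its parents.
    u_hat: (n_lower, n_upper, dim) prediction vectors u_hat_{j|i} = W_ij u_i."""
    n_lower, n_upper, _ = u_hat.shape
    b = np.zeros((n_lower, n_upper))                   # log prior logits b_ij
    v = None
    for _ in range(n_iter):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients
        s = np.einsum('ij,ijd->jd', c, u_hat)          # weighted sum per parent capsule
        v = squash(s)                                   # parent outputs v_j
        b = b + np.einsum('ijd,jd->ij', u_hat, v)       # agreement update
    return v

# toy usage: 6 lower capsules routed to 3 parent capsules of dimension 8
rng = np.random.default_rng(0)
v = dynamic_routing(rng.normal(size=(6, 3, 8)))
print(v.shape)  # (3, 8)
```

The routing loop mirrors the description above: coupling coefficients are obtained from the logits, the parent outputs are squashed weighted sums of the predictions, and the logits are updated by the agreement between predictions and parent outputs.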
Our Python code was run on an NVIDIA GeForce GTX 1050 GPU with 16 GB RAM, a Windows OS, and an 8th-generation Intel Core i5 CPU @ 3.70 GHz. Using the default parameters, with 100 batches, the proposed optimized CapsNets were trained for 200 epochs on the FMNIST dataset and 100 epochs on the CIFAR-10 dataset (a rough configuration sketch is given after the evaluation metrics below). In the dynamic routing algorithm proposed by (Sara et al., 2017), three routing iterations are performed. The learning rate was set to 0.001 during training, with a learning rate decay of 0.9. The margin loss function was then employed (see Equation 1) to train the models. In our experimentation, we applied standardization over each image, and we trained all the networks from scratch. Only the best model is saved during training, which is controlled by patience, an early stopping hyperparameter set to 10. In the primary capsule layer, 8-dimensional vectors were instantiated for each capsule, and 16-dimensional vectors for each convolutional and image capsule. Within the image capsules, the length of each capsule indicates the existence of a specific image category within a dataset, which is then utilized to identify the image categories within the dataset.

Experimental results and discussion Our experiments were evaluated according to accuracy, based on related research in the same domain (Harilal & Patil, 2022). A summary of the experiment results, compared across two benchmark datasets, is shown in Table 3. A graphical representation of the performance of the various squash functions used in this study is presented in Figure 3. Based on the comparison of performance, the (Sara et al., 2017) and (Edgar et al., 2017) squash functions produce large activations even for small values of sj compared to the optimized squash function (Yaw et al., 2022b), resulting in faster initial growth of the function. Therefore, the optimized squash function outperforms the (Sara et al., 2017) and (Edgar et al., 2017) squash functions. Moreover, the optimized squash function (Yaw et al., 2022b) can compress short vectors to almost zero and long vectors to just below one. Hence, this shows that the optimized squash function produces better sparsity, preventing capsules from holding on to high activation values. Sparsity is used to discriminate and emphasize highly discriminative capsules, which helps the network capture information from images with varied backgrounds. Figure 3 illustrates the performance improvement achieved by the optimized squash function.

In order to determine the effectiveness of any proposed classification model, there are several methods available. In order to evaluate the performance of the various squash functions on standard datasets, the following evaluation parameters were used. Accuracy: a measure of how many samples are correctly classified out of the total number of samples; for all experiments, we quote the overall accuracy. Loss: this metric measures how far the model's predictions differ from the true labels; these experiments use the margin loss as the measure of loss. Clustering: we derive and analyze the clustering achieved by the class capsule layer; the routing algorithm on the datasets proves to be effective in this instance. Area Under Curve (AUC): in order to analyze the performance of the model on imbalanced datasets, the receiver operating characteristic (ROC) and precision-recall (PR) curves are calculated.
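As referenced in the Implementation subsection, the stated training setup (learning rate 0.001, decay factor 0.9, early stopping with patience 10, saving only the best model, and 200/100 epochs for FMNIST/CIFAR-10) might be wired up in Keras roughly as follows. build_capsnet(), margin_loss_fn, the checkpoint path and the data arrays are hypothetical placeholders rather than the authors' code, and the batch size of 100 is an assumption based on the text.

```python
import tensorflow as tf
from tensorflow import keras

# build_capsnet(), margin_loss_fn and the data arrays are placeholders for the
# paper's CapsNet constructor, Equation (1) loss, and pre-standardized datasets.
model = build_capsnet()
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss=margin_loss_fn)

callbacks = [
    keras.callbacks.EarlyStopping(monitor='val_loss', patience=10),
    keras.callbacks.ModelCheckpoint('best_capsnet.h5', monitor='val_loss',
                                    save_best_only=True),
    keras.callbacks.LearningRateScheduler(lambda epoch, lr: lr * 0.9),
]

model.fit(x_train, y_train,
          batch_size=100, epochs=200,        # 100 epochs assumed for CIFAR-10
          validation_data=(x_val, y_val),
          callbacks=callbacks)
```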
The performance of the different models was compared at the 200 epoch mark rather than training each model until convergence because of computational constraints which are depicted in Figure 4. FMNIST showed that the optimized squash function (Yaw et al., 2022b) achieved 92.78% accuracy, and Edgar's model (Edgar et al., 2017) and Sabour's model (Sara et al., 2017) achieved 92.78% and 92.49% accuracy, respectively.With the FMNIST dataset as a training set (see Figure 5), the optimized squash function, Edgar's model, and Sabour's model all showed similar classification error rates of 7.20%, 7.22, and 7.51. Since the CIFAR-10 is complex and computationally constrained, we trained the images for 100 epochs.A detailed analysis of the performance assessment for the CIFAR10 dataset can be found in Table 4.A model's performance on imbalance datasets does not depend on the class in which the data is distributed.On the CIFAR10, the optimized squash function (Yaw et al., 2022b) achieved the highest accuracy of 86.7%, compared to Edgar et al (Edgar et al., 2017) squash function accuracy of 85.63% and Sabour et al (Sara et al., 2017) original squash function accuracy of 84.57% (see Figure 6). According to our analysis, the optimized squash function (Yaw et al., 2022b), Edgar et al model (Edgar et al., 2017), and Sabour's model (Sara et al., 2017) all had classification error rates of 13.21%, 14.37%, and 15.43% when training with the CIFAR10 dataset.The optimized squash function performed marginally better than Edgar's model and Sabour's model when used on the same dataset, achieving 87.79%, 85.63%, and 84.57% accuracy, respectively.On the basis of the per class accuracy, all the models are assessed on the individual classes.Using a different dataset or setting a different hyperparameter can lead to better results and greater accuracy.Due to the imbalance nature of the dataset, we generated and analyzed receiver operating characteristics(ROC) and precision-recall(P-R) curves for all the models for CIFAR 10 and FMNIST.A receiver operating characteristic (ROC) curve and a precision-recall (PR) curve were used to determine how effectively the models distinguish between the different classes.There is a paradoxical relationship between accuracy and performance when datasets have highly imbalanced classes since classes with large samples tend to overshadow smaller classes (Zhao & Cen, 2014).Since the area under the curve (AUC) measures the sensitivity and specificity of the model's predictions across thresholds (Hajian-Tilaki, 2013), we use it to summarize the model's performance across thresholds.As shown in Figure 7, the ROC curves for the two imbalanced datasets all lie above the diagonal, which indicates that the optimized model is effective at discriminating between categories.Edgar's and Sabour's models show weaker discriminative power than the optimized squash function. The PR curves shown in Figure 8 are also appropriate for evaluating highly-imbalanced datasets.Even with the class imbalance, the optimized model was able to discriminate between the different categories effectively regardless of the class imbalance. Similar experimentation for the ROC and PR curves on the CIFAR10 dataset for all the three models are shown in Figure 9 with the optimized model discriminating better among the different classes. 
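As a sketch of how the per-class ROC and precision-recall curves discussed above can be computed with scikit-learn, assuming one-hot test labels y_true and predicted class-capsule lengths y_score (hypothetical variable names):

```python
import numpy as np
from sklearn.metrics import roc_curve, auc, precision_recall_curve

def per_class_curves(y_true, y_score):
    """One-vs-rest ROC and PR curves for a multi-class problem.
    y_true:  (n_samples, n_classes) one-hot ground truth.
    y_score: (n_samples, n_classes) predicted class-capsule lengths."""
    curves = {}
    for k in range(y_true.shape[1]):
        fpr, tpr, _ = roc_curve(y_true[:, k], y_score[:, k])
        prec, rec, _ = precision_recall_curve(y_true[:, k], y_score[:, k])
        curves[k] = {'roc_auc': auc(fpr, tpr), 'roc': (fpr, tpr), 'pr': (rec, prec)}
    return curves
```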
For analyzing the separability of the clusters formed at the class capsule layer, we used t-distributed stochastic neighbor embedding (TSNE) (García-Alonso et al., 2014). The formation of distinct clusters confirms that the model is able to classify each test image correctly. For the FMNIST and CIFAR10 datasets, Figure 10 shows the clusters for all three models at the secondary capsule layer. In contrast to the original and Edgar's CapsNet models, the optimized model forms distinct clusters for the datasets (although they overlap). A few outliers from each model's cluster can be observed; however, they are not too far from their respective clusters. Based on these results, the optimized model has good discriminative ability in comparison with the other models.

Conclusion Our paper is unique in two respects: 1) we evaluated the performance of a variety of squash functions on complex images using CapsNets; 2) a comparison of the optimized squash function with other squash functions showed that the optimized squash function performed better, significantly reduced the number of parameters, and introduced interesting changes to the CapsNet model. Using two standard datasets with complex backgrounds, we tested three different squash functions: the optimized squash function, Edgar's squash function, and Sabour's squash function. Based on the comparison of these squash functions in CapsNets, the optimized squash is clearly superior to the other models, since the entities in the images are well preserved. The optimized squash function also improves CapsNet performance by preventing information sensitivity in addition to shrinking vectors. The sigmoid function was chosen instead of the softmax function for all dynamic routing models in order to achieve better normalization of the coupling coefficient. The optimized squash function also employs feature extraction so that images can be classified better based on their feature information. Using the feature extraction technique in the encoder, more discriminable feature representations could be created when dealing with complex background data. The optimized squash function achieves state-of-the-art results on the standard datasets, demonstrating its effectiveness. The optimized squash is a new method and an important implementation idea for alleviating the problem of CapsNets' information sensitiveness. We hope to study more squash functions and modify them in the future so that they can perform better in classifying complex images.

Figure 1. A typical architecture of the CapsNet encoder with an image from the MNIST dataset.

Figure 2. A typical architecture of the CapsNet decoder with an image from the MNIST dataset.

Figure 3. Comparison between different squash functions.

Figure 9. Multi-class receiver operating characteristic (ROC) curves and precision-recall curves for CIFAR10. Panels (a), (b) and (c) show the ROC curves for the (a) Afriyie et al (Yaw et al., 2022b), (b) Sabour et al (Sara et al., 2017), and (c) Edgar et al (Edgar et al., 2017) models; panels (d), (e) and (f) show the precision-recall curves of the respective models.
Figure 10. Visualization of the clusters formed at: (a) the FMNIST-caps-amp layer of the optimized model; (b) the FMNIST-caps-amp layer of Sabour's model; (c) the FMNIST-caps-amp layer of Edgar's model; (d) the CIFAR10-caps-amp layer of the optimized model; (e) the CIFAR10-caps-amp layer of Sabour's model; (f) the CIFAR10-caps-amp layer of Edgar's model.

Table 2. Properties of the datasets.
Quality in perinatal care: applying performance measurement using joint commission on accreditation of healthcare organizations indicators in Italy Background Maternal and child health are internationally considered to be among the best measures for assessing health-care quality. The study was carried out with the following aims: 1) to assess the quality of perinatal care (PC) by measuring the frequencies of the five PC indicators developed by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) and comparing results with international standards; 2) to examine whether maternal, pregnancy care and neonatal characteristics could be factors associated with the quality of perinatal care hospital performance, measured through these indicators. Methods We retrospectively reviewed medical charts of women over the age of 18 who experienced delivery in Gynecology/obstetrics wards between January–December 2016, and those of their newborns hospitalized in the Neonatology or Neonatal Intensive Care Unit (NICU) of a public non-teaching hospital in Catanzaro (Italy). Indicators were calculated according to the methodology specified in the manual for JCAHO measures. Univariate and multivariate analyses were performed to test the independent association of maternal, pregnancy care and neonatal characteristics on the adherence to JCAHO PC indicators. Results The records of 1943 women and 1974 newborns were identified and reviewed in order to be included in at least one of the PC indicators. Elective/early-term delivery, was performed in 27.6% of eligible women, far from the recommended goal (0%); cesarean section in nulliparous women with a term, singleton baby in a vertex position exceeded the suggested target of < 24% and the adherence to antenatal steroids administration was suboptimal (87%). Results of the exclusive breastfeeding indicator achieved a better performance (81%) and compliance with the PC-04 indicator was satisfactory with only 0.4% healthcare-associated bloodstream infection developed in eligible newborns. Conclusions This is the first study performed in Italy that has evaluated the quality of PC by using all the five JCAHO indicators. The application of this feasible set of indicators allowed us to measure several aspects of PC for which there is no standardized monitoring system in Italy. Our findings revealed significant deficiencies in the adherence to recommended processes of PC and suggest that there is still substantial work required to improve care. Electronic supplementary material The online version of this article (10.1186/s12874-019-0722-z) contains supplementary material, which is available to authorized users. Introduction Maternal and child health is a public health priority, because pregnancy, childbirth and puerperium are leading causes of hospitalization for women, and birth-related events are internationally considered to be among the best measures for assessing health-care quality. The Joint Commission on Accreditation of Healthcare Organizations (JCAHO) developed the Perinatal Care (PC) core measure set that includes five metrics with sufficient evidence that better performance are clinically important and are possible with system and process improvement [1,2]. These core measures were chosen from a broader set among those recommended by the National Quality Forum (NQF) by a technical advisory panel of experts in perinatal care. 
The benefit of the core measures is that they provide a national, standardized set of quality metrics that hospitals can use [3]. In Italy, since 2010, the Outcomes National Plan (Piano Nazionale Esiti -PNE), started up by the National Agency for Regional Health Services (Agenzia Nazionale per i Servizi Sanitari Regionali-Age.Na.S.), has provided an active evaluation of hospital performance, but the PNE's indicators regarding perinatal care focus attention on cesarean section only [4]. Also, the 2011-2013 National Health Care Plan underlines the need for development and implementation of certification programs for hospital birth centers, by involving scientific societies, as well as associations of obstetricians and nurses [5,6]. The quality of care provided to the adult hospitalized Italian population has been scrutinized in the past years by use of adequate indicators [7,8], whereas only sparse data is available for perinatal care, although an interesting approach suggesting the use of a set of 19 indicators for the performance assessment in the maternity pathway in one region of Italy has thoroughly taken into account also perinatal care indicators [9]. In this context, the primary aim of this study was to assess the quality of perinatal care in a specific geographical area of Italy using the JCAHO indicators, since, compared with PNE indicators, they involve more aspects of perinatal care and include indicators for which there is no standardized monitoring system in Italy, such as breastfeeding or elective delivery [10]. Further aims of the study were to assess the feasibility of these quality indicators in our healthcare setting that could be used to monitor the effects of quality improvement interventions, and to analyze whether maternal, pregnancy care and neonatal characteristics could be associated with the quality of perinatal care hospital performance, measured through JCAHO PC indicators. Materials and methods The study was carried out by retrospectively reviewing medical charts of women over the age of 18 who experienced delivery in Gynecology/obstetrics wards between January 1 and December 31 2016, and those of their newborns hospitalized in the Neonatology or Neonatal Intensive Care Unit (NICU) of a public non-teaching hospital in Catanzaro (Italy). The medical charts of the women were matched with those of their newborns and reviewed concurrently. Medical charts were selected according to the list of the International Classification of Diseases, 10th Revision, Procedure Coding System (ICD-10-CM/PCS) codes and other diagnosis and procedures codes. We used the JCAHO perinatal core measure set, which considers the following: Elective deliveries (PC-01); Cesarian section (PC-02) in nulliparous women with a term, singleton baby in a vertex position (NTSV); Antenatal steroids (PC-03); Healthcare-associated bloodstream infections in newborns (PC-04) and Exclusive breast milk feeding (PC-05). Indicators were calculated according to the methodology specified in the manual for JCAHO measures and summarized in Additional file 1. Whenever the condition described by one of the indicators appeared in the medical record, a score of 1 was assigned if the procedure had been performed consistently with that defined by the indicator, otherwise a score of 0 was attributed. Since the manual for JCAHO measures does not provide clear target rates, we have used the updated reference goals proposed by NQF and Healthy People 2020, to compare calculated indicators. 
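As a minimal illustration of the scoring rule and frequency calculation just described, the sketch below codes a hypothetical chart-review extract in Python and computes an indicator's adherence rate against a reference goal; the values are invented for illustration and are not the study data.

```python
import pandas as pd

# Hypothetical chart-review extract: one row per patient eligible for an
# indicator; 'performed' records whether care was consistent with the measure.
charts = pd.DataFrame({
    'indicator': ['PC-05', 'PC-05', 'PC-05', 'PC-05', 'PC-05'],
    'performed': [1, 1, 0, 1, 1],   # 1 = consistent with the indicator, else 0
})

# Indicator frequency = patients satisfying the condition / eligible population.
rate = charts.loc[charts['indicator'] == 'PC-05', 'performed'].mean()
goal = 0.75  # example reference goal (75% for PC-05, as cited in the text)

print(f'PC-05 adherence: {rate:.0%} (goal: {goal:.0%}, met: {rate >= goal})')
```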
Regarding the PC-01 indicator, although the optimal rate is considered to be 0% [11], Clark et al. considered that the rate of this indicator will never be and should never be consistently zero [12]. For the PC-02 indicator Healthy People 2020 chose a target of a 23.9% NTSV rate [13]. For the PC-03 and PC-04 indicators, the optimal rate is considered the best performance, that is 100 and 0%, respectively [14,15]. Finally, for PC-05, a goal of 75% is considered acceptable [16]. Two physicians not involved in patient' care, but who had been acquainted with the specification manual released at the time of the study design, collected the data and retrieved them on a standardized electronic report form. Maternal, obstetrical and neonatal data were obtained. Specifically, information included socio-demographic and clinical characteristics, obstetric history and pregnancy, delivery, and characteristics of the newborn. Statistical analysis Data analysis was performed through the following steps: 1) for each indicator, frequencies were calculated as the proportion of patients who satisfied the condition of a specific indicator, divided by the total eligible population; 2) then univariate analysis using χ2 test for categorical variables, and Student t-test for independent samples for continuous variables was performed, to explore the association between some PC indicators (elective delivery, cesarian section and exclusive breastfeeding) and several maternal, pregnancy care and neonatal characteristics; 3) furthermore two multivariate logistic regression models were performed to test, after controlling for other variables, the independent association of each of the variables already evaluated at the univariate analysis with the cesarean section in NTSV (model 1) and the adherence to exclusive breast milk feeding measure (model 2). Independent variables for which p was 0.25 or less in univariate analysis were included in the multivariate stepwise logistic regression models. The significance level for variables entering the logistic regression models was set at 0.2 for inclusion and at 0.4 for removal from the model. A two-sided p-value of 0.05 or less was considered as indicating a statistically significant difference. The Ethics Committee of "Mater Domini" Hospital of Catanzaro (Italy) approved the protocol of the study (Prot.E.C.No. 2016/245) in 22 Dec 2016. Considering the nature of the present study, which was based on reviewing medical records of discharged patients, no written consent was needed by the patients. Results A total of 1943 women records were reviewed and, of these, 1172 were eligible for at least one of the PC-Mothers measures (PC-01, 02 and 03). Considering a total of 36 twin-births of which 5 with a single eligible newborn, 1974 newborns' records were identified and reviewed in order to be included in one of the Newborn PC (PC-04 and 05) subpopulations. The medical records of 297 women were reviewed for the "Elective delivery" indicator, 904 for the "Cesarean section" indicator and 31 for the "Antenatal steroids" indicator. Moreover, 473 newborns' medical records were reviewed for the "Healthcare-associated bloodstream infections in newborns" indicator, and 1687 for the "Exclusive breast milk feeding" indicator. 
Frequency distribution of maternal, pregnancy and prenatal care characteristics according to "Elective delivery", "Cesarean section" and "Exclusive breast milk feeding" rates are shown in Additional file 2, whereas the scoring of the five PC indicators is reported in Table 1.

"Elective delivery" indicator (PC-01) Elective delivery at > 37 and < 39 weeks of completed gestation (early-term delivery) was performed in more than 25% of the eligible patients.

"Cesarean section" indicator (PC-02) Results of the multivariate stepwise logistic regression analysis confirmed those of the univariate analysis, except for maternal age, IUGR and maternal comorbidities, which were no longer significantly associated with cesarean section in NTSV; location of membrane rupture was removed from the model (Model 1 in Table 2).

"Antenatal steroids" indicator (PC-03) Only 31 women delivered at > 24 and < 34 weeks of gestation and were considered eligible for antenatal steroids administration. Of these, 87.1% received at least one dose of antenatal steroids before delivering preterm newborns; specifically, 9 (29%) did not receive any therapy, 6 (19.3%) received one dose, 14 (45.2%) received 2 doses, and 2 (6.5%) received more than 2 doses. For all women, each dose consisted of 12 mg intramuscular betamethasone. All 36 newborns whose mothers were eligible for antenatal steroids administration required NICU admission. Respiratory distress syndrome (RDS) occurred in about 60% of these newborns regardless of whether they had been exposed to steroids, whereas mechanical ventilation was required by 60% of newborns who were not exposed to steroids compared to 48.4% of those whose mothers received steroids, although no significant differences between the two groups were revealed by univariate analysis.

"Healthcare-associated bloodstream infections in newborns" indicator (PC-04) Of the 473 eligible newborns, only 2 (0.4%) developed a microbiologically confirmed healthcare-associated bloodstream infection, caused by coagulase-negative staphylococci. The mean birth weight was 2834 g (range 620 to 4770 g), 42.3% of newborns needed NICU admission, and the most common primary diagnoses were prematurity, small for gestational age (SGA), and RDS. Neonatal comorbidities, mostly RDS, occurred more frequently in newborns with a gestation period of ≥ 30 weeks (77.5%).

"Exclusive breast milk feeding" indicator (PC-05) Multivariate stepwise logistic regression analysis results confirmed those of the univariate analysis, except for IUGR, pregnancy weight gain and amniotic fluid volume, which were no longer significantly associated with adherence to exclusive breast milk feeding; moreover, maternal education and maternal comorbidities were removed from the model (Model 2 in Table 2).

Discussion Perinatal care quality and safety is a complex process aimed at achieving maximum health potential for the fetus, the newborn and the mother, and, similarly, evaluating the quality of perinatal care is complex because it involves different populations. This is the first study performed in Italy that has evaluated the quality of perinatal care by using all five JCAHO PC indicators. The application of this feasible set of indicators allowed us to measure several aspects of perinatal care for which there is no standardized monitoring system in Italy, as well as the factors associated with any suboptimal performance.
Overall, the results pointed out that the quality of perinatal hospital care, measured through the JCAHO PC indicators, is indicator dependent, with exclusive breastfeeding performing well, whereas for most indicators there is room for improvement. The most critical result of our study pertains to the elective/early-term delivery, that has been performed in 27.6% of eligible women, far from the goal (0%) set by Clark et al., that strongly confirmed the commitment to the elimination of early term elective delivery [12]. This result is concerning, since this indicator has been reported to be one of the most important performance measures due to its impact on clinical practice, on healthcare costs, and on patients' morbidity [17]. Albeit, Salemi et al. in a recent cohort study, highlighted that no excess risk of respiratory morbidities, neonatal sepsis, and NICU admission was associated with the elective induction of delivery at 37-38 weeks of gestation with respect to infants expectantly managed and delivered at 39-40 weeks. Only the early cesarean section significantly supports 13 to 66% increase of several adverse outcomes occurrence in neonates, when compared with the full-term group [18]. In Italy, the practice of induction of labor and elective cesarean section are investigated separately. The Europeristat project has detected that inductions were performed in 15.9% of the total births in Italy in 2010, but data do not allow a separate evaluation of the induction performed in early term deliveries [19]. Therefore, there is a need to standardize definitions and evaluation methods of elective early term deliveries, in order to improve the validity of comparisons among countries. Moreover, as reported by Clark et al. [17], the calculation of this indicator is prone to errors particularly related to the selection of clinical indications for inclusion/exclusion of elective delivery. The result of the PC-02 indicator is in line with the Italian national figure of cesarean section rate that is [20]. Several reasons may be related to the high cesarean rate found in our study: first of all, the cesarean section high rate may be associated with the changed attitude of the physicians aimed at reducing exposure to malpractice litigation [21]. Indeed, in our results cesarean delivery was significantly more frequent in pregnancy with AROM, meconium-stained amniotic fluid and newborns with > 4000 g birth weight, that are not medical indications for cesarean section, suggesting therefore that choice to perform cesarean section was related to a cautious approach of physicians to delivery. Moreover, women's choice of cesarean delivery is increasing since it is perceived as an effective procedure to avoid pain and the other disadvantages associated with vaginal delivery [22,23]. Finally, cesarean section has been reported to be more frequently performed in the private healthcare sector than in the public one [24][25][26] and one of the main reasons for this is that cesarean sections receive higher reimbursement than normal vaginal births, regardless of the risks to women [27] and private healthcare facilities are commonly involved in deliveries in Southern Italy [24,28,29]. It is well known that cesarean sections should be discouraged because they create serious complications for mothers such as infections [30], obstetric haemorrhage [31], uterine rupture, stillbirth and pre-term birth [32]. 
Also, there is emerging evidence that children born by cesarean section have an increased risk of altered immune development, allergy and asthma, and reduced intestinal gut microbiome diversity [32]. Moreover, as our results have shown and in accordance with previous studies, cesarean sections have a negative effect on exclusive breast milk feeding, probably because they limit the practice of rooming-in, thus delaying mother-child interaction [33,34]. Thus, cesarean section should not be considered an alternative to vaginal delivery, and should be viewed with caution. Indeed, among the measures taken to discourage unnecessary cesarean sections, several countries have also narrowed the gap in hospital payment between a cesarean section and a vaginal birth [35]. The exclusive breast milk feeding indicator achieved the best performance (81%) in our study. In particular, our result is higher than the set goal (75%) [16], and higher than the value found in 2015 in Italy (77%) [36]. However, it is well known that in Italy there is a tendency to wean children from breastfeeding at an early age, on average at 4 months [29], although, as previously reported, the start of exclusive breast milk feeding during hospitalization positively influences the continuation of this practice in the following months [33,34,37]. Health professionals' approaches to breastfeeding during antenatal care are crucial to promote exclusive breast milk feeding. Our results highlight that exclusive breast milk feeding significantly increased in women who underwent prenatal tests; this practice probably encouraged educational activities within the physician-patient relationship, including those regarding breastfeeding. Also, avoiding in-hospital formula supplementation appears to be a key step for breastfeeding success, together with the appropriate implementation of the other Baby Friendly Hospital Initiative (BFHI) steps [38]. As highlighted by recent meta-analyses, the step-by-step BFHI approach, requiring implementation at the maternity ward followed by home and family support through counselling, appears to be crucial for breastfeeding success in expectant and/or nursing mothers [39,40]. The significantly lower adherence to PC-05 among newborns with a birth weight < 2500 g, as well as among women with previous cesarean section, AROM and, as previously mentioned, cesarean delivery, is an important concern arising from our results. These findings suggest the involvement of several non-clinical factors that would seem to be attributable to an overly cautious attitude of physicians concerning patient management, suggesting the need for improvement in the training of healthcare professionals. Antenatal steroids are intended to reduce the burden of prematurity-related illness (respiratory distress, intraventricular haemorrhage, necrotizing enterocolitis, and patent ductus arteriosus) in preterm newborns. The prevalence of early preterm infants revealed by our study (1.8%) is in line with previous data [41]; nevertheless, there are concerns about international comparisons for this quality measure due to differences in registration criteria and definitions across countries [42]. This indicator was satisfied in only 87.1% of the eligible patients, highlighting a suboptimal process of care that might have led to increased morbidity or mortality.
Indeed, in our study, when the PC-03 indicator was not satisfied, newborns more frequently experienced mechanical ventilation compared with newborns whose mother received antenatal steroids; however, these results are to be interpreted with caution, because of the limited number of included patients. As underlined in a recently published meta-analysis, there is continuing uncertainty about the most appropriate method to calculate the healthcare-associated bloodstream infections burden in NICUs. Cumulative incidence of these severe complications is reported to be variable from 2.9 to 22.8% [43]. Only two healthcare-associated bloodstream infections occurred during 2016 (0.4%) and this result allows us to interpret compliance with the PC-04 indicator as to be satisfied, although also in this case a cautious interpretation of the results is needed regarding the above mentioned concerns in the calculation of this indicator. Although the application of JCAHO PC quality indicators was feasible and intuitive, results of this study should be evaluated in light of potential limitations, considering the fragmentary availability of required data. First, the poor comparability among multiple classification systems is the most substantial barrier that we met. Gilbert et al. suggested that improvement of performance's quality depends on the improvement of the accuracy of data recording and its transparency [1]. Second, patients were recruited from a hospital located in Southern Italy, and may not be representative of the entire country. Third, most of the previous studies were conducted on large numbers of hospitals and therefore were based on aggregated data. Instead, by focusing on one hospital, our results were derived from a smaller number of patients, but detailed information was gathered from each of them. In conclusion, our findings revealed significant deficiencies in the adherence to recommended processes of perinatal care and, consistently with previous studies conducted by some of us to estimate the adherence to evidence-based processes of care in several settings [7,44,45], suggest that it is essential to increase efforts to implement evaluation processes that reflect the healthcare quality based on current evidence and related practice guidelines. The application of the JCAHO PC indicators has demonstrated to be feasible, intuitive and useful to measure perinatal hospital performance, and, although the poor comparability among multiple available quality measures represents a barrier, these performance metrics can be reliably used within an institution, thus enabling comparisons of performance over time, particularly after the implementation of quality improvement interventions. Additional files Additional file 1: Quality indicators for perinatal care improvement. Methodology of indicators calculated specified in the manual for JCAHO measures. (DOCX 17 kb) Additional file 2: Overall adherence to JCHAO indicators and according to several maternal, pregnancy, prenatal care and neonatal characteristics. Frequency distribution of maternal, pregnancy and prenatal care characteristics according to "Elective delivery", "Cesarean section" and "Exclusive breast milk feeding" rates. 
(DOCX 34 kb) Abbreviations Age.Na.S: Agenzia Nazionale per i Servizi Sanitari Regionali (National Agency for Regional Health Services); AROM: Artificial rupture of membranes; CI: Confidence interval; IUGR: Intrauterine growth restriction; JCAHO: Joint Commission on Accreditation of Healthcare Organizations; NICU: Neonatal intensive care unit; NQF: National Quality Forum; NTSV: Nulliparous women with a Term, Singleton baby in a Vertex position; OR: Odds ratio; PC: Perinatal care; PNE: Piano Nazionale Esiti (Outcomes National Plan); PROM: Premature rupture of membranes; RDS: Respiratory distress syndrome; SGA: Small for gestational age; SROM: Spontaneous rupture of membranes
Saturation and hysteresis effects in ionospheric modification experiments observed by the CUTLASS and EISCAT radars The results of high latitude ionospheric modification experiments utilising the EISCAT heating facility at Tromsø are presented. As a result of the interaction between the high power pump waves and upper hybrid waves in the ionosphere, field-aligned electron density irregularities are artificially excited. Observations of these structures with the CUTLASS coherent HF radars and the EISCAT incoherent UHF radar exhibit hysteresis effects as the heater output power is varied. These are explained in terms of the two-stage mechanism which leads to the growth of the irregularities. Experiments which involve preconditioning of the ionosphere also indicate that hysteresis could be exploited to maximise the intensity of the field-aligned irregularities, especially where the available heater power is limited. In addition, the saturation of the irregularity amplitude is considered. Although the rate of irregularity growth becomes less rapid at high heater powers, it does not seem to fully saturate, indicating that the amplification would continue beyond the capabilities of the Tromsø heater, currently the most powerful of its kind. It is shown that the CUTLASS radars are sensitive to irregularities produced by very low heater powers (effective radiated powers < 4 MW). This fact is discussed from the perspective of a new heating facility, SPEAR, located on Spitzbergen and capable of transmitting high frequency radio waves with an effective radiated power ∼10% of that of the Tromsø heater (28 MW).

Correspondence to: D. M. Wright (darren.wright@ion.le.ac.uk)

Introduction Artificial ionospheric modification was first discovered in the 1930s when large broadcasting stations started transmitting high power radio signals. The effect was first noted when the modulation of Radio Luxembourg could be heard in the background of other radio signals which passed through the region of the ionosphere illuminated by its beam. The Luxembourg effect, as it was first called, was later explained as cross-modulation between the two radio signals caused by the high-power transmission modifying the radio propagation characteristics of the ionosphere for the other radio path (Tellegen, 1933; Bailey and Martyn, 1934). However, the first purpose-built radio frequency (RF) ionospheric "heaters" were not constructed until the early 1970s. These were then employed to perform experiments in the natural plasma laboratory provided by the ionospheric medium.
The first reported ionospheric modification (heating) experiments, performed at Platteville, Colorado, revealed that the transmission of a high power pump wave led to the generation of field-aligned electron density irregularities (FAIs; Fialer, 1974; Minkoff et al., 1974). These occur as a result of coupling between the electromagnetic heater wave and upper hybrid waves at the upper hybrid resonance height. Since X-mode polarised radio waves reflect below this height, the irregularities are only generated by O-mode radiation. These structures can then act as intense targets for HF coherent scatter radars (Robinson et al., 1997). Detailed discussions of these heater-induced phenomena are given in the reviews by Robinson (1989) and Stubbe (1996). In the F-region, it is the anomalous self absorption of the O-mode heater wave at the upper hybrid altitude that gives rise to the enhancement in electron temperature which gives heating its name. Throughout the region of electron temperature enhancement the electron density mainly increases due to the temperature dependence of the recombination, although on a longer time-scale than that exhibited by the electron temperature. A number of theories relating to the stimulation of artificial FAIs by a high power pump beam have been put forward over the years. One invokes the thermal oscillating two-stream instability (TOTSI; e.g. Dysthe et al., 1983) for the creation and rapid growth of the FAIs. More recently, Gurevich et al. (1995) proposed an alternative theory, which is not adopted in this study. In addition, a drawback of the theory of Gurevich et al. (1995) is that it predicts abnormally high increases in electron temperature caused by the heater despite a relatively small electron density perturbation (Borisov et al., 2005). These large enhancements in electron temperature have not so far been detected. Initially, the TOTSI causes a linear conversion of electromagnetic pump wave energy into upper hybrid waves (Vaskov and Gurevich, 1977; Dysthe et al., 1983; Robinson, 1988). This coupling requires the presence of plasma density gradients (pre-existing FAIs) and leads to an increase in the FAI amplitude. Once this amplitude exceeds a threshold value (typically after a few milliseconds) the interaction becomes nonlinear and the irregularity amplitude increases explosively. One hypothesis suggests that it is the anomalous absorption of the pump itself (i.e. the conversion of the electromagnetic pump wave into high frequency electrostatic waves at the upper hybrid height) as a result of the interaction with the heater-induced irregularities, that ultimately limits the growth of the FAIs (e.g. Robinson, 2002). Under these circumstances, the relationship between the pump power, P, and the level of anomalous absorption is given by Eq. (1), where P1 and P2, respectively, are the required power thresholds for the initial and explosive stages of FAI growth, a further term gives the level of anomalous absorption before the heater was activated, and a is a factor relating to the field-parallel scale length of the FAIs (see Robinson, 1989). The two terms in Eq. (1) represent the initial and explosive instabilities. Since anomalous absorption ∝ n², where n is the FAI amplitude, irregularity saturation as governed by Eq. (1) is shown as the solid curve in Fig. 1, in which the anomalous absorption is plotted as a function of P. Grach et al.
(1978) first postulated that a thermal parametric instability such as the TOTSI should be expected to exhibit hysteresis effects. The existence of hysteresis was confirmed by the ionospheric modification experiments reported by Erukimov et al. (1978) and subsequently by Stubbe et al. (1982) and Jones et al. (1983). A hysteresis effect occurs in the generation of FAI because the threshold power, Pt (shown in Fig. 1), required for the onset of FAI growth is higher than the critical pump power, Pc, at which the FAI can no longer be sustained and, hence, collapse. The effective threshold power, Pt, is larger than both P1 and P2. So, once the heater power P > Pt, the FAI form explosively and saturation is rapid. This is demonstrated in Fig. 1 by the path ABCD. If the pump power is then steadily reduced, the FAI do not collapse until P < Pc, therefore following path DEFA. This paper will mainly focus on observations of hysteresis effects caused by ionospheric modification experiments using the EISCAT Heater at Tromsø during a campaign in 1997, diagnosed by the CUTLASS high frequency (HF) coherent radars and the EISCAT UHF incoherent scatter radar at Tromsø. The hysteresis effects are known to be related to the generation of FAIs since they do not occur when X-mode polarised heater waves are transmitted which, as noted previously, are not associated with irregularity generation. The CUTLASS data presented here represent direct measurements of the FAI amplitude and are significant for defining the irregularity power thresholds and saturation levels associated with backscatter received by the CUTLASS radar. They have provided a way of estimating the expected performance of the new SPEAR (Space Plasma Exploration by Active Radar; Wright et al., 2000) high power heating facility which has just been deployed at the high latitude location of Spitzbergen in the Svalbard archipelago.

The EISCAT high power heating facility The high power HF heating facility located near Tromsø transmits up to 1 MW of power, which is radiated through one of three 6×6 phased arrays of rhombically broadened crossed dipole antennas. Different radiated frequencies, within the range 4 to 8 MHz, can be achieved by selection of arrays 2 or 3, providing an effective radiated power of 276 MW (M. Rietveld, private communication). However, all of the data presented in this paper were taken with the heater operating in O-mode at a frequency of 4.544 MHz with its beam pointing along the field line. During the experiments in October 1997, specially designed modes permitted the slow variation of the heater's output power in order to test power thresholds, irregularity saturation and ionospheric preconditioning. The maximum effective radiated power (ERP) on these occasions was 155 MW (using 10 out of an available 12 transmitters) radiated with a full width at half maximum beam width of 15°. Further technical information on the EISCAT high power HF heating facility can be found in Rietveld et al. (1993).
Figure 2 shows the fields of view of these radars, whilst operating in their standard mode, on a ground projection, along with the locations of the facilities at Tromsø and on Spitzbergen. The signals returned to the radars have undergone a Bragg-like backscattering process from FAIs in the ionosphere. There is an aspect angle dependence for scattering, which requires that the radio wave k vector is close to orthogonal to the magnetic field. The experiments described here utilise the EISCAT high power HF heating facility at Tromsø, which can generate artificial field-aligned irregularities as described earlier and thus provide a region of backscatter in the CUTLASS fields of view (e.g. Robinson et al., 1997) when backscatter may not already be present. This effect is illustrated schematically as the inset in Fig. 2. The detection of artificial backscatter by HF radar then provides a powerful way of diagnosing plasma processes (e.g. Robinson et al., 1997) and observing geophysical phenomena (e.g. Yeoman et al., 1997).

During the experiments relevant to this paper, the CUTLASS radars employed a range resolution of 15 km, compared to a normal gate length of 45 km. The radars were, as a result, sounding over a reduced field of view compared with that shown in Fig. 2, centred over Tromsø during these experiments. The nearest ranges sounded on the Hankasalmi and Pykkvibaer radars were, respectively, 480 km and 1470 km. Tromsø lies at approximately 900 km from the Hankasalmi radar and is twice as remote from Pykkvibaer. Typically, the time resolution of the radar data presented varies from 1-10 s. Only data from beam 5 of the Hankasalmi radar and beam 15 of Pykkvibaer have been employed, as these beams overlie Tromsø, the location of the EISCAT heater. The high backscatter powers that are characteristic of artificially generated irregularities make it possible to integrate data over such short dwell times, since the signal-to-noise levels are high.

The EISCAT UHF radar

The European Incoherent Scatter (EISCAT) UHF radar (e.g. Rishbeth and Williams, 1985; Rishbeth and van Eyken, 1993) is often operated in support of artificial modification experiments using the Tromsø heater (see Sect. 2.1). The SP-UK-HEAT experiments in October 1997 included UHF radar operation, with the transmit/receive antenna at Ramfjordmoen near Tromsø, Norway, aligned approximately along the local magnetic field direction (geographic azimuth: 183.2°, elevation: 77.2°). Four pulse schemes were transmitted: long pulse, alternating code and two power profiles; only data from the former are included here. The long pulse scheme provides observations of electron density, ion and electron temperature and line-of-sight ion velocity over 21 range gates along the Tromsø beam, from approximately 140 to 600 km altitude with a gate separation of ∼22 km in altitude.

Radar observations of hysteresis

On 6 October 1997 an experiment was undertaken in which the heater output power was, for a number of cycles, increased from 0% to 100% output in 2.5% steps and then decreased again to 0% in the same fashion. Each step was maintained for 9 s, so that the entire cycle took exactly 12 min to complete. The upper and middle panels of Fig. 3 show colour-coded range-time-intensity plots of backscatter power from the Hankasalmi and Pykkvibaer radars, respectively, with observations from both HF radars revealing the artificial irregularities generated by the heater over the course of the experiment.
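As a quick consistency check on the radar geometry and the power-stepping scheme just described, the sketch below (Python; the variable names are ours, and gate numbering is assumed to start at zero at the nearest range, while the up-leg and down-leg are each assumed to comprise 40 steps of 2.5%) reproduces the 12-min cycle duration and the slant ranges of the gates quoted in the following paragraph.

```python
# Illustrative consistency checks on the numbers quoted in the text:
# 15-km gates, nearest ranges of 480 and 1470 km, and 2.5% power steps
# held for 9 s each. Names and conventions are ours.

STEP_PERCENT = 2.5
STEP_DURATION_S = 9.0

def cycle_duration_minutes():
    """0% -> 100% in 2.5% steps and back again, one dwell per step."""
    steps_up = int(100 / STEP_PERCENT)      # 40 steps
    steps_down = steps_up                   # 40 steps
    return (steps_up + steps_down) * STEP_DURATION_S / 60.0

def gate_slant_range_km(nearest_range_km, gate_number, gate_length_km=15.0):
    """Slant range of a given range gate from the nearest sounded range."""
    return nearest_range_km + gate_number * gate_length_km

print(cycle_duration_minutes())          # 12.0 min, as stated
print(gate_slant_range_km(480.0, 28))    # 900 km  (Hankasalmi gate 28)
print(gate_slant_range_km(1470.0, 37))   # 2025 km (Pykkvibaer gate 37)
```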
The data shown in Fig. 3 were measured at a sounding frequency in the range 19.0-20.0 MHz and a time resolution of 1-2 s for the Hankasalmi radar (top panel), whilst Pykkvibaer (middle panel) transmitted in the range 12.0-13.0 MHz with a time resolution of 10 s. The lower panel shows the heater output power over the same interval. Five of the total of six full cycles were preceded by extended periods during which the heater was off, to remove the possibility of any ionospheric preconditioning (see Sect. 3.2) being caused by the heater. The Pykkvibaer radar was not switched into its high time resolution mode until half way through the second cycle. The intense backscatter from Hankasalmi was centred on range gate 28 (900 km distant) and that from Pykkvibaer was centred on range gate 37 (2025 km from the radar). The latter observations exhibit a range ambiguity from the expected ∼1800 km ground range. This is related to the fact that the range calculation algorithm is less accurate over the 1.5 hop ray path geometry required for Pykkvibaer to observe scatter over Tromsø. In contrast, Hankasalmi acquires scatter over Tromsø over a 0.5 hop path. These observations are commensurate with the findings of Yeoman et al. (2001), who employed artificial backscatter to make a range evaluation for the CUTLASS radars. The extra distance travelled by the Pykkvibaer radar signal accounts for the lower backscatter powers received by this radar, which are roughly 10-20 dB down on those received by the Hankasalmi radar.

Figure 4 presents the Hankasalmi HF radar data from the first cycle only, from 12:24 to 12:36 UT. Panel (a) shows the time series of backscatter power from range gate 28 of beam 5, which is in the middle of the patch of artificial scatter. Panel (b) illustrates the power-stepping heater cycle at the same time. By plotting the radar backscatter power as a function of heater output for the increasing and decreasing parts of the power cycle (Fig. 4c), a difference in the received radar power is immediately apparent. During the power-increasing part of the heater cycle the backscatter power increases rapidly until the heater output reaches 20% and thereafter continues to increase more steadily, approaching a saturation level of about 40 dB. On the way back down, the two-stage backscatter power fall-off is less rapid than the equivalent steps on the way up to maximum heater output. The result of this is that right down to the lowest power transmitted by the heater (2.5%, equivalent to an ERP of only 3.9 MW) the difference in backscatter power caused by the hysteresis, P_d−u, is ∼20 dB.

Figure 5 shows the equivalent plots to Fig. 4c for all six cycles in the experiment. A similar hysteresis signature can be seen in all panels (a-f), although there appears to be less distinction between the up- and down-going parts of the cycle at heater powers greater than 20% in panels (b-f). The smallest separation P_d−u at low heater powers is observed to occur for the second cycle (Fig. 5b). However, it is likely that, since there was no recovery time (i.e. an extended heater-off interval) following the first cycle, the second cycle was affected by ionospheric preconditioning. This will be discussed later.
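The hysteresis measure P_d−u used above is simply the difference, at a given heater output, between the backscatter power recorded on the decreasing (down) leg and that recorded on the increasing (up) leg of the cycle. A minimal sketch of that bookkeeping is given below; the numerical values are invented placeholders rather than data from Fig. 4.

```python
# Sketch of how P_(d-u) can be formed from up-leg and down-leg backscatter
# power measurements (dB) taken at the same heater output steps. The
# values below are placeholders, for illustration only.

heater_steps_percent = [2.5, 5.0, 10.0, 20.0, 50.0, 100.0]
power_up_db   = [ 5.0, 12.0, 25.0, 35.0, 38.0, 40.0]   # increasing leg
power_down_db = [25.0, 28.0, 33.0, 37.0, 39.0, 40.0]   # decreasing leg

p_d_minus_u = {
    step: down - up
    for step, up, down in zip(heater_steps_percent, power_up_db, power_down_db)
}

for step, diff in p_d_minus_u.items():
    print(f"heater output {step:5.1f}%  P_d-u = {diff:4.1f} dB")
# The largest differences appear at the lowest heater outputs,
# mirroring the behaviour described for Fig. 4c.
```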
Simultaneous field-aligned EISCAT UHF measurements of the electron temperature (T_e) and density (N_e) in the modified ionosphere are illustrated in Fig. 6 as a function of altitude for the UT period that covers all six heater cycles. Large enhancements of the electron temperature correspond to the intervals when the heater was operating and, moreover, the temperature can be seen to increase with the heater power, leading to changes in the electron temperature, ΔT_e/T_e, of ∼100% at maximum heater output. The interaction altitude, where the pump wave couples to upper hybrid waves, was derived from the location of "overshoot" effects observed in the ion line measurements made by the UHF radar and was found to vary from 180-200 km for this interval. There is, as has been demonstrated by Jones et al. (1986) and Robinson (1989), a linear relationship between the electron temperature enhancement ΔT_e of the modified ionosphere and the level of anomalous absorption, and hence the irregularity amplitude. Figure 7 shows the variation of ΔT_e (normalised with respect to the heater output power) as a function of heater output for the first cycle in our interval. It should be noted that the response time for changes in T_e is of the same order as the time between steps in the heater power cycles (Stocker et al., 1992) and, as a result, a slight time offset in the overall response throughout the heater cycle is expected to exist. However, again a hysteresis effect is clearly seen, where the electron temperature corresponding to the final steps in the heater cycle is far higher than the value of the temperature during the initial heater steps. There is also some evidence to suggest that hysteresis is exhibited by the electron densities (not shown), where the maximum value of ΔN_e/N_e was ∼10%. However, these relatively small changes are masked by large changes in the ambient electron density. The electron temperatures on the up- and down-going parts of the heater cycle only differ significantly for heater powers below 30-40%, where the CUTLASS backscatter power and irregularity amplitude are changing the fastest.

Ionospheric preconditioning

Once the ionosphere has been modified it may remain in this state for some time after the heater has been turned off. This is most clearly apparent on occasions when the artificial FAI take several minutes to decay. In order for unbiased heating effects to be observed it is often necessary to leave long off periods (several minutes) between intervals of heater operation, to give the ionosphere time to return to its unmodified state. Changing the ionospheric conditions prior to another type of experiment is known as preconditioning. A study of CUTLASS backscatter from heater-generated irregularities by Bond (1997) demonstrated that the decay time of FAI (and hence the duration of this preconditioning) was dependent on the time of day and was observed to vary from ∼50 s in the early afternoon to ∼200 s towards dusk.
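The decay times quoted above set how long the heater must remain off before the ionosphere can be treated as unmodified. Assuming, purely for illustration, a simple exponential decay of the irregularity amplitude with the time constants reported by Bond (1997), the sketch below estimates the off-time needed for the irregularities to fall to 1% of their initial level; the function and parameter names are ours, and the exponential form is an assumption, not a result from the paper.

```python
import math

def off_time_for_decay(tau_s, residual_fraction=0.01):
    """Time for an exponentially decaying quantity with time constant
    tau_s to fall to the given fraction of its initial value."""
    return tau_s * math.log(1.0 / residual_fraction)

for tau in (50.0, 200.0):   # early-afternoon and dusk decay times (s)
    t = off_time_for_decay(tau)
    print(f"tau = {tau:5.1f} s  ->  ~{t / 60.0:.1f} min to reach 1%")
# Roughly 4 min for tau = 50 s and 15 min for tau = 200 s, consistent
# with the need for heater-off gaps of several minutes.
```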
On 7 October 1997, an experiment was performed to investigate the effects of irregularity saturation, by heating the ionosphere at low power levels, and of preconditioning, by using relatively short-lived high power pulses prior to longer, low power heater cycles. The backscatter power received by the CUTLASS Hankasalmi radar during this experiment is reproduced in the middle panel of Fig. 8, in which a series of patches of artificial backscatter can be seen. The upper panel of the figure shows a time series of the backscatter power at range gate 30, located at the centre of the heated patch. Indicated in the lower panel is the heater power. From 12:46 to 13:04 UT the underlying cycle was 1 min of heater on followed by 2 min of heater off. These cycles were arranged in pairs, with each transmitting the same final output power. However, for the first 20 s of the second cycle in each pair the heater transmitted at full power. This shall hereafter be termed the "seed pulse". The final output power in each of the three pairs was, in turn, 2.5%, 5% and 10% of full power, as is evident in Fig. 8. The whole process was subsequently repeated from 13:07 to 13:31 UT, but now using an underlying 2 min on, 2 min off heater cycle (see Fig. 8). It is evident from the upper panel of Fig. 8 that the backscatter power observed by the Hankasalmi radar was considerably higher in those cycles with an initial seed pulse. Furthermore, in the absence of the seed pulse, a larger backscatter power was received during a 2-min heater-on period than for a 1-min heater-on period. This implies that at heater powers below 10% irregularity saturation required longer than 60 s in each case. The increase in backscatter power, P_seed, obtained by the use of the seed pulse, compared to that without, is indicated in Fig. 9. The upper panel of the figure corresponds to the 1-min on, 1-min off cycles and the lower panel to the 2-min on, 2-min off cycles. For each pair of cycles corresponding to each final heater power (i.e. 2.5, 5 and 10%), the average backscatter power from Hankasalmi is plotted as a cross for the cycle containing a seed pulse and a plus sign for the cycle without. At low heater powers such as those employed in this experiment (0-10%) the effects of using the seed pulse (ionospheric preconditioning) are dramatic, and the duration for which the heater is active is also significant for the saturation of irregularities created by the heater. P_seed was as large as 25 dB when using a seed pulse followed by the short-duration heating interval at 2.5% output.
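The quantity P_seed reported above is, for each final heater power, the difference between the average backscatter power of the seeded cycle and that of its unseeded partner. A minimal sketch of that comparison is shown below; the numbers are illustrative placeholders rather than the values plotted in Fig. 9.

```python
# Sketch of the seed-pulse comparison described above. For each final
# heater output, the average backscatter power (dB) of the cycle that
# began with a full-power seed pulse is compared with that of the
# unseeded cycle at the same output. Placeholder values only.

final_power_percent   = [2.5, 5.0, 10.0]
avg_power_seeded_db   = [30.0, 32.0, 34.0]
avg_power_unseeded_db = [ 5.0, 12.0, 20.0]

for pct, seeded, unseeded in zip(final_power_percent,
                                 avg_power_seeded_db,
                                 avg_power_unseeded_db):
    p_seed = seeded - unseeded
    print(f"final heater output {pct:4.1f}%  P_seed = {p_seed:4.1f} dB")
# The preconditioning gain is largest at the lowest final output,
# qualitatively matching the ~25 dB quoted for the 2.5% case.
```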
Discussion and conclusion

This paper has presented the results of ionospheric modification experiments on two consecutive days in October 1997 employing the high power heating facility at Tromsø. The CUTLASS HF coherent radars received backscatter from irregularities artificially generated by the heater. These irregularities provide a very coherent and intense target for the radars, facilitating high spatial and temporal resolution diagnosis of the plasma physical processes that lead to their existence.

Fig. 9. The difference in average backscatter power received by the Hankasalmi radar as a function of heater output when the ionosphere is (x) and is not (+) preconditioned by a preceding high power pulse from the heater. The upper (lower) panel shows data from 1 (2)-min on, 1 (2)-min off heater cycles.

These experiments provide important insight into the growth and saturation of artificially stimulated FAI. A clear hysteresis effect is observed between the up- and down-going parts of the heater cycle, the difference in radar backscatter powers being most marked at low heater powers, where P_d−u is typically 20 dB (a factor of 100) more intense on the down-leg. This is consistent with the theoretical curve shown in Fig. 1. As expected, a similar effect is also observed in the EISCAT measurements of electron temperature, which is modified by the heater interaction with the ionospheric plasma. Since the electron temperature is known to be proportional to the level of anomalous absorption (and hence related to the irregularity amplitude), the curve in Fig. 1 should also be applicable to the EISCAT observations. The growth of the irregularities indicated by the CUTLASS measurements (see Fig. 4) occurs in two stages. For heater powers less than ∼20% there is a very rapid increase in backscatter power, and as the heater power is increased further the irregularity amplitude grows more steadily, appearing to approach a saturation value. However, in the observations presented here, the CUTLASS backscatter power never appears to saturate fully and presumably would continue to rise if the heater power could be increased further. It has been hypothesised that the extinction (self-absorption) of the heater itself by interaction with its own FAI would limit the amplitude of the FAI, and therefore the backscatter powers observed by the radars. The hysteresis effects observed are the result of a form of preconditioning. This is borne out by the ionospheric seeding experiments, the results of which are shown in Figs. 8 and 9. Again, it is consistently observed that after a short high-power pulse much higher backscatter powers can be achieved for a given heater output level. This is particularly pronounced for low heater powers, where the backscatter observed is as much as 20 dB more intense than for simply transmitting heater radiation at low powers. This indicates that, although a power threshold, P_t, has to be exceeded for FAI growth to occur (see Fig. 1), once the irregularities exist they can be maintained with much lower heater powers, as long as the power remains greater than P_c. This might be important where available heater power is limited.

One important motivation for these experiments was to evaluate the expected performance of the new heating facility, SPEAR, which is located almost 10° further north than Tromsø, in Svalbard (78° N geographic). Currently, the maximum power output of SPEAR is only 10% of that of the Tromsø heater. In addition, SPEAR is located ∼2000 km from each of the CUTLASS radars (1800 km from Hankasalmi and 2000 km from Pykkvibaer). This is similar to the separation of the Pykkvibaer radar from Tromsø (about twice the distance of the Hankasalmi-Tromsø path). Hence observations of Tromsø artificial scatter from Pykkvibaer can provide an initial estimate of the levels of backscatter power that might be obtained using SPEAR.
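Several of the comparisons in this discussion are quoted interchangeably in decibels and as linear factors. For reference, the short sketch below (Python; ours, purely for arithmetic illustration) performs those conversions for the figures used here: the ∼20 dB hysteresis difference, the 10-20 dB Pykkvibaer deficit relative to Hankasalmi, and SPEAR's output of 10% of the Tromsø heater's.

```python
import math

def db_to_factor(db):
    """Convert a power ratio in dB to a linear factor."""
    return 10.0 ** (db / 10.0)

def factor_to_db(factor):
    """Convert a linear power factor to dB."""
    return 10.0 * math.log10(factor)

print(db_to_factor(20.0))   # 100.0 : the ~20 dB hysteresis difference
print(db_to_factor(10.0))   # 10.0  : lower end of the Pykkvibaer deficit
print(factor_to_db(0.10))   # -10.0 dB : SPEAR output at 10% of Tromso's
```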
As is evident from Fig. 3, the Pykkvibaer radar certainly receives strong backscatter from artificial FAI over Tromsø; however, this is some 10-20 dB weaker than that observed by the Hankasalmi radar. This can be accounted for by the extra signal loss over the longer (1.5 hop) radio path. But how sensitive is the Pykkvibaer radar to irregularities generated by the Tromsø heater at lower powers? This question can be addressed by examination of Fig. 10, which shows the time series of backscatter power received by Pykkvibaer from the centre of the heated ionospheric volume. In this case the data were recorded from 14:35 to 14:39 UT on 21 October 1999, another example of low power heating. This interval was selected since the radar data had a time resolution of 1 s, which is higher than those shown in Fig. 3. The dashed sloping line overlaid on this figure indicates the power output of the heater, given by the scale on the right-hand side of the plot. Figure 10 demonstrates that the Pykkvibaer radar detected artificial irregularities certainly for a heater output of 5%, and possibly at even lower powers. Since the Tromsø heater's experimental configuration on this date was such that the maximum ERP was 199 MW, this implies that the CUTLASS radars should be sensitive to irregularities generated with heater ERPs of 10 MW or less. The current configuration of the SPEAR high power facility enables an ERP of ∼28 MW of RF radiation to be transmitted. A proposed future developmental phase would see this increased to about 68 MW.

Although the radar observations indicate that the irregularity amplitude was still not fully saturated at the maximum power of the Tromsø heater (currently the most powerful facility of this type), they also show that FAI can be generated and sustained for much weaker heater powers (less than 10 MW ERP). This is therefore very promising for future experiments employing the SPEAR radar in its unique location on Svalbard.

Fig. 2. A map illustrating the locations of the various instruments relevant to this study. Inset: a schematic representation of field-aligned irregularity generation by a high power heater and a half-hop radio path of an HF radar which receives backscatter from these structures.
The involvement of Bcl-2 family proteins in AKT-regulated cell survival in cisplatin-resistant epithelial ovarian cancer

This article has been corrected. Correction in: Oncotarget. 2020; 11:488-489.

Many studies involving patients with cisplatin-resistant ovarian cancer have shown that AKT activation leads to inhibition of apoptosis. The aim of this study was to examine the potential involvement of the Bcl-2 family proteins in AKT-regulated cell survival in response to cisplatin treatment. Cisplatin-sensitive (PEO1) and cisplatin-resistant (PEO4) cells were taken from ascites of patients with ovarian cancer before cisplatin treatment and after development of chemoresistance. It was found that cisplatin treatment activated the AKT signaling pathway and promoted cell proliferation in cisplatin-resistant EOC cells. When AKT translocated into the nucleus of cisplatin-resistant ovarian cancer cells, it was phosphorylated at S473 by DNA-PK. The activated AKT (pAKT-S473) in these cells inhibited the death signal induced by cisplatin, thereby inhibiting cisplatin-mediated apoptosis. Results from this study showed that the combination of cisplatin, the DNA-PK inhibitor NU7441, and the AKT inhibitor TCN can overcome drug resistance, increase apoptosis, and re-sensitize PEO4 cells to cisplatin treatment. A decrease in apoptotic activity was seen in PEO4 cells when Bad was downregulated by siRNA, which indicated that Bad promotes apoptosis in PEO4 cells. Use of the Bcl-2 inhibitor ABT-737 showed that ABT-737 binds to Bcl-2 but not Mcl-1 and releases Bax/Bak, which leads to cell apoptosis. The combination of ABT-737 and cisplatin leads to a significant increase in the death of PEO1 and PEO4 cells. Altogether, these results indicate that Bcl-2 family proteins are regulators of drug resistance. The combination of cisplatin and a Bcl-2 family protein inhibitor could be a strategy for the treatment of cisplatin-resistant ovarian cancer.

INTRODUCTION

Ovarian cancer is a devastating malignancy, causing more deaths in the female US population than any other gynecologic cancer [1]. Of the three subgroups (epithelial, stromal, and germ cell tumors), the majority of ovarian cancer cases are classified as epithelial carcinomas [1,2]. Epithelial ovarian cancer (EOC) will already be at an advanced stage at first detection in approximately 70% of diagnoses, and of these, 30% of women will survive 5 years [1]. Eighty percent of EOC patients will soon relapse after first-line chemotherapy [3]. This is in part due to the difficulty in diagnosis and treatment, particularly the rising concern of drug resistance. The median progression-free survival (PFS) is 18 months before most of these patients relapse [2]. Those with tumors that progress during, or recur within 6 months of, treatment are considered platinum-resistant [3]. Overall response rates with other treatments in these platinum-resistant patients are only 10-25%, with relatively short durations of response [2]. Treatment failure is thought to be attributable to drug resistance in over 90% of cases with metastatic malignancy [2]. It can arise from multiple factors such as pharmacokinetic interactions, the tumor micro-environment, and, most likely, cancer-cell-specific abnormalities [2]. Therefore, there is a dire need to better elucidate the mechanisms behind ovarian cancer drug resistance and how to overcome them. While prognosis is initially determined by the extent of initial surgical resection, chemotherapy is the cornerstone of maintaining progression-free survival.
Current standard drug treatment includes a combination regimen of paclitaxel and a platinum compound such as cisplatin [2]. The mechanism of action of cisplatin is DNA double-strand damage, which induces phosphatidylinositol 3-kinase (PI3K)/AKT activation downstream of DNA-PK [2,4]. Hyper-activation of AKT is often seen in cisplatin-resistant epithelial ovarian cancers through the inhibition of p53 phosphorylation [5,6]. Lee et al. demonstrated that the addition of an AKT inhibitor enhanced platinum-induced apoptosis in EOC cell lines [7]. Cisplatin-mediated cytotoxicity creates a downstream effect on a variety of molecular factors, including activation of p53 and subsequent modulation of Bcl-2 family proteins, including pro-apoptotic proteins such as BAX and BAK and anti-apoptotic proteins such as Bcl-2 and Mcl-1 [2,8,9]. Interestingly, AKT inhibition resulted in downregulation of Bcl-2 and upregulation of Mcl-1, suggesting a compensatory mechanism. Thus a more selective therapy was investigated by van Delft et al., who utilized a known Bcl-2 inhibitor, ABT-737, in mouse lymphoma cells [10]. Conversely, Bcl-2 can block p53-mediated apoptosis and is also a potential predictor of cisplatin resistance in EOC [11]. Kassim et al. showed that overexpression of Bcl-2 correlated with decreased overall survival in ovarian cancer patients, supporting the prognostic value of Bcl-2 in aggressive EOC [11]. Therefore, there may be abundant potential for Bcl-2 as a marker to aid in diagnosis and as a target for overcoming cisplatin resistance. The aim of this study was to further examine the role of the Bcl-2 family in AKT-regulated cell survival by evaluating how various known inhibitor compounds, such as NU7441 (a DNA-PK inhibitor), TCN (an AKT inhibitor), and ABT-737 (a Bcl-2 inhibitor), affect cell apoptosis in cisplatin-sensitive and cisplatin-resistant EOC cells. Modulating this signaling pathway may help reverse drug resistance and reduce toxicity in these platinum-resistant patients, leading to novel EOC treatment methods.

Caspase activity remains constant under various drug treatments

Caspase 8 and caspase 9 cleavage activation are crucial for the extrinsic and intrinsic apoptosis pathways, respectively [12]. Both caspase 8 and caspase 9 are important members of the cysteine aspartic acid protease family. Upon stimulation, pro-caspase 9 (47 kDa) associates with cytochrome c, forming the apoptosome complex that processes pro-caspase 9 into an active fragment (35 kDa or 17 kDa) [12]. Only the cleaved form of caspase 9 can further process other caspase members, including caspase 3 and caspase 7, to initiate the caspase cascade and cell apoptosis. Caspase 8 is also activated by cleavage to induce apoptosis. In both PEO1 and PEO4 cells, caspase 8 did not change significantly in response to various drug treatments, including 10 µM TCN, 10 µM NU7441 and 25 µM cisplatin (Figure 1). Similarly, expression of full-length caspase 9 was not affected by drug treatment. However, the active cleaved form of caspase 9 (caspase 9s) was detected in PEO1 cells but not in PEO4 cells, and this was elevated in the presence of cisplatin. Although these results showed no clear role for caspase-8 or -9 in apoptosis in the response of PEO1 and PEO4 cells to cisplatin, they are downstream mediators of apoptosis and may be activated following treatment for a longer period (e.g. 24 hrs).
Therefore, the expression of the Bcl-2 family proteins, which are upstream of these caspases, was next examined (Table 1).

Figure 1 legend (fragment): ...were prepared and resolved by 12% SDS-PAGE gels as described in Materials and Methods. The blots were probed with anti-caspase 8/9 antibodies and all blots were re-probed for β-tubulin expression as a loading control.

Expression of pro-apoptotic Bcl-2 family proteins is altered differentially in cisplatin-sensitive and -resistant EOC cells in response to treatment with AKT and DNA-PK inhibitors

Bcl-2 family proteins are essential molecules involved in the intrinsic apoptosis pathway and may be downstream targets of AKT activation, therefore contributing to the development of cisplatin resistance. Identification of such targets could provide clues for drug discovery aimed at reversing cisplatin resistance in EOC. Following the activation of the intrinsic apoptotic pathway, formation of Bax and Bak oligomers at the outer mitochondrial membrane increases the permeability of this membrane, permitting the release of pro-apoptotic mediators such as cytochrome c [12]. Therefore Bax and Bak are crucial for this pathway, and whether AKT inhibition in combination with cisplatin therapy stimulates Bax/Bak oligomerization directly or indirectly is still unknown. To investigate the potential role of a Bax/Bak-mediated mechanism underlying reversal of cisplatin resistance in response to AKT inhibition, the expression levels of Bax and Bak were examined using Western blotting (Table 1) (Figure 2A), and the densities of the proteins were measured and normalized to their corresponding beta-tubulin densities (Table 2). In PEO1 cells, Bak expression was found to be decreased when the cells were treated for 8 hrs with 10 µM of the AKT inhibitor TCN, and this decrease was prevented when the cells were co-treated with 25 µM cisplatin. Bak expression was also reduced, although to a greater extent, when cells were treated for 8 hrs with 10 µM of the DNA-PK inhibitor NU7441 alone. This was only partially reversed when cells were treated with 10 µM NU7441 in combination with cisplatin, although the presence of cisplatin suppressed the inhibitory effect exerted by these two inhibitors. In PEO4 cells, TCN and NU7441, either alone or in combination with cisplatin, enhanced Bax expression. The upregulation of Bax may be associated with AKT inhibition and DNA-PK inhibition, implying that Bax rather than Bak could become a potential therapeutic target.

Inhibition of pro-apoptotic BH3-only proteins may also be involved in AKT-promoted cell survival signaling because of their close association with other Bcl-2 family members. Thus the expression of pro-apoptotic BH3-only proteins in PEO1 and PEO4 cells was also evaluated. Expression of Bcl-2 interacting killer (Bik) protein in the cisplatin-sensitive PEO1 cell line was found to be increased following treatment with 25 µM cisplatin (Figure 2B). This increase occurred to a lesser extent when the cells were co-treated with TCN, and was prevented in cells co-treated with NU7441. In comparison, basal expression of Bik was not detected in cisplatin-resistant PEO4 cells. However, 25 µM cisplatin treatment in PEO4 cells induced a low level of Bik expression, and TCN or NU7441 treatments prevented this induction completely. Another pro-apoptotic BH3-only protein, Bid, exhibited completely different expression patterns in PEO1 and PEO4 cells.
Bid expression was not changed in PEO1 cells under any condition, but a slight decrease was seen in PEO4 cells treated with TCN, TCN/cisplatin, NU7441 and NU7441/cisplatin (Figure 2B). Moreover, Bid expression was not stimulated by AKT inhibition directly. From this evidence we assume that Bid could act as a natural killer in cisplatin-sensitive PEO1 cells but not in cisplatin-resistant PEO4 cells, due to its high endogenous expression in PEO1 cells only.

Bim showed distinct expression patterns in PEO1 and PEO4 cells. There are three isoforms of the BH3-only protein Bim: BimEL (23 kDa), BimL (15 kDa) and BimS (12 kDa). BimS is the most cytotoxic isoform and is only transiently expressed during apoptosis [13]. In contrast, the apoptotic activity of the longer isoforms may be inhibited by phosphorylation, and normally BimEL and BimL are sequestered by the dynein motor complex and only released during apoptosis [13]. In PEO1 cells, three bands were observed: the top band representing full-length Bim (BimEL, 23 kDa), while the other two bands indicated the two additional isoforms of Bim (BimL, 15 kDa, and BimS, 12 kDa) (Figure 2B). In PEO4 cells, expression of BimEL was found to be increased after co-treatment with 10 µM TCN and 25 µM cisplatin. In addition, a slight increase in expression of BimL was detected in cells treated with 10 µM TCN and 10 µM NU7441, both alone and in combination with cisplatin. More importantly, the changes in the short forms of Bim between PEO1 and PEO4 cells were striking. The evidence presented here may suggest that Bim in PEO1 cells is poised for apoptosis whereas that in PEO4 cells is not to the same extent.

Puma, a recently identified pro-apoptotic BH3-only Bcl-2 family protein, was also screened in this study [15]. Puma was increased in PEO1 cells treated with 10 µM TCN or 10 µM NU7441 in combination with 25 µM cisplatin (Figure 2B). In contrast, TCN and NU7441 treatments repressed Puma expression in PEO4 cells in the absence of cisplatin, but this inhibitory effect was abrogated when these compounds were combined with cisplatin treatment. AKT inhibition, as a result, may interfere with Puma expression in cisplatin-sensitive PEO1 cells but not in their cisplatin-resistant counterpart, PEO4 cells.

Bad is involved in the responses of EOC cells to cisplatin

The active form of AKT phosphorylates Bad at the Ser136 residue, directly inhibiting cell apoptosis and decreasing chemosensitivity to cisplatin in EOC cells [16]. Therefore Bad protein expression and pBad (Ser136) expression levels before and after AKT inhibition, in the presence or absence of cisplatin, were of particular interest and were investigated to determine the correlation between Bad phosphorylation and AKT inhibition. In cisplatin-sensitive PEO1 cells, treatment with 10 µM TCN exerted a significant inhibitory effect on Bad expression, and co-treatment with 25 µM cisplatin restored the expression to the basal level observed in untreated PEO1 cells (Figure 2C). Moreover, treatment with 10 µM NU7441 mediated a slightly greater inhibitory effect on Bad expression compared with TCN alone, but the combination of cisplatin and NU7441 did not restore Bad expression. On the other hand, pBad (Ser136) expression was greatly increased in PEO1 cells in response to treatment with 25 µM cisplatin. This was not prevented in cells co-treated with 10 µM TCN or 10 µM NU7441, although treatment with these agents alone decreased expression of pBad (Ser136) in these cells.
This indicated that Bad inactivation was not influenced by AKT signaling directly. In other words, AKT inhibition, whether direct inhibition by TCN or indirect inhibition by NU7441, did not interfere with Bad inactivation in cisplatin-sensitive PEO1 cells (Figure 2C). In contrast, Bad expression was increased 3-fold in PEO4 cells treated with 25 µM cisplatin (Figure 2C). Moreover, 10 µM TCN and 10 µM NU7441 treatments also enhanced Bad expression, by 4.8- and 6-fold, respectively. Cisplatin used in the presence of TCN enhanced Bad expression further, with a 5.5-fold increase compared with basal levels. In comparison, Bad protein in PEO4 cells treated with the combination of NU7441 and cisplatin presented a similar increase in expression compared with cells treated with NU7441 alone (fold-increases of 6.16 and 6.26, respectively) (Figure 2C). Expression of pBad (Ser136) was slightly increased in PEO4 cells treated with 25 µM cisplatin. However, a 3-fold increase in expression was detected in these cells following treatment with 10 µM NU7441, and this up-regulation increased to 4.5-fold when NU7441 was combined with cisplatin. pBad (Ser136)/Bad ratios in PEO4 cells were obtained by dividing the densitometry values of pBad (S136) by the corresponding densitometry values of total Bad (Figure 2D). pBad was barely present in PEO4 cells, and cisplatin did not affect the phosphorylation of Bad, since similar pBad/Bad values were obtained in the presence of cisplatin (0.43) and in cells incubated with the DMSO control (0.49). In comparison, the pBad/Bad ratios obtained from PEO4 cells treated with TCN alone or with TCN in combination with cisplatin were relatively low (0.03 and 0.11, respectively), indicating that AKT mediates the inhibitory Bad phosphorylation at Ser136 (Figure 2D). Nevertheless, individual use of NU7441, or its combination with cisplatin, in PEO4 cells increased pBad levels significantly, and the pBad/Bad values were 0.54 and 0.72 under these two conditions. This implies that DNA-PK inhibition does not fully mimic AKT inhibition in terms of the modulation of pro-apoptotic proteins (Figure 2E). The increased expression of total Bad in response to TCN and NU7441 clearly suggests that this could be an important mechanism by which AKT inhibition re-sensitizes cisplatin-resistant PEO4 cells to cisplatin. To investigate this further, Bad siRNA knockdown transfection in PEO4 cells was carried out in the rest of the study.

Anti-apoptotic Bcl-2 protein is an important modulator of chemosensitivity in EOC cells

Bcl-2 was the first identified anti-apoptotic protein and is known to inhibit apoptosis by sequestering its pro-apoptotic partners. Bcl-2 is frequently phosphorylated at several sites, including Thr56, Ser70, Thr74 and Ser87. Ser70 phosphorylation is the most frequently observed, suggesting its functional significance, and phosphorylation of Bcl-2 is assumed to abrogate the pro-survival activity of Bcl-2 (reviewed in [17]). Therefore the expression of total Bcl-2 and pBcl-2 was examined here to assess the potential involvement of Bcl-2 in AKT-promoted cell survival signaling. As shown in Figure 3, total Bcl-2 expression exhibited a relatively low basal level in PEO1 cells, and cisplatin did not affect Bcl-2 levels. The changes in Bcl-2 expression were relatively modest except for a decrease in cells treated with the combination of NU7441 and cisplatin (Figure 3).
On the other hand, in cisplatin-resistant PEO4 cells, Bcl-2 protein expression was not affected by cisplatin treatment but was completely abrogated by NU7441 treatment. Moreover, this decrease in Bcl-2 protein expression was maintained in cells treated with NU7441 and cisplatin at the same time. The TCN inhibitor alone did not affect Bcl-2 protein expression, whereas TCN used in the presence of cisplatin suppressed Bcl-2 protein expression extensively (Figure 3). This provides evidence that AKT inhibition by NU7441 and TCN had a significant inhibitory impact on anti-apoptotic Bcl-2 expression, especially in combination with cisplatin therapy. Thus TCN/NU7441 could become good suppressive agents targeting the Bcl-2 protein. The phosphorylation status of Bcl-2 was also investigated using anti-pBcl-2 (Ser70) and anti-pBcl-2 (Thr56) antibodies. pBcl-2 (Thr56) was not detected in either PEO1 or PEO4 cells, whereas pBcl-2 (Ser70) was found in PEO1 cells only and not in PEO4 cells. pBcl-2 (Ser70) was only observed in control cells (no treatment and DMSO treated) and was reduced in all other treatments, in particular by NU7441 treatment, where it was barely detectable. Cisplatin appeared to have some modest restorative effect on pBcl-2 levels in combination with either TCN or NU7441 (Figure 3).

Mcl-1 is another anti-apoptotic protein within the Bcl-2 family and localizes to the mitochondria, [...] Bak expression in PEO4 cells, although combination with cisplatin modestly reduced this effect. Considering the expression variances of Bcl-2 and Mcl-1 in PEO4 cells, it is possible that the increased Mcl-1 expression was induced by the loss of Bcl-2 expression in response to AKT inhibition, serving as a potential compensatory mechanism. Therefore, a specific inhibitor of Bcl-2, Bcl-XL and Bcl-w, the BH3 mimetic ABT-737, could be used in PEO1 and PEO4 cells to evaluate the correlation between Bcl-2 and Mcl-1 in determining chemosensitivity.

Bcl-XL is also a member of the anti-apoptotic Bcl-2 family proteins, and it prevents apoptosis via two distinct mechanisms: heterodimerization with pro-apoptotic proteins, and formation of mitochondrial outer membrane pores to help maintain a normal membrane state under stressful conditions [19]. Bcl-XL expression in PEO1 cells did not vary under the different drug treatments, with the exception of NU7441 (Figure 3). NU7441 treatment reduced Bcl-XL expression both in the presence and absence of cisplatin. However, NU7441 did not have a similar impact on Bcl-XL expression in PEO4 cells, and Bcl-XL expression was only suppressed to some extent by TCN and its combination with cisplatin (Figure 3). The divergent effects on Bcl-XL expression exerted by TCN- and NU7441-mediated AKT inhibition suggest that inhibition of Bcl-XL expression may not be involved in the DNA-damage-mediated AKT signaling pathway.

In addition to anti-apoptotic Bcl-2 family proteins, XIAP (X-linked inhibitor of apoptosis protein) may be a key determinant of chemosensitivity by suppressing the apoptotic activities induced by cisplatin in ovarian cancer [20]. XIAP has been suggested as an activator of the PI3K/AKT survival pathway in chemo-sensitive and chemo-resistant ovarian cancer cell lines [20]. Here we investigated XIAP expression levels under the specified conditions and found negligible differences in XIAP expression across the various conditions.
However, XIAP expression was clearly reduced upon cisplatin treatment in PEO1 cells but not in PEO4 cells (Figure 3). XIAP expression was also reduced by TCN and NU7441 in both PEO1 and PEO4 cells, indicating a potential correlation between XIAP expression and AKT activation regardless of the extent of chemosensitivity of the cells. However, the repression of XIAP protein was restored by cisplatin treatment in PEO1 and PEO4 cells (Figure 3). In all, XIAP demonstrated similar expression patterns in PEO1 and PEO4 cells in response to individual use of TCN and NU7441 as well as their combinations with cisplatin.

Bad siRNA knockdown partially suppressed cisplatin-induced apoptosis

Bad had been identified as a potential downstream target of AKT activation, and the influence of Bad knockdown on cisplatin-induced apoptotic activity in PEO4 cells was therefore examined by Bad siRNA transfection, caspase 3/7 and MTT assays. Firstly, the working concentration of Bad siRNA was optimized to achieve the best transfection efficiency with minimal cytotoxicity. Western blotting analysis was then carried out to assess the efficiency of Bad siRNA knockdown, showing that 50 nM Bad siRNA knocked down Bad expression by 50% in comparison to the 50 nM siGenome Lamin A/C control (Figure 4A). Furthermore, Bad siRNA at 100 nM inhibited Bad protein expression by 90%, and thus Bad siRNA (100 nM) was adopted for the further experiments (Figure 4A). To investigate the apoptotic activity after transfection, the Bad siRNA-transfected PEO4 cells were seeded in 96-well plates for caspase 3/7 and MTT assays. The caspase 3/7 assay provides a pro-luminescent caspase 3/7 substrate, releasing aminoluciferin upon cleavage [21]. The cleaved aminoluciferin is consumed by luciferase, producing a luminescent signal proportional to the caspase 3/7 activity [21]. As shown in Figure 4B, the induction of apoptosis upon cisplatin treatment (1.4-fold increase: 1.718/1.226) was reduced in Bad siRNA-transfected PEO4 cells when compared to that in untransfected PEO4 cells (2.055-fold increase) and siGenome Lamin-transfected control cells (2.31-fold increase) (Figure 4B). Although this indicates that Bad is functionally pro-apoptotic in response to cisplatin in cisplatin-resistant PEO4 cells, the incomplete inhibition of apoptosis suggests that Bad is not solely responsible for the apoptotic responses in PEO4 cells.

ABT-737 sensitized PEO1 and PEO4 cells to cisplatin treatment

The BH3 mimetic ABT-737 is a small-molecule inhibitor that can trigger Bax/Bak-mediated apoptosis; it has a high affinity for Bcl-2, Bcl-XL and Bcl-w but not Mcl-1 [22]. Moreover, the inability of ABT-737 to target Mcl-1 and the enhancement of Mcl-1 expression confer resistance in cancer cells [22]. Here we investigated the potential of ABT-737 to reverse resistance by targeting Bcl-2 and Bcl-XL in ovarian cancer cell lines via caspase 3/7 and MTT assays. ABT-737 (1 mM) was found to sensitize the PEO1 and PEO4 cells to cisplatin to a large extent (1.85-fold increase in PEO1 cells and 3.93-fold increase in PEO4 cells) (Figure 5). Importantly, ABT-737 alone was relatively non-apoptotic towards both cell lines (1.697 and 0.874 in PEO1 and PEO4, respectively), implying that ABT-737 did not confer much toxicity to these two cell lines and that, without an apoptotic stimulus (e.g. cisplatin), the cells would not undergo apoptosis even when anti-apoptotic proteins are inhibited (Figure 5).
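The fold-induction values quoted above are ratios of caspase 3/7 luminescence signals; for the Bad siRNA group the two numbers given (1.718 and 1.226) appear, from the ratio notation, to be the cisplatin-treated and vehicle-treated readings of the same transfection group. A minimal sketch of that arithmetic is shown below (Python; the group labels and variable names are ours, and the interpretation of the two readings is an assumption based on the ratio as written).

```python
# Fold-induction of caspase 3/7 activity: signal with cisplatin divided
# by signal with vehicle, per transfection group. Only the Bad-siRNA
# readings are quoted in the text; other groups would be added alongside.

readings = {
    "Bad siRNA": {"cisplatin": 1.718, "vehicle": 1.226},
    # "untransfected" and "Lamin A/C control" entries would go here
}

for group, lum in readings.items():
    fold = lum["cisplatin"] / lum["vehicle"]
    print(f"{group}: {fold:.2f}-fold induction")   # Bad siRNA: 1.40-fold
```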
These results suggest that ABT-737 could be a promising therapeutic agent for reversing cisplatin resistance in ovarian cancer and could be used in combination with cisplatin in advanced EOC. Although Mcl-1 expression was increased in response to TCN/NU7441-based treatments in PEO4 cells, this elevation was not sufficient to overcome the inhibition of Bcl-2 protein under the same conditions (Figure 3). In this case, combination therapy with ABT-737 and AKT inhibitors such as TCN and NU7441 is of profound interest for further investigation, given the expression patterns observed in PEO4 cells.

DISCUSSION

Ovarian cancer is the most common gynecological malignancy in the Western world, and EOC accounts for 90% of cases [2]. Cisplatin is the first-line chemotherapeutic agent for patients with ovarian cancers [3]. However, despite initial sensitivity to platinum-based chemotherapy, EOC often develops drug resistance, which limits patient survival [3]. AKT survival signaling, which is a crucial regulator of cell proliferation, growth, survival and metabolism, has been implicated in the development of drug resistance and the progression of EOC [5,6]. There is emerging evidence for the involvement of AKT in controlling the apoptotic mechanism in response to cisplatin-induced DNA damage, promoting cell survival and the eventual acquisition of cisplatin resistance in EOC [5,6]. DNA-PK was found to phosphorylate AKT at Ser473 directly in response to platinum, promoting resistance. The phosphorylated AKT exerts its biological functions via phosphorylation of its downstream targets, ultimately inhibiting apoptosis and promoting cell survival [5,6]. Many studies have found that AKT activation is frequently observed in patients with relapse, and therefore the potential role of AKT in acquired cisplatin resistance may have significant clinical relevance [4-7]. Furthermore, Bcl-2 family proteins are key modulators of the intrinsic, mitochondrially mediated apoptotic pathway, and regulation of the activity of some members of this family has been associated with AKT phosphorylation and chemosensitivity in EOC [2,8,9]. The mechanisms by which Bcl-2 family proteins interact to bring about the cisplatin-resistant phenotype of DNA-PK-mediated AKT activation remain to be fully elucidated. This study therefore aimed to examine the potential involvement of Bcl-2 proteins in AKT-regulated cell survival in response to cisplatin in cisplatin-sensitive and cisplatin-resistant EOC cells. Here we provide some preliminary data on the differential expression of Bcl-2 family proteins according to AKT activity and give some insight into the possible mechanisms by which AKT contributes to the development of cisplatin resistance. To investigate the association of AKT inhibition and subsequent re-sensitization to cisplatin with Bcl-2 family members, Western blotting was used to identify the potential downstream targets of TCN and NU7441 within this family in the cisplatin-sensitive (PEO1) and cisplatin-resistant counterpart (PEO4) cell lines. TCN is a selective, potent AKT inhibitor and inhibits AKT activation directly with high affinity. Although there is limited literature regarding the effect of NU7441 on AKT inhibition, the fact that DNA-PK can phosphorylate AKT at Ser473 in the presence of cisplatin suggests that NU7441 can function as an indirect AKT inhibitor [19].
After Western blotting screening of the Bcl-2 family proteins and several other potential apoptotic proteins, we identified interesting targets and selected Bad and Bcl-2 for further assessment. Firstly, the multi-domain pro-apoptotic proteins Bax and Bak are executors of apoptosis, and their oligomerization on the outer mitochondrial membrane is required for the release of cytochrome c and induction of apoptosis [18]. Previous studies have suggested that Bax rather than Bak promotes apoptosis in ovarian cancer cells, and the data shown here also support this. Bak expression is not affected by AKT inhibition, whereas an elevated level of Bax expression was detected in response to AKT inhibition in cisplatin-resistant PEO4 cells. This suggests the potential involvement of Bax in re-sensitizing PEO4 cells to cisplatin-induced apoptosis in response to AKT inhibition. However, this evidence is not sufficient to show that Bax is a direct downstream target of AKT activation, and therefore Bax was not taken forward for further study in this project. Moreover, a limitation of Western blotting is its inability to show whether the protein is active in forming pores on the mitochondrial membrane. Thus Bak expression may not be altered, but Bak could still be functionally modulated upon treatment.

Another pro-apoptotic subgroup of the Bcl-2 family is the BH3-only proteins, of which Bik, Bid, Bim, Puma and Bad were assessed here using Western blotting. There are two models by which Bax may be activated by BH3-only proteins: direct activation and neutralization of the anti-apoptotic proteins [18]. Here, BimEL expression was found to be up-regulated by the AKT inhibitor TCN in the presence of cisplatin in cisplatin-resistant PEO4 cells. This strongly supports a direct link between Bim expression and AKT phosphorylation in EOC cells. Moreover, expression of the more pro-apoptotic BimL was induced in PEO4 cells in response to AKT inhibition. Together, these results suggest that, in addition to Bax, Bim is involved in the re-sensitization of chemoresistant EOC cells to cisplatin by AKT inhibitors. The data obtained from cisplatin-sensitive cells showed that the most pro-apoptotic isoform of Bim (BimS) was abundantly expressed, especially in cells treated with cisplatin and the TCN inhibitor. In the meantime, the BimEL level was significantly diminished by cisplatin. All this evidence indicates that Bim, especially BimS, plays a stimulatory role in apoptosis in response to cisplatin (Figure 6).

The BH3-only protein Bad, which is a known downstream substrate of AKT phosphorylation, has also been implicated in determining chemosensitivity in EOC [16]. Phosphorylation of Bad at Ser136 inactivates its pro-apoptotic activity, and therefore we studied the changes in Bad protein expression and its phosphorylation at Ser136 in response to AKT inhibition and platinum treatment. In cisplatin-resistant PEO4 cells, Bad protein was found to be increased in response to AKT inhibition. Moreover, TCN-targeted AKT suppression prevented the Bad phosphorylation at Ser136 that was seen in PEO4 cells treated with cisplatin. Together, these results suggest that inactivation of Bad may be an underlying cause of cisplatin resistance in PEO4 cells, and that AKT inhibition re-sensitizes these cells by both upregulation of Bad and reversal of its inactivation. Interestingly, Bad phosphorylation was found to be increased in PEO4 cells treated with NU7441, either alone or in combination with cisplatin.
This suggests that inactivation of AKT activity via DNA-PK may activate an alternative re-sensitization mechanism compared with direct inhibition of AKT. In cisplatin-sensitive PEO1 cells, Bad expression was not affected by AKT inhibition. However, cisplatin-induced Bad inactivation via phosphorylation was not prevented in cells co-treated with TCN or NU7441. This may partly explain why AKT inhibition does not increase the sensitivity of PEO1 cells to cisplatin. However, AKT inhibition alone decreased Bad phosphorylation in PEO1 cells. Together, this suggests that Bad does not have an important role in the apoptotic response of chemosensitive PEO1 cells to cisplatin. Given the potential importance of Bad in regulating cisplatin sensitivity in PEO4 cells, inhibition of Bad expression by siRNA knockdown was carried out and was found to partially prevent cisplatin-induced apoptosis in transfected PEO4 cells. The incomplete repression of apoptosis implies that Bad alone is not sufficient for cisplatin-induced apoptosis in these cells, and that other proteins are also involved in the induction of apoptosis. However, this suppression may also have arisen from the cytotoxicity of the transfection reagents to the cells during the experiments. Although cells remained viable after the 72-hr transfection, as seen in the initial MTT assay, additional stresses of harvesting and re-seeding may have accumulated, and these stresses may cause widespread cell death at the 72-hr post-transfection time point. Thus the knockdown of Bad may not be involved in, or sufficient to prevent, cell death in such an instance. In addition, the use of a high amount of Bad siRNA (100 nM) may have contributed to cytotoxicity. However, this amount was chosen because, although 50 nM Bad siRNA may have lower cytotoxicity, its knockdown efficiency was too low (50%) to demonstrate the influence of Bad siRNA knockdown on cell viability. This functionally highlights the potential pro-apoptotic role of Bad in cisplatin-resistant cells.

In comparison to Bim and Bad, expression of the pro-apoptotic BH3-only proteins Bik and Bid was not found to be regulated by AKT in the present study. Bik is rarely associated with AKT activation in the literature, but its expression has been shown to be necessary for caspase-8 activation. Bik expression was not affected by TCN- or NU7441-mediated AKT inhibition; however, its expression level was elevated in response to cisplatin treatment in both cisplatin-sensitive (PEO1) and cisplatin-resistant (PEO4) cells. Thus it is possible that Bik is involved in the apoptotic pathway directly activated by cisplatin in ovarian cancer cells, but that its activity is not induced by the AKT signaling pathway. The lack of effect of cisplatin, TCN and NU7441 on Bid expression in PEO1 and PEO4 cells suggests that this protein is not involved in the apoptotic pathways activated in these cells, and also that AKT does not regulate the activity of this protein. Unlike Bik and Bid, the expression of the pro-apoptotic BH3-only protein Puma may be regulated by AKT in PEO1 and PEO4 cells. However, the effect of AKT inhibition on Puma expression was found to differ between the two cell lines: Puma was increased in PEO1 cells but decreased in PEO4 cells. Puma is a p53-inducible gene and acts as a pro-apoptotic factor by binding to anti-apoptotic Bcl-2 [15].
A recent study established that induction of Puma expression by cisplatin was abolished in p53-deficient SKOV3 cells, whereas increased Puma expression sensitized SKOV3 cells to cisplatin via down-regulation of anti-apoptotic Bcl-XL and Mcl-1 [15]. Moreover, the Puma-triggered Mcl-1 down-regulation can be associated with caspase-dependent cleavage. Therefore, overexpression of Puma could potentially enhance sensitivity to cisplatin in EOC by lowering the threshold set simultaneously by Bcl-XL and Mcl-1. Thus, as both PEO1 and PEO4 cells are p53-mutated, the lack of effect of cisplatin on Puma expression may be due to this. However, the mechanism by which AKT inhibition regulates Puma expression may be p53-independent, and remains to be investigated. From these aspects, the role of Puma in cisplatin-induced apoptosis and in determining chemosensitivity remains complex and unclear.

Bcl-2 is an intriguing member of the Bcl-2 family that may be identified as a downstream target of AKT phosphorylation. In this study, Western blot data suggested an association between AKT inhibition and suppression of anti-apoptotic Bcl-2 protein expression in both PEO1 and PEO4 cells. Bcl-2 expression was decreased by NU7441 regardless of the presence or absence of cisplatin. In addition, TCN also exerted a complete inhibitory effect on Bcl-2 expression in PEO4 cells when it was used in combination with cisplatin. Interestingly, phosphorylation of Bcl-2 at Ser70 (pBcl-2) was completely abrogated in PEO1 cells in response to cisplatin and AKT inhibition, and phosphorylation of Bcl-2 at Ser70 was not detected in PEO4 cells, implying a potential regulatory role of Bcl-2 phosphorylation in cisplatin sensitivity. Several studies have speculated that pBcl-2 (Ser70) status is closely correlated with metastasis and the extent of malignancy in colorectal cancer, and that the absence of pBcl-2 (Ser70) is closely associated with poor survival in colorectal cancer [17]. Loss of pBcl-2 (Ser70) was more frequently recognized in cases with advanced lymph nodal metastasis and clinical stage than in the poorly differentiated cases [17]. Considering this together with our evidence, it is reasonable to assume that phosphorylation of Bcl-2 at Ser70 could play a similar role in the inhibition of apoptosis in EOC. Therefore pBcl-2 (Ser70) can be suggested as a biological marker for the extent of cisplatin sensitivity in EOC and a novel prognostic indicator. It is also necessary to mention that pBcl-2 (Thr56) was not detected in this study, and therefore no clear association with phosphorylation of Bcl-2 at Thr56 can be drawn from the present study. In comparison, Bcl-XL expression was decreased in response to TCN in PEO1 cells only, and to NU7441 in PEO4 cells only. Together, these results indicate a greater role for Bcl-2 in the AKT-mediated regulation of cell survival pathways. Furthermore, the Western blot analysis of Mcl-1 expression can be interpreted alongside the protein profiling results for Bcl-2. Intriguingly, inhibition of AKT increased Mcl-1 expression in PEO4 cells. This suggests that, although AKT inhibition can inhibit expression of anti-apoptotic Bcl-2, the cells may attempt to counteract this by increasing Mcl-1 expression as a possible cellular compensatory mechanism (Figure 6). Thus it is possible that cisplatin resistance in these cells requires suppression of more than one anti-apoptotic Bcl-2 family protein.
To further clarify the role of Bcl-2 and Mcl-1 in cisplatin-induced apoptosis, the BH-3 mimetic ABT-737, which has a high affinity for Bcl-2 and Bcl-XL but not Mcl-1, was used in both cisplatin-sensitive and cisplatin-resistant cell lines (Figure 6). Previous studies have shown that ABT-737 triggers Bax/Bak-mediated apoptosis by targeting Bcl-2/Bcl-XL, but that its inability to target Mcl-1 may confer resistance to cisplatin in vitro [22]. Here, ABT-737 was found to sensitize both PEO1 and PEO4 cells to cisplatin treatment, confirming that Bcl-2 has an important role in determining cisplatin sensitivity in EOC. Accordingly, ABT-737 in combination with cisplatin may be an effective strategy for enhancing the response of patients to cisplatin therapy.

In addition to the Bcl-2 family of proteins, the role of the apoptotic effectors caspase 8 and caspase 9 in the response of EOC cells to cisplatin and AKT inhibition was also assessed (Figure 6). Here, the cleaved form of caspase 9 was detected only in PEO1 cells in response to cisplatin treatment. However, it is possible that the proteolytic activation of caspase 9, which occurs downstream of the Bcl-2 proteins, may only be detectable after treatment periods longer than the 8 hr time point used in this study. Similarly, no cleaved forms of caspase 8 were detected, but assessment of their expression after longer treatment times will be required to determine whether caspase 8 is involved in the response of the cells to the drugs. However, in support of a role for caspase 9 in cisplatin-induced apoptosis in PEO1 cells, expression of X-linked inhibitor of apoptosis (XIAP), which prevents activation of caspase 9, was found to be decreased in cisplatin-sensitive PEO1 cells treated with cisplatin. Further assessment is needed to provide more information on the molecular actions of XIAP during apoptosis. Together with the data on Bcl-2 protein expression, these results suggest the intrinsic apoptotic pathway is the main mechanism by which apoptosis is induced in cisplatin-sensitive PEO1 cells treated with cisplatin (Figure 6). Taken together, our results suggest that Bcl-2 family proteins are regulators of drug resistance. This study provides a rationale to support using a combination of cisplatin and ABT-737 to treat cisplatin-resistant ovarian cancer.

Materials and chemicals

The AKT inhibitor TCN, the DNA-PK inhibitor NU7441, and the Bcl-2 inhibitor ABT-737 were obtained from Berry and Associates (Devon, UK), KuDOS Pharmaceuticals (Cambridge, UK), and Allan Richardson (London, UK), respectively. They were dissolved in DMSO. Cisplatin (1 mg/ml in PBS) was obtained from the Pharmacy Department, Hammersmith Hospital, London, UK. All other chemicals were purchased from Sigma-Aldrich (Dorset, UK), and all solutions were prepared and diluted using distilled water.

Drug treatments

When required for drug treatment, cells were harvested by trypsinisation, centrifuged at 15,000 rpm for 5 mins, and resuspended by pipetting up and down thoroughly. The cells were then counted using a hemacytometer (Thermo Scientific, USA) following the manufacturer's instructions. Cells were seeded at the required density and allowed to adhere overnight. When required for detection of phospho-proteins, cells were serum starved overnight by incubating in serum-free medium. Culture medium was then removed and replaced with the required concentrations of drugs, or vehicle control.
Treated cells were incubated under the appropriate conditions for the required time period.

siRNA transfection

Transfection with Bad small interfering RNA (siRNA) (5′→3′ sense: GGAGGAUGAGUGACGAGUUtt; 5′→3′ anti-sense: AACUCGUCACUCAUCCUCCgg) (Applied Biosystems/Ambion, USA) was performed using antibiotic-free RPMI medium containing 10% FCS and 2 mM L-glutamine. PEO4 cells were seeded at 1 × 10^6 cells/well in a 6-well plate and left in the incubator overnight. Cells were transfected by the addition of Optimem medium (400 µl, Invitrogen) containing 100 nM Bad siRNA or siGenome Lamin A/C control siRNA (Dharmacon, Denver, Colorado, US) in the presence of 0.1% Transfection Reagent-1 (Dharmacon). After 48 hrs, a second transfection was carried out and the cells were incubated for a further 24 hrs. Transfected cells were then washed with PBS and trypsinized using 1× trypsin as previously described (Section 4.3). Cells were then counted and seeded in 6-well and 96-well plates for Western blotting analysis, MTT, and caspase 3/7 assays, respectively.

MTT assay

Cells were seeded at a density of 20,000 cells/well in a 96-well plate and incubated overnight to allow cell attachment. The transfected cells were treated with 25 µM cisplatin or vehicle control for 24 hours. 10 µl of MTT solution (3 mg/ml MTT in PBS) was added to each well. After a 2-hr incubation in the incubator, an equal volume (i.e., 60 µl) of MTT STOP solution (10% SDS in 0.01% HCl) was added to the mixture, and the plate was wrapped with foil and incubated at room temperature overnight with shaking. Absorbance was measured on a microplate reader (SpectraMax 190) at 570 nm.

Caspase 3/7 assay

Cells were seeded at 20,000 cells/well in a white opaque 96-well plate (PerkinElmer, Singapore) and the plate was left in the incubator overnight. Cells were treated with 25 µM cisplatin or vehicle for 24 hrs. Caspase-Glo 3/7 Substrate (Promega, USA) was added as per the manufacturer's instructions and the plate was incubated at room temperature for 1 hr. Luminescence was measured using a LUMIstar luminometer (BMG LabTech) with OPTIMA software (BMG LabTech).

Data analysis

Statistical analysis was carried out using Microsoft Excel 2001 and data are presented as mean ± standard error of the mean (SEM) combining three experimental repeats. The Western blot results were analyzed using densitometry (ImageJ software) where appropriate, and the protein density values were normalized to their corresponding beta-tubulin loading control to compare differences in protein expression under the specified conditions.
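The normalization and averaging steps described under "Data analysis" can be summarized in a short script. This is a minimal sketch only, assuming band densities have already been exported from ImageJ; the numeric values and variable names below are illustrative placeholders, not the study's measurements.

```python
import numpy as np

# Hypothetical ImageJ band densities from three experimental repeats
# (rows = repeats, columns = treatment conditions).
target_density = np.array([[1200.0, 950.0, 640.0],
                           [1150.0, 900.0, 700.0],
                           [1300.0, 980.0, 610.0]])
tubulin_density = np.array([[1000.0, 1020.0, 990.0],
                            [980.0, 1000.0, 1010.0],
                            [1050.0, 995.0, 1005.0]])

# Normalize each target band to its corresponding beta-tubulin loading control.
normalized = target_density / tubulin_density

# Combine the three repeats as mean +/- standard error of the mean (SEM).
mean = normalized.mean(axis=0)
sem = normalized.std(axis=0, ddof=1) / np.sqrt(normalized.shape[0])

for condition, (m, s) in enumerate(zip(mean, sem), start=1):
    print(f"Condition {condition}: {m:.2f} +/- {s:.2f} (mean +/- SEM, n=3)")
```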
2018-04-03T00:50:17.610Z
2016-12-07T00:00:00.000
{ "year": 2016, "sha1": "e582643ab4f4a636917ac425138b50d7b712e9f5", "oa_license": "CCBY", "oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=13817&path[]=43942", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c69eb76f135b7c8184b4a1c2af348a23953a3052", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247855944
pes2o/s2orc
v3-fos-license
Challenges to Adolescent HPV Vaccination and Implementation of Evidence-Based Interventions to Promote Vaccine Uptake During the COVID-19 Pandemic: "HPV Is Probably Not at the Top of Our List"

Introduction

The COVID-19 pandemic has prevented many adolescents from receiving their vaccines, including the human papillomavirus (HPV) vaccine, on time. However, little is known about the impact of the pandemic on implementation of clinic-level evidence-based interventions (EBIs) that help to improve HPV vaccine uptake. In this qualitative study, we explored the pandemic's impact on EBI implementation and HPV vaccine delivery.

Methods

During August–November 2020, we interviewed clinic managers in a rural, midwestern state about their experiences implementing EBIs for HPV vaccination during the COVID-19 pandemic. We used a multipronged sampling approach with both stratified and purposive sampling to recruit participants from Vaccines for Children clinics. We then conducted a thematic analysis of transcripts.

Results

In interviews (N = 18), 2 primary themes emerged: decreased opportunities for HPV vaccination and disruption to HPV-related implementation work. Most participants reported decreases in opportunities to vaccinate caused by structural changes in how they delivered care (eg, switched to telehealth visits) and patient fear of exposure to COVID-19. Disruptions to EBI implementation were primarily due to logistical challenges (eg, decreases in staffing) and shifting priorities.

Conclusion

During the pandemic, clinics struggled to provide routine care, and as a result, many adolescents missed HPV vaccinations. To ensure these adolescents do not fall behind on this vaccine series, providers and researchers will need to recommit to EBI implementation and use existing strategies to promote vaccination. In the long term, improvements are needed to make EBI implementation more resilient to ensure that progress does not come to a halt in future pandemic events.
Introduction

Since March 2020, the pandemic caused by SARS-CoV-2 has affected nearly all aspects of daily life, particularly the ways in which health care is provided. Because of fear of coming into clinics and the preference for telehealth appointments, one area that has been especially affected is pediatric and adolescent immunization (1,2). Before the pandemic, data from the 2019 National Immunization Survey-Teen showed that only 54% of US adolescents aged 13 to 17 were up to date with the human papillomavirus (HPV) vaccine series (3). An analysis estimating the impact of the pandemic on HPV vaccination found that vaccination rates were 75% lower during the pandemic compared with prior periods, and statistical modeling showed that these lower rates could lead to increases in the incidence of genital warts, cervical intraepithelial neoplasia, and other HPV-related cancers if adolescents do not catch up on the HPV vaccine (4).

Adolescence is a critical period for HPV prevention. The Advisory Committee on Immunization Practices recommends administration of 2 doses of HPV vaccine for children and adolescents aged 9 to 14 years (5). Initiating administration early in adolescence is linked to higher rates of on-time completion (6) and increased effectiveness (7,8). Therefore, understanding the extent to which the pandemic has affected adolescents' ability to get vaccinated is imperative.

Challenges in vaccinating adolescents during the last decade have led researchers and quality improvement (QI) staff to develop evidence-based interventions (EBIs) and strategies to assist clinical staff in increasing HPV vaccination. For example, commonly used EBIs include reminder/recall systems, standing orders for HPV vaccination, and provider assessment and feedback (9,10). Because of the complexity of these EBIs, a substantial amount of work happens "behind the scenes" in clinics. These implementation efforts to increase HPV vaccination rates (9-11) are often conducted independently or in collaboration with academic and community partners and led by administrative, non-patient-facing staff. Nearly 2 years into the pandemic, little is known about the impact of the pandemic on implementation of these EBIs to encourage vaccinations or about the experience of clinics that continued to provide routine care during the pandemic. Our attention now needs to turn to these lesser studied impacts on clinic practices that have implications for future health outcomes of adolescents. The aim of this qualitative study was to explore the experiences of clinics that continued to provide routine care during the pandemic and the impact of the pandemic on ongoing implementation efforts to promote HPV vaccination.

Methods

This study was part of a larger project using the Consolidated Framework for Implementation Research (CFIR; https://cfirguide.org/) to understand barriers and facilitators to EBI implementation focused on HPV vaccination in clinics integrated or affiliated with large health care systems in Iowa. However, only results related to the impact of COVID-19 are reported here.
We conducted semistructured interviews with clinic managers or administrators working in Vaccines for Children (VFC) clinics in large health care systems in Iowa from August through November 2020. The University of Iowa Review Board determined that this study did not meet the criteria for human subjects research. All participants were provided with information about the study and its purpose, compensation, the voluntary nature of their participation, and the researcher's contact information. We offered a $25 gift card to all participants to thank them for their time.

We used multipronged sampling that included stratified sampling of VFC clinics in Iowa. First, researchers examined the list of VFC clinics in the state (N = 594) and excluded clinics that were either not pediatric/family practice clinics or not integrated or affiliated with a larger health care system, resulting in a final list of 305 clinics that met inclusion criteria. We stratified clinics by congressional district and rurality; a random sample of clinics (n = 5) was drawn from each stratum (n = 8). We repeated this process, ultimately recruiting from a sample of 80 clinics and completing 9 interviews (this stratified draw is illustrated schematically in the sketch below). Up to 6 attempts were made to contact the clinic manager at each clinic, either by email or telephone. Common reasons for refusal were lack of time due to COVID-19 or not currently having a staff member in an administrative or management position. When this approach did not achieve thematic saturation in interviews, directed recruitment efforts were made through professional networks. Throughout the interview process, transcripts were reviewed for thematic saturation, and recruitment ended when it was determined saturation had been reached.

We adapted the interview guide from the CFIR (12). In addition to questions addressing the CFIR constructs, we included questions addressing the impact of COVID-19 on HPV vaccination delivery and on implementation of EBIs for HPV vaccination. Before the interview, we sent all participants a brief survey to collect demographic information about them and their clinic. All interviews were conducted by the first author (G.R.) via telephone and audio recorded. A third-party service was used for verbatim transcription.

We generated frequencies and appropriate descriptive statistics for survey items capturing information on participant and clinic characteristics. To analyze the interviews, we conducted a thematic analysis (13) exploring the impact of COVID-19. In the first round of coding, we used NVivo version 12 (QSR International) to code transcripts and created a code to identify information on the pandemic ("impact of COVID-19"). This process was completed by the first author and a trained student in a master of public health program. We then analyzed the data coded under "impact of COVID-19" using a thematic analysis approach (13) to identify themes and subthemes.

Results

We completed interviews with 18 individuals; interviews ranged from 19 to 50 minutes, averaging 32 minutes. Of the 18 participants, 8 were aged 27 to 39, all were women, 14 worked in a rural clinic, and 12 worked in general practice or family medicine clinics (Table 1). Two primary themes emerged in all interviews under the parent code of impact of COVID-19: decrease in HPV vaccination and routine care and impact of the pandemic on implementation work (Table 2). In a minority of interviews, a third theme was also identified: patient safety improvements.
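As an aside on the recruitment procedure described in the Methods above, the stratified draw (strata defined by congressional district and rurality, with a fixed number of clinics sampled at random per stratum) can be sketched as follows. The clinic list, district labels, and column layout here are hypothetical placeholders rather than the actual VFC data.

```python
import random
from collections import defaultdict

# Hypothetical clinic records: (clinic_id, congressional_district, is_rural)
clinics = [(f"clinic_{i}", f"district_{i % 4 + 1}", i % 2 == 0) for i in range(305)]

# Group eligible clinics into strata defined by district x rurality (8 strata here).
strata = defaultdict(list)
for clinic_id, district, rural in clinics:
    strata[(district, "rural" if rural else "urban")].append(clinic_id)

# Draw a fixed number of clinics at random from each stratum.
random.seed(2020)
sample_per_stratum = 5
recruitment_sample = {
    stratum: random.sample(members, min(sample_per_stratum, len(members)))
    for stratum, members in strata.items()
}

for stratum, drawn in sorted(recruitment_sample.items()):
    print(stratum, drawn)
```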
Decreased opportunities for HPV vaccination and routine care

The overall reduction in in-person clinic visits due to the pandemic posed tremendous challenges to the clinics in delivering HPV vaccinations. Participants spoke about 2 main challenges in being able to vaccinate patients. First, all clinics had to implement new protocols to safely treat patients, which included reducing overall patient volume and switching to telehealth visits, both of which resulted in fewer adolescents being seen in person. To reduce patient volume, many participants reported that a respiratory clinic was established to physically separate sick patients from well patients, and this meant less time and capacity to see well patients. As one participant described, "[R]ight now, the biggest priority in our clinic is the sick clinic; we're doing COVID testing in our sick clinic. That's our biggest area here" (Interview-8). Other participants noted that shifts to virtual visits presented numerous challenges to HPV vaccination. These changes required time to implement and, specific to HPV vaccination, meant that "when they're pushing virtual visits, it's not going to get them into the door to get those vaccines done" (Interview-11).

The second main challenge was fear of COVID-19 exposure. Most participants reflected on the fear among their patient population of coming into clinics for preventive care such as vaccinations and being exposed to the virus. Although none of the participants had calculated exact numbers for the reduction in vaccinations, nearly all identified patient fear of COVID-19 as a barrier to vaccinating adolescents. As one participant summarized, "[F]or quite a while in the spring, people were really reluctant to come to the doctor's office, because they felt like we're a hotbed of disease" (Interview-10). Parents and guardians were not the only ones wary of routine care visits; the clinics themselves shut down for a certain period, and schools made allowances about attendance at sports physicals in recognition that parents may not want to bring their children into clinics. For example, one participant reported that "in April [2020] . . . [the clinic] essentially cancelled every other visit except zero- to five-[year-old patients]" (Interview-3). Another participant spoke about the ruling that schools allowed prior years' sports physicals to count for school entry, so they "missed some opportunity this year getting in [their] normal amounts" of those visits (Interview-9). Together, the shifts made in clinics to protect patient safety and the hesitancy among parents to take their children into clinics translated into a reduction in HPV vaccinations.

Disruption to ongoing EBI implementation work

We identified 2 subthemes: disruptions due to logistical challenges and disruptions due to shifts in priorities during the pandemic. All participants noted that the COVID-19 pandemic had a negative impact on their ability to carry out ongoing projects and implementation of EBIs to increase HPV vaccination rates. Several participants spoke about previous efforts with QI teams for HPV vaccination projects. As one clinic manager described, "[I]t's just kind of fizzled off with . . . COVID" (Interview-14). Others reflected on projects that were ongoing with external partners that had been forced to halt because of the pandemic.
For example, one participant spoke about a project with a pharmaceutical company to implement new strategies to promote HPV vaccination that they had started, but with the pandemic "all those meetings and such came to an end because [they] couldn't meet in person anymore" (Interview-6). Another clinic manager spoke about a school outreach program to promote HPV and other adolescent vaccines in which their clinic usually participates during the spring. However, she reflected that "school wasn't in session at that time. So, we missed that opportunity" (Interview-5). Finally, several participants noted that because of the pandemic, pharmaceutical representatives who are usually allowed into the clinic to provide education were not able to come and that the regularly scheduled state immunization conference was cancelled. These participants reported that these education opportunities are the primary way staff and providers learn about updates to HPV vaccination and best practices and motivate staff to implement EBIs.

These interruptions were due to both logistical challenges and shifts in priorities that were necessitated by the pandemic. For example, early in the pandemic, many participants noted that because of shut-downs in routine care services, non-patient-facing staff were furloughed while multiple providers were out of the office due to mandatory quarantines resulting from COVID-19 exposure or infection. As one participant noted, "When we [were] short-staffed . . . that changed a lot of different workflows" (Interview-2), which meant increased time was spent to create new workflows, detracting from time available for other projects. In addition to creating staff shortages, the pandemic also disrupted regular communication between participants and others working on EBI implementation or HPV vaccine promotion. For example, one participant noted that their QI team had been redirected to focus on COVID and so "prior to COVID [they] were meeting once a month. But since COVID, [they've] gotten pulled into more of the COVID-related areas" (Interview-8). Another clinic manager noted that all her team communication had to switch to an online format and "there are some challenges of not having that person in front of you to talk to" (Interview-5) and that this lack of in-person communication had a particularly negative impact on implementation work. Related to fears of COVID-19 transmission, one participant noted that the clinic had to pull all educational materials from waiting areas because it didn't "want to lay them out for the patient to pick up" (Interview-4).

Beyond these logistical challenges, the inability to work on EBI implementation and projects related to HPV vaccination was primarily due to the shift in priorities. As one participant summarized it, "[R]ight now with COVID, I would be amiss if I didn't say HPV [is] probably not at the top of our list. We're trying to make sure that people are staying healthy" (Interview-17). Many participants spoke about the challenges of maintaining safety protocols and the extra work that came along with that, and how these challenges had led to a reduced focus on HPV vaccination overall. Another participant noted that these shifts were not just occurring in clinics but also that "COVID as a whole has changed our health organization. Looking at how do we bring people in safely has been a huge thing" (Interview-1).
This need to focus on patient safety, above all else, has been necessary, but all participants who spoke about these shifts said that it meant they have not had time to focus on EBI implementation for HPV vaccination.

Patient safety improvements for infection control made during the COVID-19 pandemic

Finally, in discussing implementation of EBIs for HPV vaccination, several participants spoke more generally about how the pandemic has changed health care delivery and lessons learned for the future. These participants spoke about some positive changes that have resulted from the need to be more creative about health care delivery and patient safety. They reflected that there are likely to be some permanent changes to health care delivery for their clinics and health care systems that may have implications for how EBIs are implemented. Several reported that the health care systems their clinics are affiliated with had created special respiratory clinics, designed to control infection, to care for COVID-19 patients. One reported that "we plan on keeping the respiratory clinic going forever. With all our respiratory stuff, it just makes sense, really" (Interview-16). Another common change was creating separate entrances or times for well and sick patients to be seen. One participant said they "had to reinvent the wheel as far as what keeps people safe, and how [they] still operate and get things that keep people healthy, without giving them the opportunity to catch something" (Interview-1). Because of the attention they devoted to these efforts, this participant spoke with her team about continuing with these changes throughout respiratory syncytial virus (RSV) season and indicated that many of these changes make "the most sense to keep the most majority of the people healthy coming in" (Interview-1). Implications of these kinds of permanent changes are not yet known, although one participant noted that in her clinic this change had resulted in fewer staff members being available for routine preventive care in the short term.

Discussion

Results from these interviews highlight a unique perspective on the impact of the COVID-19 pandemic on adolescent HPV vaccination, that of administrative clinic staff, most of whom work in rural areas. Participants in this study, while not directly involved in health care delivery, manage much of the work that happens behind the scenes to ensure patients receive the care they need. Across interviews, the impact of the pandemic on not only adolescent HPV vaccination but on all health care delivery and related EBI implementation work was evident. Interviews focused on implementation of EBIs for HPV vaccination specifically, but many of the barriers reported in relation to this area also applied to other areas. Data from 2020 identified sharp decreases in adolescent and pediatric vaccination (1,2) as well as well-child visits (14) that have likely persisted into 2021. Now, with the authorization of the COVID-19 vaccine for both adolescents and children, HPV vaccination may not be at the forefront of parents' or clinicians' priorities. Clinics will need to recommit to or expand their HPV vaccination efforts to ensure adolescents are vaccinated on time.
Even without the added pressure of the pandemic, clinics face challenges implementing existing EBIs to encourage HPV vaccination uptake, such as lack of staff to implement EBIs or QI initiatives, lack of knowledge about which EBIs to implement (9), competing priorities, the need for more staff training, and limited resources (15). With the added stresses of the pandemic, these challenges have been compounded, and many participants reported that because of the need to address pandemic-related issues, HPV vaccination had fallen lower on the list of priorities. This has meant that ongoing QI efforts and EBI implementation to improve vaccination rates were often halted. By necessity, the response to the pandemic has been reactive, rather than proactive, which means that processes that were already in place were not prioritized during the pandemic. To overcome this, clinic staff should refocus on implementing strategies that are known to promote HPV vaccination (eg, reminder/recall, strong provider recommendation) (16) and that have been effective in increasing rates for other vaccines (17,18).

Although these interviews focused on challenges presented by the pandemic, several participants spoke about some of their unexpected findings from their efforts to continue providing health care. These participants spoke about how the pandemic was a learning opportunity in keeping patients safe during large-scale outbreaks and noted that they would continue some of their precautions in the future to deal with other infectious diseases. Although the COVID-19 pandemic has had an overwhelmingly negative impact on health care, valuable lessons have been learned about how to continue delivering primary care. However, the time spent to make these changes came at the expense of other ongoing work. For example, many participants spoke about the shift in priorities and the time that was needed to create processes for telehealth visits. Future research should focus on identifying best practices that have been developed during the pandemic to support not only future pandemic responses but potentially also dealing with typical influenza seasons.

Although the topical focus of these interviews was HPV vaccination, the results highlight challenges that have likely been present in all implementation work during the pandemic. There have been calls from the implementation science community to use implementation science to address COVID-19 (19,20), but less attention has been paid to how to address the fact that so much of the ongoing implementation work came to a halt during the pandemic. At this juncture, the pandemic is likely far from over, and history shows us that other pandemics and epidemics will occur. Implementation science researchers need to create resilient and sustainable EBIs and implementation processes that are not as vulnerable to emergency situations. This could mean focusing on implementing practices that could be more sustainable in emergency situations, for example, ensuring systems are in place to use reminder/recall messaging. Sustainability has long been a challenge for implementation science, and many implementation studies lack an explicit definition of what sustainability means in practice (21,22), making it even harder for researchers and practitioners alike to focus on best practices in this area. The current situation and data from these interviews highlight that researchers must renew their focus on resiliency and sustainability for EBI implementation.
Our study has several strengths. The primary strength was the use of qualitative methods to gather detailed and descriptive information from a relatively understudied perspective during the COVID-19 pandemic, namely clinic managers working in rural settings. Interviews allowed for detailed and nuanced data from clinic managers about the challenges presented by the pandemic to HPV vaccine delivery and EBI implementation. Additionally, more than three-quarters of participants worked in rural clinics, providing another often-understudied perspective on health care delivery and implementation. This study also has several limitations, primarily related to the timing of these interviews. When interview recruitment began in August 2020, COVID-19 cases were relatively low in Iowa, but by mid-November cases had risen again; therefore, participants may have had different perspectives on the impact of COVID on their work and organizations. However, despite these limitations, these results offer critical insights into this issue, and future research could focus on understanding perspectives from clinic managers in other geographic areas.

In summary, pre-existing low rates of HPV vaccination coupled with the impact of the pandemic threaten to leave adolescents unprotected against HPV and with increased susceptibility to HPV-related cancers. Our results have short- and long-term implications for both practitioners and researchers working in the fields of adolescent health, HPV vaccination, and implementation science. In the short term, a renewed commitment to EBI implementation for adolescent HPV vaccination is needed to ensure that those who are eligible now as well as those who may have missed doses over the past 2 years are vaccinated. Research conducted before the pandemic found that clinics do not always use EBIs and, when they do, there are significant challenges in implementing them (9,14). These challenges have been exacerbated by the pandemic, and both researchers and clinic staff will need to expend even more effort in this area. For example, for clinics that previously did not have reminder/recall systems in place, these systems could be one way to identify all undervaccinated adolescents. However, implementing these new systems may require substantial effort given that the pandemic has taken priority, and those involved may need to work even harder to obtain staff buy-in and leadership support. In the long term, the implementation science community needs to create more resilient and sustainable EBIs that can be easily implemented in health care systems. Doing so will help protect …

Table 2 (excerpt). Illustrative quotes for the subtheme "shift in priorities"
- "I think prior to COVID happening, we were putting plans in place on how to increase immunizations, whether that's signage in the rooms and just communicating with patients and parents about why this is beneficial for your child or for yourself. And then obviously COVID, and that kind of just threw everything quality improvement out the window, while you're trying to focus on, how the heck are we going to do this?" (I-2)
- "A lot has changed prior to COVID and now just trying to ensure that we adhere to all the new guidelines and make sure that we have staff, that we protect our staff as well with PPE and all of those things that are constantly changing, as we're entering into the search plan. I would say that's probably [leadership's] main focus right now." (I-15)

Abbreviations: EBI, evidence-based intervention; HPV, human papillomavirus; I, interview; PPE, personal protective equipment; QI, quality improvement.
2022-04-02T06:23:32.990Z
2022-03-31T00:00:00.000
{ "year": 2022, "sha1": "64ba93aaf51410d3379243df6269232a8835a0f5", "oa_license": "CCBY", "oa_url": "https://www.cdc.gov/pcd/issues/2022/pdf/21_0378.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f67a043ca788915a9cb88c72e59e7676f207d487", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
235351272
pes2o/s2orc
v3-fos-license
Allocative Efficiency Analysis of Capsicum Cropping System under Tunnels

Hina Fatima*, Lal K. Almas, and Bushra Yasmin
Department of Economics, Mohammad Ali Jinnah University, Karachi, Pakistan*; Department of Agricultural Sciences, West Texas A&M University, Canyon, USA; Department of Economics, Fatima Jinnah Women University, Rawalpindi, Pakistan

Abstract: The focus of this study was to analyze the allocative efficiency of the capsicum cropping system under tunnels. Data were collected from farmers cultivating capsicum under tunnels in Pakistan, where tunnel cultivation shows a rising trend. The sample size was around 150 capsicum farms. Stochastic Frontier Analysis (SFA) was used to examine the allocative efficiency of capsicum farms in Pakistan. The results demonstrated that the average allocative efficiency of capsicum cropping systems under tunnels in Pakistan was around 65%; around 35% allocative inefficiency is present in the capsicum cropping system. Allocative inefficiency can be reduced by removing mismanagement practices in the utilization of farm resources; this involves reallocating inputs, or changing the input combination used, to achieve the optimum level of capsicum output at given input prices. Adoption of advanced farm technologies along with balanced application of farm inputs will therefore result in higher farm productivity and allocative efficiency. Hence, it is recommended that, to achieve the best possible capsicum production at minimum cost, newly adopted farm technologies be accompanied by improved farming skills, training, and know-how regarding balanced input application under tunnels.

Introduction

The agricultural sector in the developing world remains underdeveloped, even though the economies of these countries depend heavily on it. According to [1], approximately 70 percent of the labor force in least developed countries (LDCs) is engaged in the agricultural sector, and this sector's contribution to Gross Domestic Product (GDP) is around 30 to 60 percent. Typically, the agriculture sector of LDCs is based on a horizontal-expansion farming scheme (i.e., increasing farm output by increasing the land under cultivation), and this is a major cause of low productivity and underdevelopment of the agriculture sector of LDCs. This way of farming is not desirable for two reasons. First, given the rapid growth of population, it makes it difficult to overcome the issue of food security in LDCs. Second, this kind of farming may in the future result in natural resource degradation and environmental hazards. According to [2], around 1.2 billion people around the world live in poverty and are malnourished, and around 90% of these deprived people live in Africa and South Asian countries. Explicit policy formulation is required for dealing with low output in developing countries. Compared to Sub-Saharan African countries, the South Asian countries adopted improved farm technologies quite rapidly. For this reason, crop yields in Sub-Saharan African countries are stagnant and lower than in South Asian countries. According to [3], the difference in adoption of advanced technologies is due to a lack of research-related activities and investment in agricultural development.
Since about three million hectares of land have been lost to urbanization and the population is projected to double by 2020, the urgency of increasing productivity is ever rising. Vegetable farming is commercialized and for this reason has greater value. Commercial farming focuses on profit making by considering the allocation of internal and external resources. The efficient utilization of inputs is influenced by various types of resources, which can bring fluctuations in farm output. Hence it is vital and appealing to analyse the reasons for output variation in farming.

According to [4], the transformation of Pakistan's agricultural sector changed the conventional, complementary farm inputs, i.e., seeds, fertilizer, harvesters, sprays, etc., into high-yielding varieties (HYV) of seeds, commercial fertilizers, and advanced mechanization. These changes brought swift, positive shifts in agricultural sector growth as well as in Pakistan's economy. [5] articulated that Pakistan's agricultural sector is linked to the rest of the sectors of the economy directly or indirectly. Thus, advancement in technical and scientific fields, introduction of new cultivating and harvesting techniques, and development of hybrid seed are essential for the agricultural and economic development of Pakistan. [6] articulated that, in the context of agricultural advancement and technology adoption, the Government of Pakistan took several initiatives to steer development efforts in the right direction. Through technology transfer programs, the government transferred agricultural technology to farmers. The Government of Pakistan also initiated farmer field schools (FFSs), as without farmers' involvement these agricultural technologies remain unproductive.

Agricultural development is not possible without upgrading farm managers' managerial abilities, which can be acquired through training and formal and informal education. Furthermore, a farm manager's education plays a dynamic role in enhancing the productivity and efficiency of the factors of production. An educated farmer has a greater ability to select the right quantity of inputs according to crop requirements. According to [7], education is a key factor in enhancing farm production, and higher education results in higher returns for farm managers, especially in a resourceful agricultural system. According to [8], the level of education has a highly significant and positive relationship with farm production.

The major objective of the present study is to examine the allocative efficiency of the capsicum cropping system under tunnels. This analysis would be helpful in finding out how good Pakistani farmers are at adopting new farm technologies. Even where farmers have access to modern farm technologies, mismanagement and a lack of skills in utilizing accessible high-tech inputs can result in dismal growth of the farm sector. Hence, the current study evaluates the allocative efficiency of sampled farmers who opted to cultivate the capsicum crop in a controlled environment using tunnels.

Materials and Methods

Data were collected from the Faisalabad Division of Pakistan using a questionnaire on off-season cropping systems under tunnels. Around 1,000 growers were cultivating the capsicum crop under tunnels in the study area. Out of these 1,000 growers, 150 farmers who were cultivating capsicum under tunnels were interviewed, selected using a purposive sampling technique.
Most of the questions were about socio-economic factors and production and cost practices under the tunnels. A detailed questionnaire was developed, and pre-pilot and pilot surveys were conducted before data collection.

Economic theory predicts that the price of output corresponds to the minimum cost of production at a given set of input prices and technology. If buyers and sellers act in a competitive manner, the cost function becomes C(y_i, p_i), representing the minimized cost of producing y_i at input prices p_i. Frontier models use not only a technological frontier but also a reference technology. Cost inefficiency occurs if cost is not minimized with respect to output: if efficiency has a value of one, the farm is on the frontier, whereas a value greater than one indicates a farm above the frontier, implying scope for a greater decrease in cost. Two major techniques are generally employed in productivity and efficiency analysis of the agricultural sector, namely the Stochastic Frontier Approach (SFA) and the Data Envelopment Approach (DEA). The SFA is a parametric technique based on regression analysis of output on inputs. The DEA is a non-parametric approach originating from mathematical programming of piecewise linear functions. The present study used the SFA technique, following the work of Coelli et al.

The model for the cost frontier of the bitter gourd-capsicum cropping system in its general form can be written with E_i denoting the observed cost of each farm, P_ni the price of the n-th input, and Q_mi the output of each farm. In order to estimate the cost frontier of the bitter gourd-capsicum cropping system, it is necessary to respect the properties of the cost minimization solution: homogeneity of degree one, non-negativity, concavity, and non-decreasingness in farm input prices and output. To estimate the bitter gourd-capsicum cropping system cost frontier model, this study used the Cobb-Douglas (C-D) functional form. The Translog functional form was considered first but, due to the problem of extreme multicollinearity, it was not desirable for the data at hand; hence, the C-D cost frontier model was finally selected after estimating all required diagnostics regarding functional form. Here, the term v_i is a random and symmetric variable representing statistical noise and estimation errors in the model, while the term u_i represents the inefficiency component, which is non-negative. In order to satisfy the above-mentioned properties of cost minimization, the β_n coefficients should be non-negative. After imposing the homogeneity constraint given in equation no. (4) on the bitter gourd-capsicum cost function and substituting it into equation no. (5), the normalized cost frontier is obtained: to fulfil the necessary condition of the normalized cost frontier function, each input price variable in the bitter gourd-capsicum cost function has been divided by the NPK price. Here, C_i is the total cost of production of the capsicum crop; p_ni are the input prices, e.g., the price of capsicum seed, the price of fertilizer (NPK), the price of pesticides, the price/cost of labor, the price of farmyard manure, land preparation cost, and total cost of tunnels; and y_mi is the total output of capsicum.
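The equations referenced above were not reproduced in this version of the text. As an illustration only, a normalized Cobb-Douglas stochastic cost frontier consistent with the variable definitions given here (observed cost C_i, input prices P_ni normalized by the NPK price, output y_mi, noise v_i, and non-negative inefficiency u_i) might take the following form; the exact specification and the numbering of equations (4)-(6) used by the authors may differ, and the half-normal assumption on u_i is a common convention rather than something stated in the text.

```latex
% Illustrative sketch of a normalized Cobb-Douglas stochastic cost frontier
% (not the authors' exact equations (4)-(6)).
\begin{align}
\ln\!\left(\frac{C_i}{P_{\mathrm{NPK},i}}\right)
  &= \beta_0
   + \sum_{n}\beta_n \ln\!\left(\frac{P_{ni}}{P_{\mathrm{NPK},i}}\right)
   + \beta_y \ln y_{mi}
   + v_i + u_i,
   \qquad \beta_n \ge 0,\\
v_i &\sim N\!\left(0,\sigma_v^{2}\right)\ \text{(statistical noise)},
   \qquad
 u_i \sim N^{+}\!\left(0,\sigma_u^{2}\right)\ \text{(non-negative cost inefficiency)}.
\end{align}
```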
This study also estimated an inefficiency model to find out the socio-economic factors that might affect the technical efficiency of the capsicum production system. In equation no. 6, U_i represents the technical inefficiency in the capsicum production function.

Result and Discussion

Before estimating the capsicum cost frontier model, hypothesis testing was applied to the cost frontier model of the capsicum cropping system. The null hypothesis implies that the non-normalized capsicum cost frontier model is suitable for the study, and the LR statistic is used to check the validity of the cost model. The calculated value of the likelihood ratio test statistic is 149.8, which is greater than the tabulated value of 3.84; hence, the null hypothesis of the non-normalized capsicum model is rejected. Thus, the normalized cost frontier model is estimated in the capsicum cost frontier analysis.

Capsicum C-D Stochastic Cost Frontier Analysis

Efficiency is a multidimensional concept. It encompasses specifically the concepts of production, cost, price, profit, etc. The production frontier is the maximum quantity of output obtained from a given amount of inputs, while the cost frontier shows the minimum cost of producing the output for a given set of input prices. In this study, the Cobb-Douglas functional form was used. The capsicum cost frontier function has a gamma coefficient of around 0.99, significant at the 1% level. This implies that 99% of the variation in the total production cost of the capsicum cropping system is due to differences in cost efficiencies, as shown by the gamma value. This result also suggests that farmers in the data set are cost inefficient, meaning that the farmers in the sample have not yet acquired the essential skills enabling them to select inputs in optimal combinations. The gamma value shows that capsicum growers could achieve the current output at lower cost, which raises concern about whether inputs are used in optimal proportions given relative input prices.

The coefficient of capsicum output is negative and significant at the 5% level. This negative relationship between output and total cost of production indicates increasing returns to scale in this study. This result points out that farmers can minimize production cost by using farm inputs in optimal proportions given the input prices; it also illustrates that an increase in cost efficiency among capsicum farmers would result in higher farm profit in the study area. The coefficient of cost of tunnels is positive and highly significant, highlighting its importance in the cost structure of capsicum cropping system farms in the study area. The results revealed that as the number of tunnels increases, the cost of production of the capsicum cropping system correspondingly increases. If the cost of a tunnel exceeds the price of output, then increasing the number of tunnels on capsicum farms will increase the cost of production. The coefficient of seed cost is negative and statistically significant at the 5% level. Likewise, farmers using improved seed varieties have a higher probability of being cost efficient compared to farmers who use traditional seed varieties, other factors being held constant.
This finding can be explained by the fact that the use of improved seeds translates into an improvement in technical efficiency and, as a result, an improvement in cost and economic efficiency. According to [9] and [10], an inadequate supply of improved seed varieties results in lower farm productivity and a higher cost of production. The coefficient of the cost of pesticide spray is positive and insignificant; thus, the relationship between pesticide spray and the total cost of production is unclear in the capsicum cropping system. This might be because farmers are not well informed about the appropriate application of pesticides when the capsicum crop is cultivated under a controlled environment. The coefficient of labor cost is positive and highly significant at the 1% level, indicating that the number of labor hours has a significant effect on the cost function of the sampled capsicum farms. This result is in accordance with the studies of [11], [12], and [13]. Land preparation procedures play a vital role in capsicum tunnel farming. The coefficient of land preparation cost is positive and significant at the 1% level, indicating that an increase in the number of land preparation practices results in a corresponding increase in the cost of production of the capsicum cropping system. The effect of farmyard manure (FYM) on the total cost of production of the capsicum cropping system is positive and highly significant at the 1% level of probability, suggesting that an increase in the application of farmyard manure leads to a subsequent increase in the cost of production of the capsicum cropping system farms included in the data set. Table 2 presents the factors that may influence the allocative efficiency of capsicum cropping system farms in the study area.

Cost Inefficiency Model of Capsicum Cropping System

In the capsicum cost inefficiency model, the coefficient of education is positive and significant. This is contradictory to the findings of [14], [15], [16], and [17], although it is in line with the findings of [18]. The reason behind the positive and significant sign may be that farmers prefer non-formal education over formal education, or that most farmers depend upon their years of experience, rather than formal education, to achieve allocative efficiency. On the other hand, education plays a fundamental role in the application of inputs and in farm management; it may be that education affects technical efficiency more than allocative efficiency. The age of the farmer and inefficiency have, in most cases, a positive relationship. The coefficient of age is another positive and highly significant variable, implying that efficiency increases with the age of the farmer. This is contradictory to the findings of [15], [11], [17], and [16], which reported that the older a farmer becomes, the more difficulty he or she finds in combining the available technology, as older farmers are in most cases more hesitant than younger farmers to adopt advanced farm technologies [21,22]. The studies of [14] and [19], however, reported findings similar to the present study. Most of the farmers in the capsicum cropping system are middle aged, but only up to a certain threshold, after which this probability starts decreasing, pointing towards the fact that experience in farming plays an important role in the reduction of production costs. The coefficient of access to credit holds a negative sign.
Agricultural credit is important as it influences farming productivity and increases input utilization efficiency. Credit support increases allocative efficiency at given input and output prices and thereby increases productivity. However, the coefficient is insignificant; this result is in line with the finding of [20]. Hence, the influence of access to credit on the allocative efficiency of the capsicum cropping system in the study area remains quite uncertain. Furthermore, the coefficients of tenant and owner-cum-tenant are positive and highly significant at the 1% level of significance. These results show that farmers who own their farms have a higher probability of being allocatively efficient than tenants and owner-cum-tenants in the study area, revealing that owning a farm enables the farmer to fully utilize the production capacities of the inputs used and to avoid their under-utilization. The tractor ownership coefficient revealed a negative and significant relationship with allocative inefficiency. This result is according to expectation: ownership of a tractor enables farmers to carry out farm practices on time, as needed, during the cultivation and harvesting seasons. Tractor-owning farms not only save the cost of hiring a tractor but can also carry out land preparation and other farm operations on fellow farmers' farms for a rental fee. Thus, tractor ownership positively influences the allocative efficiency of capsicum cropping system farms under tunnels. The coefficient of operational holding under the capsicum cropping system is positive and statistically significant at the 1% level. This result suggests that farmers should keep in mind the demand and supply forces for specific crops: increasing the operational holding for a specific crop without prior knowledge of crop demand results in a lower crop price, lower profitability, and consequently a higher cost of production. Hence, in the case of the capsicum cropping system, increasing the operational holding negatively affects the allocative efficiency of the study area farms. The coefficient of the number of tunnels per acre is negative and statistically significant at the 1% level. This result implies that as the number of tunnels increases, the allocative efficiency of capsicum farms in the study area correspondingly increases. If the cost per tunnel is more than the price of capsicum output, then increasing the number of tunnels will instead increase the cost of production on capsicum farms [23,24]. The present study revealed a negative relationship between the number of tunnels and allocative inefficiency; hence, as the number of tunnels increases, allocative efficiency in the capsicum cropping system improves.

Allocative Efficiency Analysis of Capsicum Cropping System

Figure 1 displays the estimated allocative efficiency of the capsicum cropping system farms under tunnels. The allocative efficiency frequency distribution ranges from below 0.20 to above 0.90. In the case of the capsicum cropping system, around 44% of the allocative efficiency scores lie within the range of 0.20 to 0.60, while around 18% and 35% of capsicum farms' allocative efficiency scores range from 0.61 to 0.80 and from 0.81 to above 0.90, respectively. The mean allocative efficiency is about 0.65. Hence, the allocative efficiency analysis points to the presence of allocative inefficiency in the capsicum cropping system.
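The link between the reported efficiency scores and the implied scope for cost reduction can be illustrated with a brief sketch. The farm-level scores below are simulated placeholders, not the study's estimates; only the mean of roughly 0.65 is meant to mirror the reported result.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical farm-level allocative (cost) efficiency scores centered near 0.65,
# simulated for illustration only.
scores = np.clip(rng.normal(loc=0.65, scale=0.18, size=150), 0.10, 0.99)

mean_efficiency = scores.mean()
potential_saving = 1.0 - mean_efficiency  # share of current cost avoidable at the frontier

bands = {
    "up to 0.60": scores <= 0.60,
    "0.61-0.80": (scores > 0.60) & (scores <= 0.80),
    "0.81 and above": scores > 0.80,
}
for label, mask in bands.items():
    print(f"{label}: {mask.mean():.0%} of farms")

print(f"Mean allocative efficiency: {mean_efficiency:.2f}")
print(f"Implied average cost-reduction potential: {potential_saving:.0%}")
```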
The allocative efficiency analysis revealed that, specifically through input reallocation, capsicum farms could decrease input use by around 35% relative to their costs on the production frontier. If input costs can be decreased by 25 to 35%, the farm profits of capsicum growers will also increase. This result indicates that farmers could minimize input use and the cost of production through improved management practices and efficient utilization of inputs. Most of the farmers in the study area operate tunnel farming having been trained by fellow farmers rather than by any agricultural institute. Due to the mismanagement of farm resources and inputs, the cost of growing the capsicum crop has increased by around 35%. Hence, by improving farm management practices, skills, and formal education, farmers can obtain the same level of capsicum output while reducing the cost of production by 35%. The excessive use of farm inputs such as fertilizers negatively affects soil fertility, and the unnecessary use of pesticide sprays impacts crop quality and increases the cost of production. Thus, the removal of farm mismanagement practices and the reallocation of farm resources will enhance capsicum output to the optimum level. Prior studies [25,26] that estimated the technical efficiency of the capsicum cropping system and other crops found technical inefficiencies in the production processes of different cropping systems. Hence, farmers need to apply farm inputs in a way that yields the greatest benefit and profitability: applying minimum farm resources on the one hand, and reducing the cost of production while enhancing production on the other.

Conclusion

A major reason why capsicum farmers become discouraged after opting for the latest technologies is that, in most cases, they have only slight knowledge regarding the use of the selected farm technologies. Most of the farmers in the study area take up high-tech technologies and tunnel farming by following fellow farmers. The irony is that if farmers are emulating or following farmers who themselves do not have complete knowledge of the adopted technology, then that half-learned knowledge will be transferred to all the farmers of that specific area and will consequently have a negative impact on the efficacy of the adopted farm technology. Most farmers prefer the suggestions of fellow farmers over meeting with extension centre staff or agricultural experts from agricultural institutions. Most of the time, these preferences lead to erroneous information and inappropriate selection of farm inputs and technologies, which ultimately leads to massive farm losses in the study area. Thus, each time farmers are going to select a new technology, public and private research institutes should provide their best possible expertise as frontline farm support. Adopting high-tech technology, and becoming skilled enough to get the best out of it, is an art that can be achieved through continuous learning under the guidance of well-trained agricultural experts and research institutes. Agricultural policy should recognize efficiency gains as a source of improved productivity and profitability, and thus of poverty reduction, for a sizeable part of Pakistan's agricultural sector.
2021-06-06T08:46:36.555Z
2021-05-07T00:00:00.000
{ "year": 2021, "sha1": "878e03fd962e5e098950fa9bca143e6c60cd9515", "oa_license": null, "oa_url": "https://doi.org/10.37394/232015.2021.17.48", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "878e03fd962e5e098950fa9bca143e6c60cd9515", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics" ] }
15935322
pes2o/s2orc
v3-fos-license
Apical ballooning and cardiomyopathy in a melanoma patient treated with ipilimumab: a case of takotsubo-like syndrome
Although animal studies have shown that the immunomodulator ipilimumab causes inflammation of the myocardium, clinically significant myocarditis has been observed only infrequently. We report a case of suspected acute coronary syndrome without a culprit lesion on cardiac angiography and a takotsubo cardiomyopathy (TC)-like appearance on echocardiography in a patient with metastatic melanoma who received four standard doses of ipilimumab. Apical ballooning, hyperdynamic basal wall motion, systolic anterior motion of the mitral valve, and associated severe left ventricular outflow tract obstruction were present. Restaging with positron emission tomography-computed tomography done soon after discharge incidentally revealed increased fludeoxyglucose uptake in the apex. This case illustrates that a TC-like syndrome might be caused by autoimmune myocarditis after ipilimumab treatment, although this was not biopsy-confirmed. Post-marketing surveillance should capture cardiac events occurring in patients treated with ipilimumab to better document and clarify a relationship to the drug, and biopsies should be considered. Physicians utilizing this novel agent should be aware of the potential for immune-related adverse events.
Background
Derived from the Japanese word for an octopus pot, typical takotsubo cardiomyopathy (TC) presents clinically indistinguishable from acute coronary syndrome, but systolic apical ballooning of a hypo- or akinetic left ventricular (LV) apex with hyperdynamic basal walls is present, reflecting the deleterious effects of a catecholamine surge. In 90% of cases, a clear emotional or physical stressor precedes the presentation, hence the term stress-induced cardiomyopathy [1]. Acutely increased adrenergic activity from cocaine, pheochromocytoma, subarachnoid hemorrhage, or trauma can precipitate TC via altered vascular tone and/or direct toxicity. Patients experience typical chest pain and may manifest heart failure or shock. Troponin release, often small, and anterior ST-segment elevations are usually present. Angiography, per definition, should fail to reveal a culprit lesion. In 16% of cases there is a pressure gradient across a narrowed LV outflow tract, often associated with systolic anterior motion of the mitral valve (SAM). We report a case of "takotsubo cardiomyopathy-like" myocardial dysfunction after ipilimumab treatment for metastatic malignant melanoma.
Case presentation
An 83-year-old woman with hypertension was diagnosed with biopsy-proven vaginal melanoma four months prior to admission. PET-CT showed invasive loco-regional disease and a three-millimeter nodule in the left upper lung lobe. Attempted resection was complicated by positive margins. Four cycles of ipilimumab (3 mg/kg every three weeks), last dosed three weeks prior to hospitalization, were administered. Radiotherapy was deferred. The patient had developed pruritus, lethargy, and malaise after the third dose and diarrhea after the fourth dose of ipilimumab. These symptoms responded to short courses of prednisone. Prior to admission, the patient experienced two weeks of fairly continuous, worsening substernal chest pain and progressive dyspnea. On admission, she denied acute emotional stress and illicit drug or herbal medication use. Electrocardiography revealed sinus tachycardia at 110/minute and 1-millimeter ST elevations in leads I, V2, and V3.
The initial troponin-I level was 0.98 (normal <0.04) ng/ml, thyroid-stimulating hormone was measured at 2.6 (0.4-4.0) mIU/L, and the erythrocyte sedimentation rate was 65 (<20) mm/hour. A chest radiograph revealed numerous round bilateral lung masses. Transthoracic echocardiography showed an akinetic apex, a hyperkinetic base and septum, an ejection fraction of 50%, and LV outflow tract obstruction with a peak gradient of 100 mmHg with SAM. Emergent cardiac angiography demonstrated an isolated 30% proximal left anterior descending artery stenosis without evidence of a thrombus. No intervention was performed. Figure 1 displays the ventriculogram. The patient developed transient supraventricular and ventricular tachycardia. A beta-blocker was started. On hospital day 3, the patient was asymptomatic and was transferred to acute cardiac rehabilitation. 18F-Fludeoxyglucose (FDG) PET-CT performed two days later revealed focal FDG uptake in the patient's ballooned LV apex (Figure 2).
Discussion and conclusion
Drug-induced TC has been associated with direct-acting sympathomimetic xenobiotics, causing myocardial dysfunction either directly, due to free radical formation and apoptosis, or via alterations in coronary vasomotion; atropine and adrenergic reuptake inhibitors may do the same. Chemotherapeutics and monoclonal antibodies potentially implicated have included 5-fluorouracil, rituximab, and vascular endothelial growth factor antagonists. Postulated mechanisms include direct myocardial ischemia due to coronary vasospasm, toxic myocarditis from impurities, upregulation of transforming growth factors stimulating myocytal reticulin fiber growth, and increased inflammatory cytokine levels [2,3]. Ipilimumab, a monoclonal antibody directed against cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4), leads to activated T-lymphocyte proliferation and results in prolonged overall survival in metastatic melanoma [4]. Almost three quarters of patients on ipilimumab experience immune-related adverse events, most commonly rash, pruritus, diarrhea, and colitis. One third of these are severe, life-threatening, or disabling [5]. CTLA-4-deficient mice succumb to myocarditis and pancreatitis characterized by lymphoproliferative infiltrates and granular tissue formation [6,7]. Clinically significant myocarditis has been identified in less than one percent of patients [8]. Although the proposed Mayo Clinic criteria for TC preclude myocarditis [9], a TC-like appearance on echocardiography has been reported in lymphocytic myocarditis without evidence of a viral cause [10], suggesting that TC is a nosographic entity with more than one etiology. In the absence of a known stressor, considering the subacute onset of symptoms, and given the supportive imaging, we hypothesize that autoimmune myocarditis from ipilimumab-related CTLA-4 inactivation could have accounted for our patient's presentation of a "takotsubo cardiomyopathy-like" syndrome. Though the PET-CT was performed for restaging and not protocolled as a cardiac study, the focal FDG uptake in the LV apex, corresponding to the akinetic myocardium, is dramatic. Focal uptake of FDG, representing enhanced metabolic activity, may be due to microvascular or myocyte damage, changes in fatty acid utilization, or a combination [11]. As expected, most non-inflammatory TC cases in the literature assessed with cardiac PET-CT show decreased FDG uptake in the stunned areas [12,13].
However, a recently reported case linked increased FDG uptake with severely decreased fatty acid metabolism in an impaired, inflamed myocardium [14]. When done early, a positive PET-CT has 100% specificity compared with endomyocardial biopsy for acute myocarditis [15]. However, since no biopsy was taken, a cardiac metastasis cannot be excluded, although most cases are clinically silent, cardiovascular manifestations are rarely seen in isolation or as a presenting symptom, and cardiac lesions are usually multiple at diagnosis [16]. Medline and Embase searches as well as a query to the manufacturer of ipilimumab failed to reveal similar cases. Therefore, to our knowledge, this might be the first reported case of a "takotsubo cardiomyopathy-like" syndrome in a patient treated with ipilimumab. While no causal relationship can be proven, post-marketing surveillance should capture cases of ipilimumab cardiac toxicity, and physicians utilizing this novel agent should be aware of this potential immune-related adverse event.
Consent
Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.
2018-04-03T02:20:53.251Z
2015-02-17T00:00:00.000
{ "year": 2015, "sha1": "e616fb6b5267812de408ff26c5bc74435ef5ad8d", "oa_license": "CCBY", "oa_url": "https://jitc.biomedcentral.com/track/pdf/10.1186/s40425-015-0048-2", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fb36d69bc7c312adb9f810fe3bbadb68d3f45d7e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
2993610
pes2o/s2orc
v3-fos-license
Elevated peripheral blood lymphocyte-to-monocyte ratio predicts a favorable prognosis in patients with metastatic nasopharyngeal carcinoma
Abstract
Introduction: Patients with metastatic nasopharyngeal carcinoma (NPC) have variable survival outcomes. We have previously shown that an elevated peripheral blood lymphocyte-to-monocyte ratio (LMR) is associated with an increased metastatic risk in patients with primary NPC. The present study aimed to investigate the prognostic value of pretreatment LMR in a large cohort of metastatic NPC patients. Methods: Clinical data of 672 patients with metastatic NPC diagnosed between January 2003 and December 2009 were analyzed. The peripheral lymphocyte and monocyte counts were retrieved, and LMR was calculated. Receiver operating characteristic (ROC) curve analysis and univariate and multivariate Cox proportional hazards analyses were performed to evaluate the association of LMR with overall survival (OS).
Background
Nasopharyngeal carcinoma (NPC) is a squamous cell carcinoma that occurs in the epithelial lining of the nasopharynx, with a high incidence recorded in South China and Southeast Asia [1,2]. With the increasing application of high-precision radiotherapy, distant failure is expected to become a predominant cause of death from NPC [3,4]. Once metastasis is diagnosed, the overall survival (OS) of patients is typically under 15 months with palliative chemotherapy. Nevertheless, retrospective studies have shown great differences in survival outcomes among patients with different affected anatomic sites and different numbers of metastases [5,6]. For specific subgroups, OS may exceed 10 years [6]. Therefore, a valuable marker to predict prognosis is desirable to facilitate individualized treatments and thus to achieve better outcomes for patients with metastatic NPC. In the last decade, pretreatment peripheral differential leukocytes, such as lymphocytes and monocytes, have been found to be associated with prognosis in various cancers. A high pretreatment lymphocyte count has been determined to be associated with a good prognosis in patients with acute lymphoblastic leukemia [7], metastatic gastric cancer [8], and NPC [9]. A high monocyte count has been found to be a poor independent prognostic factor in patients with diffuse large B-cell lymphoma and metastatic melanoma [10]. An elevated lymphocyte-to-monocyte ratio (LMR) has been reported to be a prognostic factor for clinical outcome in patients with diffuse large B-cell lymphoma and Hodgkin's lymphoma [11]. Recently, we have shown that an elevated LMR is associated with an increased metastatic risk in patients with primary NPC [12]. However, there have been few studies of the prognostic value of LMR in patients with metastatic NPC. Therefore, the current study was designed to analyze the effect of pretreatment LMR on OS in these patients.
Patient selection and data collection
Clinical data of patients with metastatic NPC referred to Sun Yat-sen University Cancer Center (SYSUCC) between January 2003 and December 2009 were reviewed. All of the included patients met the following criteria: 1) pathologically confirmed World Health Organization (WHO) type II or III NPC; 2) radiographically detectable metastatic disease; 3) a Karnofsky Performance Status score of ≥70; and 4) available clinical information and laboratory data at the diagnosis of metastasis.
The exclusion criteria were as follows: 1) patients with a self-reported acute infection or hematologic disorder and 2) those with another type of malignancy. The Union for International Cancer Control/American Joint Committee on Cancer (UICC/AJCC) TNM classification system (6th edition, 2002) was used for staging. This study protocol was approved by the Clinical Ethics Review Board of SYSUCC. As part of the physical examination, peripheral blood was collected before treatment, and both peripheral lymphocytes and monocytes were counted using a Sysmex XE-5000 automated hematology analyzer (Sysmex, Kobe, Japan). The peripheral LMR was calculated as the ratio of the absolute peripheral lymphocyte count to the monocyte count. The serum antibody titers of Epstein-Barr virus (EBV) immunoglobulin A against virus capsid antigen (VCA/IgA) and early antigen (EA/IgA) were detected by enzyme-linked immunosorbent assay [13].
Treatment and follow-up
According to our institutional guidelines for the palliative treatment of metastatic NPC, cisplatin-based systemic chemotherapy was provided to all patients as a basic treatment. Definitive radiotherapy targeting both the primary tumor and its regional lymph nodes (locoregional radiotherapy, lrRT) was administered to patients with metastasis at presentation for local symptomatic relief or as part of a multidisciplinary approach, as previously described [14][15][16]. The evaluation of tumor response to therapy was based on a computed tomography (CT) or magnetic resonance imaging (MRI) scan. After the treatment was completed, the patients were evaluated at 3-month intervals for the first 3 years and every 6 months thereafter or until death. The last follow-up date was December 31, 2013 for all available patients.
Statistical analysis
Statistical analyses were performed using SPSS software (version 16.0, SPSS Inc., Chicago, IL, USA). OS was defined as the period between the first diagnosis of metastatic NPC and death or the last follow-up. Receiver operating characteristic (ROC) curve analysis was performed to select the most appropriate cut-off points for the absolute lymphocyte and monocyte counts as well as LMR to stratify the patients at high risk of malignancy-related death. Univariate and multivariate analyses of clinicopathologic variables were performed using Cox proportional hazards regression models. Actuarial OS was plotted against time using the Kaplan-Meier method, and differences between the survival curves were assessed using the log-rank test. The correlation of LMR with different clinicopathologic characteristics was evaluated by Spearman's rank correlation coefficient (r). The chi-square test was used to analyze differences in proportions. A two-sided P < 0.05 was considered significant.
Multivariate Cox proportional hazards regression analysis of clinicopathologic characteristics
We used a multivariate model to adjust for confounders of the association of LMR with survival. The results showed that a high LMR was an independent predictor of favorable OS (HR = 0.50, 95% CI = 0.41-0.60, P < 0.001) (Table 3, Model 1, including LMR as a variable). In addition, both the absolute lymphocyte and monocyte counts were analyzed for their independence from other covariates using the Cox model (Table 3, Model 2, including lymphocyte and monocyte counts as variables; footnotes as in Table 1). LMR was not included here, considering the multicollinearity between LMR and the absolute lymphocyte and monocyte counts.
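As a point of reference, the analysis pipeline summarized above (computing LMR, selecting a cut-off by ROC analysis, and fitting a multivariate Cox model for OS) can be sketched as follows. The file name and column names (lymphocyte, monocyte, os_months, death, n_stage, liver_met, num_lesions) are assumptions for illustration; they are not taken from the study's dataset, and the sketch uses scikit-learn and lifelines rather than the SPSS software actually used.

```python
# Sketch of the reported workflow: LMR computation, ROC-based cut-off
# selection, and a multivariate Cox proportional hazards model for OS.
# All column and file names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_curve
from lifelines import CoxPHFitter

df = pd.read_csv("metastatic_npc.csv")          # hypothetical data file
df["lmr"] = df["lymphocyte"] / df["monocyte"]   # lymphocyte-to-monocyte ratio

# ROC analysis against malignancy-related death; LMR is negated so that a
# higher score corresponds to higher risk, and the Youden index picks the cut-off.
fpr, tpr, thresholds = roc_curve(df["death"], -df["lmr"])
cutoff = -thresholds[np.argmax(tpr - fpr)]
df["high_lmr"] = (df["lmr"] >= cutoff).astype(int)

# Multivariate Cox model of overall survival (analogous to Model 1 with LMR).
cph = CoxPHFitter()
cph.fit(df[["os_months", "death", "high_lmr", "n_stage", "liver_met", "num_lesions"]],
        duration_col="os_months", event_col="death")
cph.print_summary()   # a hazard ratio below 1 for high_lmr would indicate favorable OS
```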
The results showed that the absolute lymphocyte count was an independent factor for a favorable prognosis (HR = 0.77, 95% CI = 0.64-0.93, P = 0.007), whereas the absolute monocyte count was an independent inferior prognostic factor for patients with metastatic NPC (HR = 1.98, 95% CI = 1.63-2.41, P < 0.001) (Table 3, Model 2). After stratification by T stage, N stage, metastasis at presentation, metastasis after radical therapy, number of metastatic lesions, and metastatic sites, only LMR remained a significant predictor of prognosis (Figs. 3, 4 and 5). Moreover, an advanced N stage, the presence of two or more lesions, and liver metastasis were shown to be independent indicators of short OS (Table 3).
Fig. 2 Kaplan-Meier overall survival (OS) analysis for patients with metastatic NPC. a, the OS rate was higher in the patients with a high absolute lymphocyte count than in those with a low count (P = 0.002). b, the OS rate was lower in the patients with a high absolute monocyte count than in those with a low count (P < 0.001). c, the OS rate was higher in the patients with a high LMR than in those with a low LMR (P < 0.001). LY, lymphocyte; MO, monocyte.
Discussion
In the current study, we demonstrated that an elevated LMR was significantly associated with prolonged OS and was independent of the other variables assessed in predicting the prognosis of patients with metastatic NPC. Moreover, after stratification by T stage, N stage, metastasis at presentation, metastasis after radical therapy, number of metastatic lesions, and metastatic sites, LMR remained a significant predictor of prognosis. There is substantial evidence in advanced cancer that the host systemic immune response is an important independent predictor of outcome and that pre-treatment measures of the systemic inflammatory immune response can be used to independently predict cancer patients' survival [17]. Among many systemic inflammatory measures, the white blood cell (WBC) subset count (the neutrophil count [18] or the neutrophil-to-lymphocyte ratio [19]) is well known as an independent prognostic factor for survival [17]. However, evidence that LMR may have a prognostic role in cancer is limited. Recent reports have indicated that LMR was positively associated with survival outcomes in classical Hodgkin's lymphoma [20], diffuse large B-cell lymphoma [21], metastatic non-small cell lung cancer [22], and NPC [12]. In the present study, we evaluated LMR as a prognostic indicator in 672 patients with metastatic NPC. Some of our results were consistent with previous findings. We found that an elevated LMR not only had a strong correlation with longer survival but also was an independent prognostic factor for survival, as determined by multivariate analysis using the Cox model. However, some of our results differed from those reported by Jin et al. [23], who showed that the absolute lymphocyte count was not correlated with OS. In the current study, after adjusting for confounders, the absolute lymphocyte count remained an independent prognostic factor for OS. The discordance between these two studies may be partially due to the different sample sizes: 672 patients were recruited in this study compared with 229 in the study by Jin et al. [23]. The mechanisms underlying the relationship between LMR and the prognosis of cancer patients remain unclear, which may be partially explained by the link between chronic inflammation and cancers [24][25][26].
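The Kaplan-Meier comparisons by LMR group summarized in Figures 2 to 5 follow a standard survival-analysis pattern, sketched below with the lifelines library. The small DataFrame is entirely hypothetical; only the pattern of the analysis (KM curves for high versus low LMR plus a log-rank test) mirrors the paper.

```python
# Hypothetical Kaplan-Meier / log-rank comparison of high- vs low-LMR groups.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "os_months": [4, 7, 9, 12, 15, 20, 26, 33, 40, 55],   # made-up follow-up times
    "death":     [1, 1, 1, 1, 0, 1, 0, 1, 0, 0],          # 1 = death observed
    "high_lmr":  [0, 0, 0, 1, 0, 1, 1, 0, 1, 1],          # hypothetical grouping
})
high, low = df[df["high_lmr"] == 1], df[df["high_lmr"] == 0]

kmf = KaplanMeierFitter()
ax = kmf.fit(high["os_months"], high["death"], label="High LMR").plot_survival_function()
kmf.fit(low["os_months"], low["death"], label="Low LMR").plot_survival_function(ax=ax)

result = logrank_test(high["os_months"], low["os_months"],
                      event_observed_A=high["death"], event_observed_B=low["death"])
print(f"log-rank p-value: {result.p_value:.4f}")
```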
It is a consensus that the adaptive immune system carries out immune surveillance and can eliminate newly formed tumors; however, effective adaptive immune responses are always suppressed in established tumors through several pathways, including the inhibition of dendritic cell differentiation and the activation and infiltration of regulatory T cells and tumor-associated macrophages [24]. Lymphocytes are crucial components of the adaptive immune system, and the presence of tumor-infiltrating lymphocytes has been reported to indicate the generation of an effective antitumor cellular immune response [27]. The peritumoral inflammatory response is thought to reflect the interaction between the tumor and the host. In previous studies, a high lymphocytic infiltrate has been linked with prolonged survival, independent of clinicopathologic characteristics, in breast cancer patients [28]. However, data supporting the association of intratumoral immune cells and the blood-based cells constituting the systemic inflammatory response with OS are sparse.
Fig. 3 Kaplan-Meier OS analysis according to baseline absolute lymphocyte count in patients with metastatic NPC. In the T1-2 subgroup (a), the T3-4 subgroup (b), the N0-1 subgroup (c), the N2-3 subgroup (d), the subgroup with metastasis after radical therapy (f), the subgroup with multiple metastasis lesions (h), the bone metastasis subgroup (i), and the lung metastasis subgroup (k), the OS rates are higher in the patients with a high absolute lymphocyte count than in those with a low count (all P < 0.05). In the subgroup with metastasis at presentation (e), the subgroup with one metastasis lesion (g), the liver metastasis subgroup (j), and the extraregional lymph node metastasis subgroup (l), there is no significant difference between the two curves (all P > 0.05).
Previous studies have demonstrated an association between a low peripheral blood lymphocyte count and short survival in patients with different types of cancer [29,30]. We have previously shown prolonged survival of primary NPC patients with elevated lymphocyte counts compared with those with decreased lymphocyte counts [12]. Monocytes are a subset of circulating white blood cells that can further differentiate into a range of tissue macrophages and dendritic cells [31]. It has been reported that monocytes secrete various proinflammatory cytokines, such as interleukin (IL)-1, IL-6, IL-10, and tumor necrosis factor-α (TNF-α), which have been associated with short survival and poor prognosis in patients with malignancy [32,33]. Moreover, monocytes release monocyte chemo-attractant protein-1 (MCP-1) upon stimulation and mediate tumor-associated macrophage infiltration in solid tumors, which have been shown to produce a variety of chemokines, such as transforming growth factor-α (TGF-α), TNF-α, IL-1, and IL-6, to promote tumorigenesis, angiogenesis, and distant metastasis of malignant tumors [34,35]. As a consequence, a high absolute monocyte count may indicate poor prognosis. Our findings also showed that a high monocyte count was significantly associated with short survival in patients with metastatic NPC. LMR, which is defined as the absolute lymphocyte count divided by the absolute monocyte count, may reflect the diverse effects of monocytes and lymphocytes on tumor progression. Previous studies have demonstrated that normal human monocytes suppress either the phytohemagglutinin- or antigen-induced lymphocyte proliferative response when the monocyte-to-lymphocyte ratio is increased [36].
In the current study, although the lymphocyte count or monocyte count alone could predict the survival outcomes in patients with metastatic NPC, LMR outperformed them in this regard. Our multivariate Cox proportional hazards analysis showed that LMR and the absolute lymphocyte and monocyte counts were independent prognostic factors. However, after stratification analysis, only LMR remained a significant predictor of prognosis. In addition, consistent with the findings of Pan et al. [37], we found that an advanced N stage, the presence of two or more metastatic lesions, and liver metastasis were independent prognostic factors for short OS, whereas lung or bone metastases were not associated with OS. We postulate that these findings may be associated with a unique biological behavior of NPC in the liver metastasis group.
Fig. 4 Kaplan-Meier OS analysis according to baseline absolute monocyte count in patients with metastatic NPC. In the T1-2 subgroup (a), the T3-4 subgroup (b), the subgroup with metastasis at presentation (e), the subgroup with multiple metastasis lesions (h), the bone metastasis subgroup (i), the liver metastasis subgroup (j), the lung metastasis subgroup (k), and the extraregional lymph node metastasis subgroup (l), the OS rates are lower in the patients with a high absolute monocyte count than in those with a low count (all P < 0.05). In the N0-1 subgroup (c), the N2-3 subgroup (d), and the subgroup with metastasis after radical therapy (f), the OS rate is higher in the patients with a high absolute monocyte count than in those with a low count (all P < 0.001). In the subgroup with one metastasis lesion (g), there is no significant difference between the two curves (P = 0.070).
Conclusion
Our results show that LMR can function as an independent prognostic factor for patients with metastatic NPC. Moreover, this ratio can be easily determined with routine blood counts and is readily applicable clinically. We acknowledge that our findings are limited by the retrospective and single-center nature of this study; thus, further independent validation of our findings is warranted.
Fig. 5 Kaplan-Meier OS analysis according to baseline LMR in patients with metastatic NPC. In the T1-2 subgroup (a), the T3-4 subgroup (b), the N0-1 subgroup (c), the N2-3 subgroup (d), the subgroup with metastasis at presentation (e), the subgroup with metastasis after radical therapy (f), the subgroup with one metastasis lesion (g), the subgroup with multiple metastasis lesions (h), the bone metastasis subgroup (i), the liver metastasis subgroup (j), the lung metastasis subgroup (k), and the extraregional lymph node metastasis subgroup (l), the OS rates are higher in the patients with a high LMR than in those with a low LMR (all P < 0.01).
2018-05-08T18:24:11.442Z
0001-01-01T00:00:00.000
{ "year": 2015, "sha1": "9de4e505b41a258f2fea4f8c0cee22cd46a1492e", "oa_license": "CCBY", "oa_url": "https://cancercommun.biomedcentral.com/track/pdf/10.1186/s40880-015-0025-7", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9de4e505b41a258f2fea4f8c0cee22cd46a1492e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
214408315
pes2o/s2orc
v3-fos-license
Enhancing the Growth and Yield of Lettuce (Lactuca sativa L.) in a Hydroponic System Using Magnetized Irrigation Water
The application of magnetic technology to agricultural production is considered a new breakthrough for enhancing food production. However, studies on its application to the hydroponic production of high-value crops are limited. Hence, the present study assessed the effect of magnetically treated water on the growth and yield parameters of lettuce in a hydroponic system. A magnetic device with different numbers of permanent magnets was used to magnetize the irrigation water in the hydroponic system. Uniform and healthy seedlings were transplanted and arranged in a completely randomized design. Magnetically treated water enhanced the growth and yield parameters of lettuce such as weekly height, leaf area, fresh weight, and root length. The height, leaf area, fresh weight, and root length increased by up to 44.30%, 199.93%, 50.72%, and 37.00%, respectively, compared with the control. The results reveal that magnetic treatment of water in a hydroponic system has the potential to increase the growth of lettuce and consequently its yield.
Introduction
In recent years, concerns about food security have grown, accompanied by abrupt increases in food prices, focusing attention on rising food demand and on how it will be addressed (Linehan, Thorpe, Andrews, Kim, & Beaini, 2012). As projected by the Food and Agriculture Organization of the United Nations (2009), by 2050 the global population will require more than 70 percent more food than is produced today (Linehan et al., 2012). At the same time, it is projected that fifty percent (50%) of the land fit for growing crops worldwide will become unworkable for crop production (United Nations, 2017). Accordingly, food production has to be increased by 110% to address this demand (Gashgari, Alharbi, Mughrbil, Jan, & Glolam, 2018). The Philippine government, along with other nations, adopted the 2030 Agenda for Sustainable Development in 2015 with the common goals of ending hunger, achieving food security, improving nutrition, and promoting sustainable agriculture (Briones, Antonio, Habito, Porio, & Songco, 2017). Because of the growing population, specifically in the Philippines, it is vital to increase crop production. Moreover, the area of cultivated land is decreasing, and part of it is being used to construct infrastructure for the growing population. Furthermore, natural disasters exacerbate this situation, and the unrestricted application of chemicals for crop growing has decreased the fertility and quality of the soil (Barman, Hasan, Islam, & Bauu, 2016), resulting in water scarcity and low yields. The growing need for food in the Philippines puts pressure on land use, forests, and natural resources, and this degradation is attributed to food production (Gabriel & Mangahas, 2017). To sustain human needs, the development of new food production technology is essential (Pascual, Lorenzo, & Gabriel, 2018), and other farming methods should be used to prevent a food crisis in the coming years (Gashgari et al., 2018). Food production could be increased many-fold through the application of technological interventions (Agrawal & Jacob, 2010).
Thus, developing new approaches that decrease the consumption of water and soil for crop production is of significant importance. This has led researchers to develop approaches such as soil-less cultivation systems, commonly known as hydroponics, and the application of magnetized water in agriculture. Hydroponics is a culture technique in which plants grow without soil, using a liquid nutrient solution. Hydroponics is a viable method of producing different kinds of vegetables (lettuce, tomatoes, cucumbers, bell peppers, strawberries, honeydew melons, celery, Mediterranean and Asian herbs, and Asian greens) along with ornamental and foliage plants (Dunn, 2013; Sace & Estigoy, 2015). A great number of crops, vegetables, and other plants can be grown through the hydroponic culture system. The quality of produce and the palatability and nutritive value of the end products are generally higher compared with usual soil-based farming (Barman et al., 2016). The yields of crops grown in a hydroponic system are significantly higher than those of crops grown in soil (Sace & Natividad, Jr., 2015). This cultivation technique is sustainable because it is cost-efficient, clean, and eco-friendly. In addition, it is widely accepted all over the world, in developed and developing countries alike. Reviews of studies related to hydroponics by Alatore-Cobos et al. (2014); Ferguson, Saliga III and Omay (2014); Pascual et al. (2018); Gashgari et al. (2018); Sardare and Admane (2013); and Sace and Estigoy (2015) showed that the technology can increase crop productivity and is promising for meeting increasing food demand under scarce water resources and decreasing arable land. Further, the sensory evaluation of hydroponically grown lettuce is comparable with that of conventionally grown lettuce (Murphy, Zhang, Nakamura, & Omaye, 2011). In the Philippines, despite the problems caused by high temperature and low relative humidity, planting lettuce in a re-circulating hydroponic system is still profitable, as the yield of crops grown hydroponically is three to four times higher than that of crops grown in soil (Sace & Natividad, Jr., 2015). Moreover, magnetic water treatment (MWT) techniques have shown potential as a promising technology in different areas, especially agriculture (Ali & Samaneh, 2017). The application of magnetized-water technology is regarded as a promising innovative technique to enhance crop water-use efficiency and productivity. The utilization of magnetized water would enable greater and better-quality agricultural production (Hozayn & Amir Mohamed Saeed Abdul Qados, 2014). Magnetized water can be obtained by passing water through a strong permanent magnet mounted in or on a feed conduit pipeline (Mostafazadeh-Fard, Khoshravesh, Mousavi, & Kiani, 2011). The literature explains that the structural arrangement of water is modified when it passes through a magnetic field, resulting in increased intercellular movement (Scaloppi, 2008). Magnetic treatment changes hydrogen bonding and increases the mobility of ions in solution, thus reducing EC and TDS and increasing the pH of solutions (Surendran, Sandeep, & Joseph, 2016). This permits the restoration of a natural water structure and enhances water's capability to dissolve and transport minerals, resulting in more nutrients being absorbed from the water (Hachicha, Kahlaoui, Khamassi, N. Misle, & Jouzdan, 2018; Grewal & Maheshwari, 2011; Mohamed & Ebead, 2013; Eitken & Turan, 2004).
This process may result in higher nutrient uptake, enhancing the physiological processes in crop production (Scaloppi, 2008). Other researchers have reported changes in the physical and chemical properties of water, such as hydrogen bonding, conductivity, polarity, refractive index, surface tension, pH, and solubility of salts, when water is exposed to a magnetic field (Chang & Weng, 2008). These changes might activate hormones and enzymes more quickly during the growth process, which may result in improved mobilization and transport of nutrients, stimulate biological activity in plants, and consequently improve growth and yield (Maheshwari & Grewal, 2009; Surendran et al., 2016). Various studies have claimed beneficial effects of magnetized water in many farming situations. According to the study conducted by Hozayn et al. (2010), the yields of beans, mung bean, and groundnut increased by between 11 and 47% when irrigated with magnetized water. Improvements in germination, plant growth, and flowering and a 26.67% increase in the fruit yield of banana (Patil, 2014), as well as enhanced seedling lengths, fresh and dry weights, and chlorophyll content of turnip (Haq et al., 2016), have been observed with magnetized irrigation water. The crop growth and yield parameters of cowpea and brinjal improved when magnetized water was used (Surendran et al., 2016). The root growth of different plant species is influenced by magnetically treated water (Turker, Temerci, Battal, & Erez, 2017). Hozayn, Abd El Monem, Abd El-Fatah Elwia, and El-Shatar (2014) reported that the application of magnetized water could lead to better crop yield and water productivity in different crops such as wheat, faba bean, chickpea, lentil, canola, and flax. An increase in the foliar area of lettuce grown in a hydroponic system was observed when an electric field was applied (Castañeda, Patiño, M., Patiño, J., Aleman, & Torres, 2016). Significant positive effects of the application of magnetic technology have been revealed in different crop growth parameters (Selim & El-Nady, 2011; Dagoberto, Angel, & Lilita, 2002; Moon & Chung, 2000; Socorro & Carbonell, 2002; Pittman, 1977). The literature thus indicates that there are plausibly positive effects of magnetic treatment of water on plant growth, yield, and other related parameters. Previous studies reported the potential of the two technologies (hydroponic systems and magnetized water) separately as technological interventions for increasing food production many-fold for different crops. Shukla, Wagh, Vaishamapayan, Gaopande, and Vishnoi (2016) conducted a laboratory-scale experiment to determine the effects of a magnetic field on wheatgrass grown in a hydroponic system, using permanent magnets (500 Gauss) positioned directly under the roots of the wheatgrass. Castañeda et al. (2016) conducted a laboratory-scale experiment on the response of lettuce grown in a hydroponic system as affected by an electric field. However, this method of magnetization is not suitable for the long term, as it is affected by the availability of electricity, the number of turns in the coil, and the current density (Hilal, M.H. & Hilal, M.M., 2000; Majid, 2009). This shows that there are limited studies on the use of magnetically treated water on crops grown in a hydroponic system, especially high-value crops.
The present study evaluated the growth and yield response of lettuce grown in a hydroponic production system under field conditions, as affected by magnetically treated irrigation water using permanent magnets.
Preparation of the Nutrient Solution
The nutrient solution was prepared using nineteen litres of water. The three types of fertilizer (master blend 4-18-38, calcium nitrate, and magnesium sulfate) were diluted separately in three (3) one-litre containers according to the manufacturer's recommendation. After being diluted in its separate container, each fertilizer was thoroughly mixed into the nineteen litres of water initially prepared.
Magnetic Field Simulator
The magnetization was done using a permanent magnetic device, as shown in Figure 1 (actual set-up of the experiment). Two types of magnetic devices were used separately to treat the water solution. The magnetic devices consisted of identical magnets but differed in the number of permanent magnets: the first device consisted of four (4) permanent magnets, while the second consisted of six (6) permanent magnets. A 46 cm long PVC pipe, 1.27 cm in diameter, passed through the magnetic devices. A submersible pump circulated the water solution within the system, causing it to pass through the magnetic field many times. The water treated magnetically using the two devices was denoted T1 (4 magnets) and T2 (6 magnets), with T0 (untreated water) as the control. The schematic diagram of the magnetization process used in the study is shown in Figure 2.
Agronomic Practices and Sowing
Lettuce (Lactuca sativa L.) seeds were secured from an agricultural supply outlet. Before sowing, the seeds were inspected manually for any defects that might cause low germination; defective seeds were discarded, and the good seeds were sown in seedling trays (50% garden soil and 50% vermicompost). Two weeks after sowing, transplanting was done using the planting media. The planting medium used in this study to hold the newly transplanted plants was polyurethane foam placed inside a Styrofoam cup, as shown in Figure 3a. After transplanting, the Styrofoam cups were positioned in holes provided in a piece of plywood, which held the cups in position on top of the drum container, as shown in Figure 3b. The base of each Styrofoam cup (punched with a hole) was partially submerged in the water solution inside the drum to facilitate the absorption of nutrients by the roots. The experiment was laid out in a Completely Randomized Design (CRD) with three treatments, including the control, in triplicate.
Response Measurement
After transplanting, the heights of the newly transplanted seedlings were measured and statistically analyzed to ensure the uniformity of the test crop's height before treatment application. Response measurements included the weekly height of the plants from the day after transplanting until harvesting, the leaf area (cm2) of the plants, the fresh weight (g) of the plants, and the length of the roots at harvest. Using a ruler, plant heights were measured from the base of the plant to the tip of the longest leaf. Leaves of the sample plants were taken and measured using standard methods for measuring leaves of irregular shape. The fresh weight was taken by weighing the plants on a digital weighing scale.
The root length (cm) of the plant was taken by measuring the root from the neck of the root to the tip, and the shoot from base to tip, using a ruler (Iqbal et al., 2013).
Statistical Analysis
The analysis of variance (ANOVA) for each parameter was calculated using the statistical analysis tools embedded in Office Excel 2007. The least significant difference (LSD) test at the 1% level of probability was applied to test the differences among means.
Results and Discussion
Growth Characteristics
The lettuce in the experiment prior to harvesting is shown in Figure 4, while Figure 5 shows the height of crops randomly selected from each treatment (Figure 5: a photograph comparing the heights of representative crops from the three treatments). The weekly heights of lettuce grown in the hydroponic system in response to irrigation with magnetically treated water over twenty-eight (28) days are presented in Figure 6. The results revealed that irrigation with magnetically treated water increased the height of the lettuce. Seven days after transplanting (7 DAT), the height of lettuce irrigated with water magnetized with six magnets was significantly higher, by 10.66%, than the height of lettuce irrigated with water magnetized with four magnets and of the control, whereas the height of lettuce irrigated with water treated with four magnets was comparable with that of the control. The height of lettuce irrigated with water magnetized with six magnets remained significantly higher (p < 0.01) than the heights of lettuce irrigated with water magnetized with four magnets and of the control, and a significant difference between the four-magnet treatment and the control was observed after 14 DAT. This trend continued until harvest at 28 DAT. The results of the present study show that magnetized water positively affects the growth of lettuce in a hydroponic system, as manifested by the significant increase in height, and that crop height increased with the number of magnets in the magnetic device. This result could be due to the effects of the magnets on the water, as explained in the literature. The structural arrangement of water is modified when it is magnetized, resulting in increased intercellular movement. Magnetic treatment changes hydrogen bonding and increases the mobility of ions in solution, thus reducing EC and TDS and increasing the pH of solutions (Surendran et al., 2016). This permits the restoration of a natural water structure and enhances water's capability to dissolve and transport minerals (Hachicha et al., 2018), resulting in more nutrients being absorbed from the water (Grewal & Maheshwari, 2011; Mohamed & Ebead, 2013; Eitken & Turan, 2004). This process may result in higher nutrient uptake, enhancing the physiological processes in crop production (Scaloppi, 2008). These changes might activate hormones and enzymes more quickly during the growth process, which may result in improved mobilization and transport of nutrients, stimulate biological activity in plants, and consequently improve growth and yield (Maheshwari & Grewal, 2009; Surendran et al., 2016).
Note: Treatment means in a column that carry the same letter superscript are not significantly different based on the least significant difference (LSD) test at the 1% level of probability.
Figure 6. Weekly heights (cm) of lettuce as affected by magnetically treated water.
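A minimal sketch of the ANOVA and LSD procedure described under Statistical Analysis is given below. The replicate values are hypothetical and the snippet merely illustrates the method; the study itself used the analysis tools embedded in Office Excel 2007.

```python
# One-way ANOVA across T0, T1, T2 followed by an LSD comparison at the 1% level.
# The three lists are made-up replicate measurements, not the study's raw data.
import numpy as np
from scipy import stats

t0 = [22.1, 23.4, 22.8]   # control (hypothetical heights, cm)
t1 = [25.0, 26.2, 25.5]   # 4-magnet treatment (hypothetical)
t2 = [31.8, 32.5, 33.0]   # 6-magnet treatment (hypothetical)

f_stat, p_value = stats.f_oneway(t0, t1, t2)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# LSD = t(alpha/2, df_error) * sqrt(2 * MSE / n), with n replicates per treatment.
groups = [np.array(g) for g in (t0, t1, t2)]
n = len(groups[0])
df_error = sum(len(g) for g in groups) - len(groups)
mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / df_error
lsd = stats.t.ppf(1 - 0.01 / 2, df_error) * np.sqrt(2 * mse / n)
print(f"LSD(0.01) = {lsd:.2f}; means differing by more than this are significantly different")
```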
This effect of magnetized water on the plant height of the test crop in the present study parallels, but exceeds (44.30% higher than the control), the results of Amira, Qodos, and Hozayn (2010), who reported that the height of common flax irrigated with magnetized water increased by 6.01% compared with the control. Hozayn, Abdallha, Abd El-Monem, El-Saady, and Darwish (2016) observed significant increases of 16.75% and 13.85% in the heights of canola and wheat, respectively, after irrigation with magnetized water relative to the control. Several research studies have shown that plant heights increase significantly in response to a magnetic field compared with the control (Yusuf & Ogunlela, 2015; Hozayn et al., 2014; Gudigar, 2013; Selim, 2008; Aladjadjiyan, 2007). An important variable affecting light interception for photosynthesis and carbohydrate production is the leaf area in a canopy. The leaf areas of the lettuce grown in the hydroponic system were significantly increased (p < 0.01) as a result of irrigation with magnetically treated water, as shown in Figure 7. The leaf areas were found to be 1265.6 cm2 and 2098.0 cm2 for T1 and T2, respectively, whereas the control group recorded a leaf area of 699.5 cm2. The percentage increments in leaf area were 80.93% and 199.93% for T1 and T2, respectively, relative to the leaf area of the control (Table 1). This result could be due to the effects of magnetization on the physical and chemical properties of water that positively affect different crop growth parameters (Selim & El-Nady, 2011; Dagoberto et al., 2002; Moon & Chung, 2000; Pittman, 1977). The result conforms to those obtained by El-Yazied, Shalaby, Khalf and El-Satar (2011), who found that irrigation with magnetized water increases the leaf area of plants. Parallel results were reported by De Souza (2005), Novitsky (2001), and Khattab (2000), who found that magnetically treated water improves the leaf size of different seedlings. The fresh weights of lettuce grown in the hydroponic system in response to irrigation with magnetically treated water increased significantly (p < 0.01) compared with the control group, T0 (Figure 8). The fresh weights of the test crop were found to be 285.3 g (14.58% higher than T0) and 375.3 g (50.72% higher than T0) for T1 and T2, respectively, whereas the control showed a fresh weight of 249.0 g. Hozayn et al. (2016) reported a similar effect of magnetized water on the fresh weight of wheat: the fresh weight of wheat irrigated with magnetically treated water increased by 24.52% over the control. Recent studies on magnetically treated irrigation likewise support the results of the present study, e.g., 22.76%, 47.17%, and 53.91% higher fresh weight of turnip with magnetically treated water (Haq et al., 2016). Similar significant effects have been recorded in the yield of snow pea and celery when magnetically treated irrigation water was applied, albeit under controlled-environment conditions (Maheshwari & Grewal, 2009). This result is also consistent with the work of Moussa (2011) and De Souza (2006), who observed that, with pretreatment of seeds and magnetized irrigation water, the leaf, stem, and root growth of tomato and bean increased significantly compared with the control. Further, magnetized water improves the fresh weight of different crops (Aladjadjiyan, 2002; Selim, 2008).
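The percentage increments quoted above follow directly from the treatment and control means reported in the text; the short snippet below reproduces them. The means come from the paper, while the helper function is ours.

```python
# Recompute the reported increments as (treatment - control) / control * 100.
def pct_increase(treatment: float, control: float) -> float:
    return (treatment - control) / control * 100

control_leaf, t1_leaf, t2_leaf = 699.5, 1265.6, 2098.0   # leaf area, cm2
control_fw, t1_fw, t2_fw = 249.0, 285.3, 375.3           # fresh weight, g

print(f"Leaf area:    T1 +{pct_increase(t1_leaf, control_leaf):.2f}%, "
      f"T2 +{pct_increase(t2_leaf, control_leaf):.2f}%")   # ~80.93% and ~199.93%
print(f"Fresh weight: T1 +{pct_increase(t1_fw, control_fw):.2f}%, "
      f"T2 +{pct_increase(t2_fw, control_fw):.2f}%")        # ~14.58% and ~50.72%
```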
The recorded lettuce root lengths were 45.4 cm, 54.1 cm, and 62.2 cm for T0, T1, and T2, respectively, as presented in Figure 9. The percentage increments in root length were 19.16% and 37.00% for T1 and T2, respectively, over the control (Table 1). This result could be due to the positive effects of the magnets on the water when magnetized, as manifested by the better growth of the crops. According to the literature, the changes in water when magnetized improve nutrient mobilization and consequently enhance the physiological processes in crops. Improved root length enables the plants to absorb water and nutrients effectively and efficiently and consequently results in better yield. This result coincides with the work of Haq et al. (2016), which revealed an increase in the root length of turnip of 6.20% to 14.27% with an increasing magnetization dose of the irrigation water. The root length of Triticum aestivum has also been found to increase in response to magnetic treatment of irrigation water. Several studies have established the effectiveness of magnetic fields on the root growth of different plants (Belyavskaya, 2001, 2004; Cuartero & Fernández-Muñoz, 1998; Muraji, 1992; Muraji, Asai, & Tatebe, 1998). The positive effects of magnetized water on root growth are significant because they seem to induce better capacity for water and nutrient uptake, providing greater physical sustenance for the development of shoots. Improved growth and development of the roots could lead to improved root systems throughout the lifespan of the plants (De Souza et al., 2006).
Conclusion
Based on the analysis of this single-run experiment, the results of the present study established beneficial effects of magnetically treated irrigation water on the growth parameters (height, leaf area, fresh weight, and root length) of lettuce grown in a hydroponic system. The magnetic treatment of water in the hydroponic system significantly increased the height, leaf area, fresh weight, and root length of lettuce. Utilization of magnetic technology could enhance crop production with limited resources.
Recommendation
Since the technology still requires further verification, especially in field or large-scale application, the researchers recommend that more trials and replications be done.
Acknowledgment
The researchers acknowledge the assistance of the faculty of the College of Engineering of the Cagayan State University at Sanchez Mira, Cagayan.
2020-01-09T09:10:17.215Z
2019-12-31T00:00:00.000
{ "year": 2019, "sha1": "5cfcfcac0b7be3b0e81c3ffefacd3d060e38b494", "oa_license": "CCBYNC", "oa_url": "https://rmrj.usjr.edu.ph/rmrj/index.php/RMRJ/article/download/772/203", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "2db9fe9f42adb78763f78ddd18b445b5e88740d6", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Mathematics" ] }
210549083
pes2o/s2orc
v3-fos-license
The Cognitive Rationale for the Saying "Put the Heart in the Belly"
Metaphor and metonymy play an important role in the formation of the common saying "put the heart in the belly". The human organs "heart" and "belly" function as container metaphors. Within the cognitive framework of "container and function", the heart and the thoughts and emotions it was believed to produce are linked, so that the salient "heart" stands for the target concept of abstract thought and emotion. Within the cognitive framework of "container and content", the belly and the stomach stand in a metonymic relation, the salient "belly" referring to the non-salient target concept "stomach"; the "belly" thus naturally becomes the most typical container of the human body. The "heart", as abstract thought and emotion, becomes a typical member of what the belly contains; "belly" and "heart" become each other's typical symbols, and the more typical they are, the easier it is to establish a psychological connection between them, so that people almost ignore the apparent illogic of the saying.
Introduction
In daily life, when we want to reassure others, we generally say "put the heart in the belly" rather than "put the heart in the chest". Why? Obviously, the crux of the problem lies in whether "being reassured" tends to be expressed as "put it in the belly" or as "put it in the chest". Searching the People's Network corpus, we found that 41 of the 3,505 examples of "put it in the belly" express the meaning of being reassured, while none of the 278 examples of "put it in the chest" do. What causes this tendency? What is the meaning of "heart" here? What is the relationship between the heart, the chest, and the belly? Why is there a relationship of containing and being contained between "belly" and "heart"? We think this is mainly related to human cognition. As the classical formula puts it, "what is near is taken from the body; what is far is taken from things": people always understand other things in terms of themselves. As the subject of cognition, humans have a cognition that is fundamentally shaped by embodied experience (Gibbs & Gibbs, 2005, p. 3). This experience is grounded in the human mind. Lakoff and Johnson (1999, p. 3) pointed to the embodied mind: concepts, reasoning, and language, which are important components of the mind, are "all based on physical experience and cognitive processing" (Wang, 2007, p. 2). It can be seen that language comes from people's cognition; from the cognitive point of view, we can explore, to a certain extent, the initial cognitive motivation of linguistic symbols. So we try to explain these questions from the perspective of cognitive linguistics. After searching related materials in the Chinese basic ancient corpus and the Chinese full-text search system (fourth edition), we found that a similar expression first appears in ErCheng's Legacy, which records the words of Cheng Hao and Cheng Yi and was compiled by Zhu Xi in the Song Dynasty, for instance:
(1) 尧舜知他几千年,其心至今在,心要在腔子里。(《二程遗书》卷七)
Yao and Shun have been known for thousands of years, and their hearts remain to this day; the heart should stay within the cavity. (ErCheng's Legacy, Number 7)
In Example 1, "heart" does not refer to the human organ but to the thought and spirit of the sages. Zhu Zi Yu Lei, another work compiled by Zhu Xi, contains the same expression, with "heart" denoting the mind, for example:
(2) 问:"'心要在腔子里。'若虑事应物时,心当如何?"曰:"身在此,则心合在此。"(《朱子语类·卷第九十六》)
Q: "The heart should be in the cavity." When one is deliberating on affairs and responding to things, where should the heart be? A: The body is here, so the heart should be here.
(Zhu Zi Yu Lei, Number 96)
In Example 2, "heart" refers to the thinking mind. In the Ming Dynasty, the archetype of the saying "put the heart into the belly" appeared, expressing the meaning of being reassured; here "heart" refers to the worrying, fearful heart. For example:
Tan Shaowen is back now, and that is when he puts his heart in his belly. (Qi Lu Deng, Number 46)
In the Republic of China period, "break the belly to change the heart" or "dig the heart out of the broken belly" referred to the operation of replacing the heart. The heart here is an organ of the human body, which is different from the "heart" of "put the heart in the belly"; it can nonetheless serve as side evidence for "the heart in the belly", indicating the inherent cognition of "heart in the belly" in people's minds, although it is not within the scope of our research. For example:
If it really came to digging out the heart and breaking open the belly, I am afraid he would be reluctant. (A Dream of Red Mansions)
In modern usage, "put the heart in the belly" means to be reassured. It is applied not only in daily dialogue and other spoken language but also in some written language. "Heart" here mainly denotes concern, as in: Eat the products we produce here and put the heart in the belly. In short, the "heart" of "put the heart in the belly" has become abstract and conceptualized. It no longer refers to the human organ but to the abstract thoughts and emotions that organ was believed to produce. The heart and emotion are closely related within the cognitive framework of "container and function": the activation of the conceptual mind is accompanied by the activation of conceptual emotion. The heart is cognitively salient because the heartbeat can be perceived and touched, while emotion cannot. Therefore, people use the salient heart to refer to the target concept of emotion. The meaning initially developed from the superordinate categories of "mind, thought" to subordinate categories such as fearing and concerning. This is in line with the semantic evolution of "put the heart in the belly" from "bringing the mind back" to "being reassured".
The Relationship Between "Heart", "Chest", and "Belly"
Physical Spatial Relations
The heart, chest, and belly are important parts of the human body. Because the heart is in the chest, if we take the chest as a container, the heart is an object in that container; there is a relationship of including and being included between them. The chest is between the neck and the abdomen, so the "heart" included in the chest is also between the neck and the abdomen. The "heart" and the "belly" are thus neighbors in physical space, as shown in Figure 1. That is to say, above the "heart" is the throat and below it is the belly; this physical spatial relationship forms the realistic basis of the common sayings "the heart is about to jump out of the throat" and "put the heart in the belly".
Because the two parts are close in physical space, they are also close in psychological space; they attracted and combined with each other and eventually solidified into a coordinate compound word. As the theory of iconicity states, entities that are close in cognition or conception are also close in time and space in their linguistic forms. When "xiong fu" means trusted subordinate, with "chest" standing for "heart", it shows that the psychological distance between "heart" and "chest" is much smaller than that between "heart" and "belly" in people's cognition, so the semantic correlation of the former pair is much higher than that of the latter. We confirmed this conclusion by testing the semantic correlation between heart, chest, and belly across 1,220 languages of the world using the Cross-Linguistic Colexifications database (CLICS), as shown in Figure 2. The semantic correlations accord with objective reality, just as the physical distance between "heart" and "chest" is smaller than that between "heart" and "belly". Compared with the adjacency of "heart" and "chest", the adjacency of "heart" and "abdomen" in physical space corresponds to a wider psychological distance. The gap created by this spatial distance provides a lower place of accommodation for a heart that beats uneasily and even threatens to jump out of the throat. In addition, if we treat the heart as a point mass concentrated at its center of gravity, then the higher the mass point, the greater its gravitational potential energy; other conditions being equal, the energy of a higher mass point is greater than that of a lower one. As is well known, the greater the energy, the more unstable the state, and the lower the energy, the more stable. So in order to bring the unstable heart into a stable state, it has to be lowered, that is, put into the belly; "put the heart into the belly" is therefore logical. The Accommodating Relationship Between "Belly" and "Heart" "Put the heart in the belly" indicates that the belly has the basic function of accommodating the abstract conceptual domain of the "heart". This basic function derives from the semantic properties of the organ itself. Semantic Properties and Functions of the Belly In the pre-Qin period, "abdomen" meant the area of the torso below the chest, and this original meaning is already attested in the oracle bone inscriptions. "Du" appeared in the Qin and Han Dynasties to denote the belly of humans or animals. In the Song Dynasty, "du zi" appeared, indicating not only the part of the body below the chest and above the legs but also the stomach. "Below the chest and above the legs" reflects the spatial and bounded nature of the belly; the stomach reflects its capacity and specificity. We therefore summarize the semantic nature of "belly" as follows: [+spatiality] [+boundedness] [+capacity] [+specificity] Of these four attributes, "capacity" is clearly the basic function of the belly. The capacity for abstract concepts derives by extension from the capacity for concrete food. Although the stomach is the organ that directly contains food, the stomach is internal and invisible, while the belly is prominent and visible, and what is prominent and visible is far more salient than what is internal and invisible. In the cognitive framework of "container and content", the activation of the concept "belly" is accompanied by the activation of the concept "stomach", so people often treat the "stomach" as the "belly"; the belly thus naturally acquires the function of accommodating food.
Typical Container "Belly" and Typical Member "Heart" The human body itself is a container, and the chest and belly can certainly both be seen as containers. But the chest is an internal, invisible, and untouchable container that is not perceptually salient, whereas the belly is external, visible, touchable, and even audible, as in the growling of the belly or ventriloquism. These overlapping sensory channels further heighten the salience of the belly, so that the "belly" is more attention-grabbing, more easily identified, processed, and remembered, and more likely to become the typical container among human body parts. As the typical container of the human body, the belly has become a "universal container" in people's cognition, and the ancients even regarded it as a typical organ of thought, endowing it with a rich atmosphere of thought and culture; what the belly can accommodate is therefore complex. It can be seen that things representing abstract concepts are typical members of what is contained. The "heart" of "put the heart in the belly", as abstract emotion, is naturally a typical member of what is contained, and the belly is thus a typical container. "Belly" and "heart" become each other's typical symbols; the more typical they are, the easier it is to establish a psychological connection between them, so people hardly notice the apparent illogicality of the saying, and it has thus spread widely.
2019-11-07T15:30:17.058Z
2019-10-08T00:00:00.000
{ "year": 2019, "sha1": "cc12ddc829d1e1a95bb7eadc2621be2a7b36954e", "oa_license": null, "oa_url": "https://doi.org/10.17265/2159-5836/2019.10.004", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "25570dbc266bf860e6427ff90cf6f091d2f5f73c", "s2fieldsofstudy": [ "Art" ], "extfieldsofstudy": [ "Psychology" ] }
262576470
pes2o/s2orc
v3-fos-license
DNA-vaccination via tattooing induces stronger humoral and cellular immune responses than intramuscular delivery supported by molecular adjuvants Tattooing is one of a number of DNA delivery methods which results in an efficient expression of an introduced gene in the epidermal and dermal layers of the skin. The tattoo procedure causes many minor mechanical injuries followed by hemorrhage, necrosis, inflammation and regeneration of the skin and thus non-specifically stimulates the immune system. DNA vaccines delivered by tattooing have been shown to induce higher specific humoral and cellular immune responses than intramuscularly injected DNA. In this study, we focused on the comparison of DNA immunization protocols using different routes of administrations of DNA (intradermal tattoo versus intramuscular injection) and molecular adjuvants (cardiotoxin pre-treatment or GM-CSF DNA co-delivery). For this comparison we used the major capsid protein L1 of human papillomavirus type 16 as a model antigen. L1-specific immune responses were detected after three and four immunizations with 50 μg plasmid DNA. Cardiotoxin pretreatment or GM-CSF DNA co-delivery substantially enhanced the efficacy of DNA vaccine delivered intramuscularly by needle injection but had virtually no effect on the intradermal tattoo vaccination. The promoting effect of both adjuvants was more pronounced after three rather than four immunizations. However, three DNA tattoo immunizations without any adjuvant induced significantly higher L1-specific humoral immune responses than three or even four intramuscular DNA injections supported by molecular adjuvants. Tattooing also elicited significantly higher L1-specific cellular immune responses than intramuscularly delivered DNA in combination with adjuvants. In addition, the lymphocytes of mice treated with the tattoo device proliferated more strongly after mitogen stimulation suggesting the presence of inflammatory responses after tattooing. The tattoo delivery of DNA is a cost-effective method that may be used in laboratory conditions when more rapid and more robust immune responses are required. Introduction DNA vaccination has experienced great progress since the initial discovery of the spontaneous transfection of myocytes after intramuscular delivery of plasmid DNA in saline solution in 1990 [1]. Yet, intramuscular administration by simple injection of DNA is considered to be one of the less effective routes of DNA vaccination. The transfection of cells after single syringe injection of naked DNA is a rather inefficient process and various improvements using different physical, biochemical and biological methods have been made. Among the commonly used methods of DNA vaccination, the highest efficacy was achieved after in vivo electroporation and gene gun delivery [2]. Tattooing is an invasive procedure involving a solid vibrating needle that repeatedly punctures the skin, wounding both the epidermis and the upper dermis in the process and causing cutaneous inflammation followed by healing [3]. Modified tattooing devices have been used in medical research for the delivery of various materials to the skin for different purposes, e.g. bleomycin for the treatment of hypertrophic scars [4], viruses to induce papillomas in mice and rabbits [5], pigments to study processes associated with cosmetic tattooing [3] and DNA for prospective gene therapy of skin disorders or vaccination [6][7][8]. 
Techniques based on multiple puncturing (up to 15 punctures) are used in human medicine to assess immune responses [9,10] as well as for vaccination [11,12]. As tattooing involves a much larger area of the skin than intradermal injection, it offers an advantage of potentially transfecting more cells [13]. Gene expression after DNA tattooing has been shown to be higher than that after intradermal injection [7,8] and gene gun delivery [8]. DNA vaccines delivered by tattoo were able to induce both cellular [6,7] and humoral antigen-specific responses [6,8]. Compared to intra-muscular injection of DNA, delivery of DNA by tattooing seems to produce different gene expression patterns. In one study, tattooing of 20 μg DNA resulted in at least ten times lower peak values of gene expression than intramuscular injection of 100 μg DNA. Gene expression after tattoo application peaked after six hours and vanished over the next four days, while the intramuscular injection of DNA resulted in high levels of gene expression peaking after one week and remaining detectable up to one month [6]. Despite lower dose of DNA and decreased gene expression, DNA delivered by tattoo induced higher antigen-specific cellular as well as humoral immune responses than intramuscular DNA injection [6,8]. In this work, we evaluated the effect of two adjuvants, cardiotoxin and plasmid DNA carrying the gene for the mouse granulocyte-macrophage colony-stimulating factor (GM-CSF), on the efficiency of a DNA vaccine delivered either by tattoo or intramuscular needle injection. As a model antigen, we used a codon modified gene encoding the L1 major capsid protein of the human papillomavirus type 16 (HPV16) that has been shown to be highly immunogenic in our previous experiments using intramuscular administration of DNA in combination with cardiotoxin pre-treatment [14]. Our results indicate that molecular adjuvants substantially enhance the efficiency of the HPV16 L1 DNA vaccine when administered intramuscularly. However, the delivery of the HPV16 L1 DNA in the absence of adjuvants using a tattoo device elicited much stronger and more rapid humoral and cellular immune responses than intramuscular needle delivery together with molecular adjuvants. Animals Eight-week-old female C57BL/6 (H2 b ) mice were purchased from Charles River (Sulzfeld, Germany) and kept under specific pathogen-free conditions at the animal facilities of the German Cancer Research Center in compliance with the regulations of the Germany Animal Protection Law. Plasmids Plasmid pUF3L1h [14] carrying the humanized HPV16 L1 gene under the control of the human cytomegalovirus immediate-early promoter (pCMV) was used for the induction of antigen-specific immune responses in the DNA immunization experiments. The L1 protein expression of pUF3L1h has been shown to be substantially increased due to the codon optimization. The plasmid pBSC/GM-CSF (kindly provided by M. Smahel, Institute of Hematology and Blood Transfusion, Prague, the Czech Republic) was used as an adjuvant in the DNA immunization experiment. This plasmid contains the sequence coding for the mouse GM-CSF that was excised from the plasmid pBK-GM [15] by XhoI and SalI restriction enzymes and ligated into the XhoI-site of the plasmid pBSC [16]. The production of GM-CSF was confirmed by transfecting 293T cells with the pBSC/GM-CSF plasmid and analyzing lysates using the mouse GM-CSF ELISA kit (OptEIA™, BD Biosciences Pharmingen, San Diego, CA, USA). 
The adjuvant effect of pBSC/GM-CSF plasmid has been evaluated in our previous immunization experiments [17]. DNA immunization Plasmid DNA was purified from E. coli DH5α using CsCl equilibrium density centrifugation and dissolved in TE buffer to a final concentration of 5 mg/ml. Anesthetized mice were immunized with DNA four times, on days 0, 14, 28 and 98. Each mouse received 50 μg of plasmid pUF3L1h (6 groups) or pBSC/GM-CSF (control group) in one immunization dose. Two groups of mice received a mixture of 50 μg pUF3L1h DNA and 50 μg pBSC/GM-CSF DNA per animal in a single dose. For intramuscular delivery, the DNA was injected into the tibia anterior muscle of the right leg in a final volume of 50 μl PBS. Tattooed DNA was delivered in 10 μl TE buffer for single plasmid administration or 20 μl TE buffer for the mixture of plasmids in one or two drops to the shaved skin at the dorsum fol-lowed by tattoo with a 7-linear tattoo needle using a commercial tattoo machine (Rotary 12000 PL, Bortech Tattoogrosshandel, Wuppertal, Germany). The tattoo device was adjusted to allow exposure of only 1-2 mm of the needle tip beyond the barrel guide. The depth of 1-2 mm for tattooing of the mouse skin was shown to result in the immediate location of tattooed inks mainly in the dermis and to a lower extent in the epidermis [3]. A skin surface area of approximately 2 cm × 1 cm was tattooed by 30-times repeated two-second-lasting treatments with the tattoo needle oscillating at the voltage 17.4 V corresponding with the frequency 145 Hz (145 punctures per second) set on the power supply (DC POWER SUPPLY, DF 1730 SB3A, Bortech Tattoogrosshandel, Wuppertal, Germany). Thus, every tattooed mouse received during one immunization the total number of 60 900 (7 × 30 × 2 × 145 = 60 900) solid-needle punctures to deliver 50 μg DNA in 10 μl TE buffer or 121 800 (2 × 60 900 = 121 800) solid-needle punctures to deliver 100 μg DNA in 20 μl TE buffer. The tattoo procedure was well tolerated, however local trauma involving minor swelling and reddening of the skin was observed. In addition, some mice were pretreated with 50 μl of cardiotoxin (10 μM, Latoxan, Valence, France) five days before the first DNA immunization in the loci of vaccination. Thus, cardiotoxin was applied either into the tibia anterior muscle by needle injection or to the dorsal skin by tattoo. ELISA Blood of immunized mice was collected 10 days after the third and 9 days after the fourth DNA immunization. For detection and endpoint-titration assays of HPV 16 L1-specific antibodies an antigen capture ELISA was used. For this, microtiter plates were coated overnight at 4°C with 50 μl PBS containing purified rabbit polyclonal IgG anti-HPV16 L1 antibodies at a 1:200 dilution. Plates were blocked with 100 μl 3% milk/PBS-0.3% Tween 20 for 1 h at 37°C followed by the addition of 50 μl of the HPV16 L1 VLPs (5 mg/ml) diluted 1:1500 in 1.5% milk/PBS-0.3% Tween 20 for 1 h at 37°C. Plates were washed with PBS-0.3% Tween 20 and 50 μl of mouse serum were added in 2-fold dilutions starting at 1:50 and ending at 1:13107200 and incubated for 1 h at 37°C. Non-specific binding was determined using the dilution 1:50 of the mouse sera on plates coated with PBS only. Plates were washed and incubated with 50 μl/well of a sheep antimouse IgG polyclonal antibody conjugated to peroxidase (Sigma) diluted 1:3000 in 1.5% milk/PBS-0.3% Tween 20 for 1 h at 37°C. 
After the final washing, 100 μl/well of ABTS [2,2'-azino-bis(3-ethylbenz-thiazoline-6-sulfonic acid)] staining solution (1 mg/ml in a 100 mM sodium acetate-phosphate buffer, pH 4.2, 0.015% H 2 O 2 ) was used for enzyme reaction. Absorptions were measured at 405 nm in a Titertek automated plate reader after 40-60 minutes. IFN-γ-enzyme-linked immunosorbent (ELISPOT) assay The ELISPOT assay was performed 9 days after the fourth DNA immunization as described in our previous work [18]. MultiScreen IP sterile plates (96 well; Millipore, Eschborn, Germany) were pre-soaked with 70% ethanol for 1 min, and the ethanol was removed by extensive rinsing with PBS. The plates were coated with 600 ng per well of anti-mouse interferon gamma (IFN-γ) capture antibody (BD Pharmingen, Heidelberg, Germany) in 100 μl of PBS overnight at 4°C. Unbound antibody was removed by washing twice with PBS and twice with medium (RPMI-1640, Sigma; 10% fetal calf serum, 2 mM L-glutamine, 1% penicillin-streptomycin). Plates were blocked for 7 h with 100 μl of medium at 37°C, and splenocytes from individual mice were seeded in four serial dilutions: 2, 1, 0.5 and 0.25 × 10 6 cells per well in 100 μl of medium. Splenocytes from each mouse were left either untreated (background control), or stimulated with 900 ng of pokeweed mitogen (Sigma) in 100 μl of medium (positive control), or with 0.2 μM L1 aa165-173 peptide [19] in 100 μl of medium. Plates were incubated for 20 h at 37°C. Cells were removed by six washes with PBS-0.01% Tween 20 and one wash with sterile water. Then, 200 ng of sterile-filtered biotinylated rat anti-mouse IFN-γ detection antibody (BD Pharmingen) in 100 μl of PBS were added per well, and the plates were kept at 4°C overnight. The plates were washed six times with PBS-0.01% Tween 20 and once with PBS, and this was followed by the addition of 100 μl of a 1:1000 dilution of streptavidin-alkaline phosphatase (BD Pharmingen) in PBS. Plates were incubated for 30 min at room temperature and then washed three times with PBS-0.01% Tween 20, followed by three washing steps with PBS alone. Plates were developed with 5bromo-4-chloro-3-indolylphosphate (BCIP/Nitro Blue Tetrazolium Liquid Substrate System; Sigma), 100 μl per well. The reaction was stopped after 15 minutes by rinsing the plates with water. Spots were quantified using an ELIS-POT reader (AID EliSpot Reader ELR04; AID GmbH, Strassberg, Germany). Statistical analysis Data of end-point titration of ELISA assay were analyzed by Wilcoxon Rank sum test. For ELISPOT assay analysis, we performed two tailed unpaired t-test using Prism 4 software (GraphPad Software, Inc., San Diego, CA, USA). A difference between groups was considered significant for p < 0.05. Results To compare different routes of delivery of DNA vaccines, i.e. intradermal tattooing versus intramuscular needleinjection, as well as the adjuvant effect of GM-CSF DNA co-delivery or cardiotoxin pre-treatment, we immunized mice with HPV16 L1 DNA four times as described in Material and Methods. The time-schedule of immunizations is outlined in Figure 1. DNA-tattooing induces higher levels of specific antibodies than DNA-intramuscular injection After three immunizations, all mice (15/15) immunized by HPV16 L1 DNA-tattooing developed high levels of L1specific antibodies, while intramuscular delivery of DNA induced L1-specific antibodies only in 8 out of 15 mice: in one mouse receiving no adjuvant (1/5), three mice coimmunized with GM-CSF DNA (3/5) and four mice pretreated with cardiotoxin (4/5; Figure 2). 
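The ELISA and statistics sections above describe two quantitative steps: end-point titration of sera in two-fold dilutions starting at 1:50, and comparison of groups by a Wilcoxon rank-sum test (ELISA titres) or a two-tailed unpaired t-test (ELISPOT counts), with p < 0.05 considered significant. The Python sketch below illustrates both steps on invented readings; the cutoff rule (twice the background absorbance) and all numbers are assumptions for illustration only, not the actual analysis pipeline used in the study.

```python
# Hedged sketch: end-point titre determination and the two group comparisons
# named in the Methods. All readings and per-mouse values are invented.
from scipy import stats

def endpoint_titre(od_values, background, start_dilution=50, factor=2, cutoff_ratio=2.0):
    """Return the reciprocal of the last dilution whose OD exceeds
    cutoff_ratio * background, or 0 if no dilution is positive.
    The cutoff rule is an assumption for illustration."""
    titre = 0
    dilution = start_dilution
    for od in od_values:  # ordered from 1:50 downwards in two-fold steps
        if od > cutoff_ratio * background:
            titre = dilution
        dilution *= factor
    return titre

# Hypothetical OD405 readings for one serum, dilutions 1:50 to 1:6400.
serum_od = [1.95, 1.90, 1.75, 1.40, 0.90, 0.45, 0.20, 0.09]
print("End-point titre:", endpoint_titre(serum_od, background=0.05))

# Hypothetical per-mouse titres and ELISPOT counts for two groups of five mice.
titres_tattoo = [102400, 204800, 409600, 204800, 102400]
titres_im = [50, 800, 1600, 400, 200]
spots_tattoo = [210, 340, 180, 362, 250]
spots_im = [30, 55, 3, 70, 42]

# Wilcoxon rank-sum test for titres, two-tailed unpaired t-test for spot counts.
_, p_titres = stats.ranksums(titres_tattoo, titres_im)
_, p_spots = stats.ttest_ind(spots_tattoo, spots_im)
print(f"Titres: p = {p_titres:.4f}; ELISPOT: p = {p_spots:.4f} (significant if < 0.05)")
```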
The end-point titration of sera collected after three immunizations showed that the level of L1-specific antibodies was 500-2000 times higher in all five mice immunized three times by tattoo (without adjuvant) than the titer of the single antibody-positive mouse of the group immunized intramuscularly without adjuvant (Figure 2). Moreover, three doses of DNA delivered by tattoo induced at least 16-fold higher levels of anti-L1 antibodies than three intramuscular DNA immunizations applied after cardiotoxin pre-treatment or with GM-CSF DNA co-delivery (Figures 2 and 3). Comparing the groups of mice immunized with DNA by the two different delivery methods, all of the tattooed mice produced significantly higher levels of specific antibodies than the intramuscularly immunized mice after three immunizations (p < 0.0001). The fourth DNA immunization increased the number of mice producing L1-specific antibodies in the intramuscularly immunized group (from 8/15 to 15/15 positive mice) and also enhanced the level of L1-specific antibody production in 14 out of the 15 mice treated with the tattoo device. The boosting effect of the fourth DNA immunization was higher in intramuscularly immunized than in tattooed mice. However, four intramuscular DNA immunizations still induced lower production of L1-specific antibodies than three DNA immunizations delivered by tattoo (p < 0.0001). Both GM-CSF DNA co-delivery and cardiotoxin pre-treatment enhanced the L1-specific humoral responses after both three and four HPV16 L1 DNA immunizations delivered either by intramuscular injection or by tattoo, but the differences were not statistically significant. The effect of both adjuvants (GM-CSF DNA co-delivery and cardiotoxin pre-treatment) was more pronounced in mice immunized intramuscularly than in tattooed mice, and in mice immunized three times rather than four times. No specific anti-L1 antibodies were detected at any dilution in the sera of the control group of mice receiving GM-CSF DNA delivered by tattoo. DNA-tattooing induces higher specific cellular immune responses than DNA-intramuscular injection Nine days after the fourth immunization, the splenocytes from all vaccinated mice were analyzed by an L1-specific IFN-γ-ELISPOT assay. Non-specific stimulation with mitogen led to an increase of IFN-γ-producing cells in all mice, showing that the splenocytes used in the ELISPOT assay were alive and able to secrete IFN-γ (Figure 4). The numbers of cells producing IFN-γ per 250 000 splenocytes after mitogen stimulation ranged from about 90 to 270 in the control group of three mice (GM-CSF-tattooed mice), from about 50 to 600 for the L1-intramuscularly immunized mice (difference not significant), and from about 200 to 900 for the L1-tattooed mice (p < 0.05). The non-specific, mitogen-induced increase of IFN-γ-producing cells in splenocytes of the L1-tattooed mice was significantly higher in comparison with the L1-intramuscularly immunized mice (p < 0.001). The comparison of the numbers of IFN-γ-producing cells in serial dilutions of splenocytes incubated for one day with either plain medium or in the presence of an L1 peptide (aa 165-173; [19]) revealed that one mouse (M3) immunized intramuscularly with HPV16 L1 DNA and all three control mice immunized with GM-CSF DNA did not elicit detectable L1-specific cellular responses. The numbers of L1-specific IFN-γ-producing cells per 250 000 splenocytes ranged from 3 to 362 for the 15 mice that received the
Figure 1. Immunization scheme. Mice were immunized four times with DNA on days 0, 14, 28 and 98. Cardiotoxin pre-treatment was carried out 5 days prior to the first DNA immunization. Blood was collected twice, on days 38 and 107. Splenocytes were isolated on day 107 and analyzed by ELISPOT assay.
The effects of the cardiotoxin pre-treatment or the GM-CSF co-delivery on L1-specific cellular immune responses elicited after HPV16 L1 vaccination were not significant. Both adjuvants enhanced the numbers of L1-specific IFN-γ-producing cells in mice immunized with L1 or GM-CSF intramuscularly as well as in the L1-tattooed mice (not significant). The L1-tattooed mice that were pre-treated with cardiotoxin showed lower numbers of both mitogen- and L1-peptide-stimulated IFN-γ-producing splenocytes than the L1-tattooed mice receiving no prior treatment with cardiotoxin (not statistically significant). Discussion In this study we compared different protocols of DNA immunization and observed that three DNA immunizations delivered by tattoo elicited much higher specific humoral immune responses than three or even four intramuscular injections. Furthermore, tattooing induced higher specific cellular immune responses than intramuscular DNA injection. Administration of an adjuvant (GM-CSF or cardiotoxin) had virtually no effect on the efficacy of tattoo immunization, whereas it enhanced the effect of intramuscular injection. Cardiotoxin pre-treatment of muscles before administration of DNA is a routinely performed procedure in DNA immunization. In this work, we evaluated the importance of cardiotoxin pre-treatment for the induction of L1-specific antibodies. It has been shown that some intramuscularly delivered DNA vaccines cannot effectively induce specific antibody responses without muscle pre-treatment, as assessed by VLP-based ELISA detection of serum IgG antibody titers after DNA plasmid immunization [20], while for other DNA vaccines the usefulness of muscle pre-treatment was not demonstrated [21]. We immunized mice three times with 50 μg pUF3-hL1 DNA at 2-week intervals and found that mice developed L1-specific antibodies more consistently after cardiotoxin administration than with no muscle pre-treatment (4/5 versus 1/5). Furthermore, four intramuscular immunizations with 50 μg pUF3-hL1 DNA elicited L1-specific antibodies in all mice regardless of the use of cardiotoxin, indicating that the absence of cardiotoxin pre-treatment of muscles might be compensated by increasing the number of boosting DNA immunizations. To our knowledge, there are only four studies addressing the use of tattooing for DNA immunization [6-8,22], and only one of these publications focuses on a comparison of tattooing with intramuscular needle injection of DNA [6]. In this work, we observed that tattoo delivery induced more robust immune responses than intramuscular delivery, which is in accordance with the previous findings of Bins and coworkers [6]. However, in our study we used higher doses of DNA for tattoo delivery and also a more intensive tattoo protocol than Bins et al., suggesting that reducing the dose of DNA and milder tattooing conditions could decrease the efficiency of DNA tattoo immunization. Although we did not determine the mechanisms by which DNA tattooing leads to a better immune response, one can speculate that this is due to (i) better uptake of the DNA by non-antigen-presenting cells [22], (ii) better uptake of DNA by antigen-presenting cells, (iii) the duration of expression, or (iv) the induced traumata accompanying the tattooing [3].
The fact that the lymphocytes from mice treated with the tattoo device demonstrated a higher mitotic index when treated with a mitogen supports the idea of induction of traumata and release of danger signals. We observed that treatment of mice with the tattoo device induced local trauma which was evident macroscopically by minor swelling and reddening of the punctured skin areas and was also reflected in stronger T-cell responses towards an unspecific mitogen, detected in the ELISPOT assay. Interestingly, this effect was only observed in animals that had received the L1 construct but not or to a much lower extent in the control mice treated with the GM-CSF expression vector alone. Perhaps, the viral origin of the L1 protein and/or the high immunogenicity of L1virus-like particles contributed to non-specific stimulation of murine immune system. The mode of DNA delivery (tattooing versus intramuscular injection) had a much higher effect on the vaccination efficiency than the addition of adjuvants (GM-CSF, cardiotoxin). Similarly, another DNA delivery method, intramuscular in vivo electroporation, has been shown to induce higher antibody titers than intramuscular DNA injection in combination with cardiotoxin pretreatment [20]. It is conceivable that a robust local tissue injury induced by tattooing attracts leukocytes and leads to local release of cytokines [3]. The exact mechanisms of action of cardiotoxin are not yet determined but tissue damage and necroses are important factors [23]. The GM-CSF attracts antigen-presenting cells to the application site [24]. Thus, tattooing may partially substitute for the function of cardiotoxin and GM-CSF in their function. This is consistent with the observation that cardiotoxin pre-treatment or coadministration of the GM-CSF expression construct did not have any effect on tattoo immunization. The intramuscular needle-injection causes very little tissue damage [25]. That could be the reason why both GM-CSF and cardiotoxin substantially enhanced the immune responses after intramuscular DNA immunization. The advantage of tattoo treatment is the low price of the tattoo device and a standardized method for the application; the main disadvantages are the strain on the animals and a somewhat cumbersome application procedure. In particular, the local traumata induced by the tattooing procedure might not be considered acceptable in routine prophylactic vaccination settings involving human subjects. Nevertheless, DNA vaccination via tattoo seems to be the method of choice if faster and stronger immune responses have to be achieved. Potential applications might be vaccination of life stock for prophylaxis or of human beings for therapeutic purposes. Cytotoxic T-cell response in DNA immunized mice detected by IFN-γ-ELISPOT assay Figure 4 Cytotoxic T-cell response in DNA immunized mice detected by IFN-γ-ELISPOT assay. Cellular immune responses after four DNA vaccinations are shown. Six groups of mice (5 per group) were immunized with HPV16 L1 DNA on days 0, 14, 28 and 98 either by tattoo or intramuscular delivery without any adjuvant, in combination with prior application of cardiotoxin 5 days before the first immunization or in mixture with mouse GM-CSF DNA (1:1). A control group of three mice was tattooed with mouse GM-CSF DNA. Splenocytes were isolated 9 days after the last DNA immunization and examined in 4 serial dilutions in the IFN-γ-ELISPOT assay. The representative numbers of spots reflecting IFN-γ-producing cells per 250,000 splenocytes are shown. 
Splenocytes were stimulated non-specifically with mitogen or specifically with the L1 peptide (aa 165-173). Non-stimulated splenocytes were used as negative controls.
2018-04-03T00:38:56.881Z
2008-01-01T00:00:00.000
{ "year": 2008, "sha1": "bb6e9df49e2d32f40ea785e69ef20e9c109c9d98", "oa_license": "CCBY", "oa_url": "https://gvt-journal.biomedcentral.com/counter/pdf/10.1186/1479-0556-6-4", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "7f371ae03b1a46de9990681af18852fc79d427e8", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
16787558
pes2o/s2orc
v3-fos-license
Teratoma of the tongue Teratomas are benign tumours that may occur anywhere in the body. Development of these lesions in the oral cavity is extremely rare. In the oral cavity, they usually arise in the midline, in the floor of the mouth. Infrequently, they may be seen in the tongue proper. We hereby present a case of a tongue swelling in a 56-year-old female diagnosed as a teratoma. INTRODUCTION Teratomas are benign neoplasms composed of elements of all three germinal layers. Around 80% are located in the ovarian and sacral regions, 7% are seen in the head and neck region, and only approximately 1.6% of these tumours are found in the oral region. Pure oral presentation in the tongue is extremely rare; only a small number of cases have been reported in the literature so far (1). CASE REPORT A 56-year-old female presented with a swelling on the dorsal surface of the tongue. Examination revealed a firm, rubbery, non-tender 3x2x2 cm sessile swelling with no induration or ulceration (fig 1). She had noticed the swelling 6 months previously, and it had recently increased in size. Lately, she had experienced difficulty in moving the tongue, resulting in a globus sensation and dysphagia. General examination of the patient revealed an average build, a pulse rate of 80 beats/min with regular rhythm, BP 130/90 mm Hg, and RR 24 breaths/min. There was no history of fever, night sweats or weight loss. Jaundice, cyanosis and oedema were absent. Computed tomography revealed a 3x2 cm cystic anterior lingual structure, the wall of which was thin and regular, with contents consisting of homogeneous fluid. There was no bone involvement. With a clinical diagnosis of dermoid cyst, an excision biopsy was performed. Histopathological findings consisted of a cyst wall lined by stratified squamous epithelium, with sebaceous glands, blood vessels, muscle and cartilage in the underlying connective tissue, and a diagnosis of teratoma was made (fig 2). No evidence of malignant transformation was noted. One year after surgical removal of the lesion, there was no sign of recurrence. DISCUSSION The tongue is derived from two separate embryologic origins: the anterior two-thirds is derived from ectoderm and the posterior one-third from endoderm. The anterior two-thirds originates from the paired lateral lingual swellings, which are contributed by the first branchial arch. These swellings fuse in the midline to form the tuberculum impar. The posterior one-third of the tongue arises from the hypobranchial eminence, which is made up of mesoderm of the second, third and a portion of the fourth pharyngeal arches. Congenital dermoid cysts arise from epithelial rests trapped during midline fusion of these branchial arches, whereas acquired dermoid cysts arise from epithelium implanted during trauma and occur at sites away from the midline. The terms teratoma, teratoid cyst and dermoid cyst have been used interchangeably by some authors to describe a wide variety of lesions. Meyer classified these cysts as epidermoid, dermoid and teratoid. Epidermoid cysts are lined with simple squamous epithelium and surrounding connective tissue. Dermoid-type cysts contain skin appendages, whereas teratoid cysts contain epithelium-lined mesodermal or endodermal elements such as bone, teeth, muscle and mucous membrane (2). Teratomas of the oral cavity are divided anatomically depending on their location: they can be sublingual, geniohyoid or lateral (3). The other differentials encountered at these sites are ranula, lymphangioma, angioma and lipoma (1).
A teratoma is a tumour that contains haphazardly arranged tissues and organs. There is an epithelium-lined cavity containing mesodermal as well as endodermal derivatives such as muscle, intestinal mucosa, respiratory mucosa, fibres, bone and blood vessels. A teratoma of the tongue may exhibit skin, hair, bone, cartilage or mucous membrane on its surface (7). The rarity of the present teratoma stems from the fact that it was not located along an embryonic fusion line and did not involve the floor of the mouth (4). Teratomas in the head and neck region are rare, comprising 1-10% of cases, and very few cases have been reported so far. They probably arise from totipotent embryonic tissue that has been displaced during ontogeny (8). Good patho-radiological correlation is required to confirm the diagnosis. Ultrasonography establishes the presence of solid and cystic components and can differentiate the cyst from the surrounding tissue. MRI has proven to be by far the superior imaging modality, as it can define the exact position, extension and demarcation of the lesion (1). Because of their avascular character, teratomas do not enhance after administration of contrast material and can thus cause diagnostic confusion with choristoma, endodermal sinus tumours and granular cell tumours. Because oral teratomas are well defined, complete excision is usually possible. Recurrences are very rarely seen in head and neck teratomas. Most of the time these tumours are benign, but they may result in a high degree of mortality and morbidity owing to variations in their size and location. If large enough, they may cause airway obstruction, respiratory distress, dysphagia, difficulty in eating, and pain, due mostly to infection in the lesion. In malignant teratoma, radio-chemotherapy is used after surgical removal of the tumour (5). Alpha-fetoprotein (AFP) has been shown to be a reliable indicator of disease activity, and some authors advocate investigating teratoma recurrence by measuring serial serum AFP levels, which have been shown to increase in teratocarcinoma (6). Although teratomas have been reported in infants, the present case is unusual in that the site was not in the midline. Malignant change was not seen, and the patient responded well after complete surgical excision of the lesion. Though rare, teratoma should be considered in the differential diagnosis of tongue masses.
2016-05-04T20:20:58.661Z
2012-02-01T00:00:00.000
{ "year": 2012, "sha1": "3f0700a10a3240463411b7403992871aadefff6c", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/jscr/article-pdf/2012/2/6/6634479/jscr-2012-2-6.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8201a1dc243a53e30bb331c23f5a7f77f9214dd5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
29824905
pes2o/s2orc
v3-fos-license
LIND/ABIN-3 Is a Novel Lipopolysaccharide-inducible Inhibitor of NF-κB Activation* Recognition of lipopolysaccharide (LPS) by Toll-like receptor (TLR)4 initiates an intracellular signaling pathway leading to the activation of nuclear factor-κB (NF-κB). Although LPS-induced activation of NF-κB is critical to the induction of an efficient immune response, excessive or prolonged signaling from TLR4 can be harmful to the host. Therefore, the NF-κB signal transduction pathway demands tight regulation. In the present study, we describe the human protein Listeria INDuced (LIND) as a novel A20-binding inhibitor of NF-κB activation (ABIN) that is related to ABIN-1 and -2 and, therefore, is further referred to as ABIN-3. Similar to the other ABINs, ABIN-3 binds to A20 and inhibits NF-κB activation induced by tumor necrosis factor, interleukin-1, and 12-O-tetradecanoylphorbol-13-acetate. However, unlike the other ABINs, constitutive expression of ABIN-3 could not be detected in different human cells. Treatment of human monocytic cells with LPS strongly induced ABIN-3 mRNA and protein expression, suggesting a role for ABIN-3 in the LPS/TLR4 pathway. Indeed, ABIN-3 overexpression was found to inhibit NF-κB-dependent gene expression in response to LPS/TLR4 at a level downstream of TRAF6 and upstream of IKKβ. NF-κB inhibition was mediated by the ABIN-homology domain 2 and was independent of A20 binding. Moreover, in vivo adenoviral gene transfer of ABIN-3 in mice reduced LPS-induced NF-κB activity in the liver, thereby partially protecting mice against LPS/d-(+)-galactosamine-inducedmortality. Taken together, these results implicate ABIN-3 as a novel negative feedback regulator of LPS-induced NF-κB activation. The innate immune response to microbial pathogens begins when pathogen-associated molecular patterns meet their cognate Toll-like receptors (TLRs) 8 on effector cells of the immune system, such as monocytes and macrophages (1). Lipopolysaccharide (LPS), an integral cell wall component of Gram-negative bacteria and one of the most potent stimulators in innate immunity, is recognized by the TLR4-MD2 receptor complex (2). In the past years, much progress has been made in understanding the intracellular signaling cascades that are initiated when LPS stimulates TLR4 (reviewed in Refs. 3 and 4). Ligation of the TLR4-MD2 complex by LPS initially results in the recruitment of myeloid differentiation factor (MyD)88 and MyD88-adaptor like (Mal), also called TIRAP, to the receptor cytoplasmic domain. MyD88 then facilitates recruitment of the serine/threonine kinases IL-1R-associated kinase (IRAK)-1 and -4, thus enabling IRAK4 to phosphorylate IRAK1. The latter subsequently dissociates from the receptor complex and associates with tumor necrosis factor (TNF) receptor-associated factor (TRAF)6, constituting a cytoplasmic signaling complex. Upon ubiquitination, TRAF6 activates transforming growth factor-␤-activated kinase 1, which in turn activates the inhibitor of B kinase (IKK) complex that consists of the regulatory subunit IKK␥ (also known as NEMO) and the kinases IKK␣ and IKK␤. The latter eventually phosphorylates the inhibitory IB proteins, resulting in their ubiquitination and degradation. This allows the transcription factor NF-B to translocate to the nucleus and initiate transcription of inflammatory cytokines, such as TNF, which contribute to mounting an inflammatory response. 
Apart from this MyD88-dependent signaling pathway, TLR4 also initiates a MyD88-independent signaling pathway that is mediated by the adaptor proteins Toll/IL-1 receptor domain domain-containing adaptor-inducing interferon-␤ (TRIF; also known as TICAM-1) and TRIFrelated adaptor molecule (also known as TICAM-2). Although the TRIF/TRIF-related adaptor molecule pathway may contribute to delayed NF-B activation, it is mainly responsible for interferon regulatory factor 3 transcription factor activation via IKK⑀ and TANK-binding kinase 1. Although the LPS-induced inflammatory response is indispensable for controlling the growth of pathogenic microorganisms (5), excessive cytokine production can be harmful to the host and may even contribute to a life-threatening condition termed septic shock (6). In addition, TLR4-initiated signaling pathways have recently been implicated in the pathogenesis of various autoimmune and chronic inflammatory diseases. For instance, activation of TLR4 has been shown to contribute to experimental models of autoimmune encephalomyelitis, asthma, and atherosclerosis (7)(8)(9). This universal and inherently dangerous role of TLR4 in inflammation emphasizes the need for tight regulation of TLR4-initiated signaling pathways. As such, it is not surprising that the host acquired several proteins that can hold LPS-induced NF-B activation in check (reviewed in Ref. 10). One of these negative feedback regulators is the zinc finger protein A20. This protein was originally identified as an inhibitor of TNF-induced NF-B activation, because mice with a functional deletion of the A20 gene die prematurely due to unrestrained TNF-induced inflammation (11). However, the observation that mice doubly deficient in both A20 and TNF (or TNF receptor-1) developed spontaneous inflammation, similar to mice deficient in A20 alone, suggested that A20 also has a negative role in LPS-induced NF-B activation. Indeed, A20-deficient macrophages show a prolonged NF-B response to LPS, and reconstitution experiments with these cells in mice showed that A20 is required for preventing endotoxic shock, pointing to A20 as an important down-regulator of pro-inflammatory signals initiated by LPS (12). In the present study, we identify Listeria INDuced (LIND), a protein that is induced in human mononuclear phagocytes by Listeria infection (13), as a novel LPS-inducible A20-binding inhibitor of NF-B activation. As LIND was found to share sequence as well as functional homology with two previously identified A20binding inhibitors of NF-B, ABIN-1, and ABIN-2 (14, 15), we named it ABIN-3. Interestingly, ABIN-3 expression was inducible by LPS in the monocytic cell line THP-1 as well as in primary human monocytes. Moreover, because ABIN-3 could inhibit LPSinduced activation of NF-B in vitro as well as in vivo, our results identify ABIN-3 as a novel player in the negative feedback regulation of NF-B activation in response to LPS. MATERIALS AND METHODS Cell Lines and Reagents-Human embryonic kidney cells (HEK293T) were a kind gift from Dr. M. Hall (University of Birmingham, Birmingham, UK) and were grown in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum, 2 mM L-glutamine, 0.4 mM sodium pyruvate, and antibiotics. The human THP-1 myelomonocytic cell line was obtained from the American Type Culture Collection and was grown in RPMI 1640 supplemented with 10% fetal bovine serum, 0.4 mM sodium pyruvate, 2 mM L-glutamine, 4 M ␤-mercaptoethanol, and antibiotics. 
The murine RAW264.7 macrophage cell line was obtained from the ATCC (Manassas, VA) and was cultured in Dulbecco's modified Eagle's medium supplemented with 10% fetal calf serum. Recombinant human TNF and recombinant murine IL-1␤ were produced in Escherichia coli in our laboratory and were purified to at least 99% homogeneity. TNF had a specific biological activity of 2.3 ϫ 10 7 IU/mg of purified protein, as determined with the international standard code 87/650 (National Institute for Biological Standards and Control, Potters Bar, UK). IL-1␤ had a specific activity of 3.65 ϫ 10 8 IU/mg of purified protein, as determined with the international standard code 93/668. LPS from Salmonella abortus equi was obtained from Sigma. Cloning of ABIN-3-TBLASTN searches with the region of homology between ABIN-1 and ABIN-2 were conducted using the NCBI online service, in the non-redundant and expressed sequence tag data base. A "full insert sequence" of a clone of human coronary artery smooth muscle cells (accession number AK024815) was identified as a potential homologue. 5Ј Rapid amplification of cDNA ends (RACE) was performed on HeLa mRNA, using the SMART RACE kit (Clontech Laboratories, Palo alto, CA) according to the instructions from the manufacturer. Primers for first round and nested PCR were 5Ј-cgttccttttccttctcctcccgctgca-3Ј and 5Ј-ctctgcctctgatgcggatccttctccc-3Ј, respectively. Full-length cDNA was amplified using forward (5Јggagatctgcggccgctatggcacattttgtacaagg-3Ј) and reverse (5Ј-gagatctctacggatggactttctttactgagg-3Ј) primers. The open reading frame of ABIN-3 was cloned in-frame with an N-terminal E tag into the mammalian expression vector pCAGGS. The cloned fragment was sequenced on both strands with a cycle sequencer (Applied Biosystems, Foster City, CA). Deletion mutants were generated by PCR and cloned in pCAGGS using the following primers: 5Ј-tgcgaacaaggaaaagatcaagtgttcattttccgagg-3Ј and 5Ј-gaaaatgaacacttgatcttttccttgttcgcaagag-3Ј for ABIN-3 ⌬AHD1 and 5Ј-gaacagaaatggaagttcttaatcaagagaaagaggagc-3Ј and 5Ј-tctttctcttgattaagaacttccatttctgttctcatc-3Ј for ABIN-3 ⌬AHD2. Plasmids and Adenoviruses-Plasmids coding for GFP and FLAG-tagged A20 have been described previously (16,17). The plasmid encoding TLR4 was a kind gift from Dr. M. Muzio (Dept. of Immunology and Cell Biology, Mario Negri Institute, Milano, Italy) (18), and plasmids encoding MyD88, IRAK1, and TRAF6 were kind gifts from Dr. J. Tschopp (Institute of Biochemistry, University of Lausanne, Switzerland) (19,20). The plasmid pNFconluc (21), encoding the luciferase (Luc) gene driven by a minimal NF-B-responsive promoter, was a gift from Dr. A. Israël (Institut Pasteur, Paris, France). The plasmid pUT651, encoding ␤-galactosidase, was supplied by Eurogentec (Seraing, Belgium). For the production of a recombinant ABIN-3 adenovirus, the ABIN-3 cDNA, N-terminally fused to an E tag, was cloned into the pACpLpA.CMV shuttle vector and cotransfected with the rescue plasmid pJM17 (which encodes the adenovirus dl309 genome, lacking E1 and E3 functions) into HEK293 cells using calcium phosphate coprecipitation (22). Recombinant plaques were isolated and expression of ABIN-3 from the ubiquitously active cytomegalovirus (CMV) promoter was confirmed by Western blotting. Control viruses without transgene (AdRR5) or expressing the ␤-galactosidase gene (AdLacZ) were generated with the same pJM17 adenoviral backbone vector. 
A virus expressing an NF-B luciferase reporter gene (AdNFBLuc) (23) Isolation and Culture of Primary Monocytes-Peripheral blood mononuclear cells (PBMCs) were prepared from fresh blood samples of healthy donors drawn on citrate/phosphate/dextrose (Etablissement Français du Sang, Paris, France). Blood was diluted 1:2 in RPMI 1640 Glutamax medium (BioWhittaker, Verviers, Belgium) and centrifuged over Ficoll (MSL, Eurobio, Les Ulis, France) for 20 min at 15°C and 600 ϫ g. Human monocytes were selected from PBMCs by adherence. PBMCs were plated at 6 ϫ 10 6 cells/ml and allowed to adhere for 1 h at 37°C in a 5% CO 2 air incubator in a humidified atmosphere. Non-adherent cells were removed; adherent cells were washed with RPMI and cultured in RPMI supplemented with antibiotics (100 IU/ml penicillin and 100 g/ml streptomycin) and 0.2% normal human serum (BioWhittaker). Monocytes were stimulated with 100 ng/ml LPS from S. abortus equi (Alexis, San Diego, CA). Transfection, Coimmunoprecipitation, and Western Blotting-2 ϫ 10 6 HEK293T cells were seeded in 10-cm Petri dishes and transfected with a total of 5 g of DNA per plate using the DNA calcium phosphate coprecipitation method, as described (24). After 24 h, the cells were lysed in 500 l of lysis buffer (50 mM Hepes, pH 7.6, 250 mM NaCl, 5 mM EDTA, 0.1% Nonidet P-40), supplemented with protease and phosphatase inhibitors. Immunoprecipitation was performed with a monoclonal anti-FLAG M2 antibody (Sigma), and immunocomplexes were bound to protein A-trisacryl beads (Pierce). Beads were washed twice with lysis buffer, twice with the same buffer containing 1 M NaCl, and again twice with lysis buffer. Binding proteins were eluted with Laemmli buffer and analyzed by 12.5% SDS-PAGE and Western blotting. Detection of co-precipitating and transfected proteins was achieved with a monoclonal anti-E tag (Amersham Biosciences) or anti-FLAG tag (Sigma) antibody, each of which was coupled to horseradish peroxidase. Immunoreactivity was revealed with a Renaissance-enhanced chemiluminescence system (PerkinElmer Life Sciences). Reporter Gene Assays for NF-B-2 ϫ 10 5 HEK293T cells were grown in 6-well plates and transiently transfected by DNA calcium phosphate coprecipitation with a total of 1 g of DNA. The DNA mixture comprised 100 ng of pUT651, 100 ng of pNFconluc, and 800 ng of specific expression plasmids. After 24 h, the cells were seeded in 24-well plates. Another 24 h later, cells were left untreated or were stimulated with TNF (1000 IU/ml), IL-1␤ (40 ng/ml), or TPA (200 ng/ml) for 6 h. For RAW264.7 cells, 5 ϫ 10 5 cells were transfected using Lipofectamine 2000 and Opti-MEM (Invitrogen) with 250 ng of an NF-B-dependent luciferase reporter plasmid and 100 ng of a plasmid encoding ␤-galactosidase, together with 250 ng of specific expression plasmids. After 6 h, the cells were stimulated with 100 ng/ml LPS for 3 h. In all of the above cases, cells were lysed after stimulation in 200 l of lysis buffer (25 mM Tris-phosphate, pH 7.8, 2 mM dithiothreitol, 2 mM 1,2-cyclohexaminediaminetetraacetic acid, 10% glycerol, and 1% Triton X-100). After addition of substrate buffer to a final concentration of 470 M luciferin, 270 M coenzyme A, and 530 M ATP, luciferase (Luc) activity was measured in a Topcount microplate scintillation reader (Packard Instrument Co., Meriden, CT). 
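Throughout the reporter assays above, NF-κB-driven luciferase values are normalized to β-galactosidase activity from the co-transfected (or co-infected) control plasmid to correct for differences in transfection or infection efficiency. A minimal sketch of that normalization step is given below; the sample names and readings are hypothetical and only illustrate the Luc/β-gal ratio and fold-induction calculation, not the actual data handling used in the study.

```python
# Minimal sketch of the Luc/beta-gal normalization used throughout the reporter
# assays above. All sample names and readings are invented placeholders.

def normalized_luc(luc_counts, beta_gal_activity):
    """Return luciferase activity corrected for transfection/infection
    efficiency, i.e. the Luc/beta-galactosidase ratio that is plotted."""
    return luc_counts / beta_gal_activity

# Hypothetical raw readings for unstimulated and stimulated samples.
samples = {
    "vector_control_unstimulated": {"luc": 1200.0, "bgal": 0.40},
    "vector_control_stimulated":   {"luc": 26000.0, "bgal": 0.38},
    "ABIN3_unstimulated":          {"luc": 1100.0, "bgal": 0.42},
    "ABIN3_stimulated":            {"luc": 9000.0, "bgal": 0.45},
}

ratios = {name: normalized_luc(v["luc"], v["bgal"]) for name, v in samples.items()}

# Fold induction relative to the matching unstimulated sample.
fold_control = ratios["vector_control_stimulated"] / ratios["vector_control_unstimulated"]
fold_abin3 = ratios["ABIN3_stimulated"] / ratios["ABIN3_unstimulated"]

for name, ratio in ratios.items():
    print(f"{name}: Luc/beta-gal = {ratio:.1f}")
print(f"Fold induction, vector control: {fold_control:.1f}")
print(f"Fold induction, ABIN-3: {fold_abin3:.1f}")
```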
␤-Galactosidase activity was assayed using chlorophenol red ␤-D-galac-topyranoside substrate (Roche Molecular Biochemicals, Mannheim, Germany) or the Galactostar reporter gene assay system (Applied Biosystems). Luc values were normalized for ␤-galactosidase values to correct for differences in transfection efficiency (plotted as Luc/␤-galactosidase). To determine NF-B activation induced by overexpression of specific signaling proteins, 4 ϫ 10 4 HEK293T cells were seeded in 24-well plates and transiently transfected with 20 ng of pUT651, 20 ng of pNFconluc, and a total of 160 ng of specific expression plasmids. After 24 h, the cells were lysed and analyzed as described above. For in vivo NF-B analysis by luciferase reporter gene assay, mice were infected with a total of 5 ϫ 10 9 pfu adenoviruses, comprising 25% AdNFBLuc, 25% AdLacZ, and 50% AdRR5 or AdABIN-3. Three days after infection, mice were injected intraperitoneally with 200 ng of LPS or vehicle. After 4 h, mice were killed, liver homogenates were made, and NF-B promoter activity was determined by measuring luciferase (Luc) activity in tissue extracts as described above. Luc values were normalized for ␤-galactosidase values to correct for differences in infection efficiency (plotted as Luc/␤-galactosidase). IL-8 Determination-IL-8 levels in cell supernatants were determined via specific enzyme-linked immunosorbent assay (BD Pharmingen) according to the manufacturer's instructions. For RT-PCR on THP-1 monocytes, 5 ϫ 10 5 cells were seeded in 10-cm Petri dishes and allowed to grow for 48 h. At the end of this period the cells were either left untreated or stimulated with LPS (1 g/ml) or TNF (1000 IU/ml). Total RNA of THP-1 cells was isolated by the guanidium isothiocyanate-phenol-chloroform method (25), and first strand cDNA was synthesized using the SuperScript TM firststrand synthesis system for RT-PCR (Invitrogen). For RT-PCR on PBMC, total RNA was prepared using the RNeasy Mini Kit (Qiagen) and reverse-transcribed with Superscript II RNase H (Invitrogen) according to the manufacturer's protocol. cDNA samples were amplified by PCR with genespecific primers (5Ј-ggagatctgcggccgctatggcacattttgtacaagg-3Ј and 5Ј-ggagatctctacggatggactttctttactgagg-3Ј) that amplify the complete open reading frame of ABIN-3. As a control for cDNA integrity, either RT-PCR for a ␤-actin fragment was performed using 5Ј-gaactttgggggatgctcgc-3Ј and 5Ј-tggtgggcatgggtcagaag-3Ј primers, or RT-PCR for GAPDH was performed using 5Ј-tgaaggtcggagtcaacggatttggt-3Ј and 5Ј-catgtgggccatgaggtccaccac-3Ј primers. For ABIN-3 protein expression analysis in THP-1 monocytes, 7 ϫ 10 6 cells were seeded in 10-cm Petri dishes and were either left untreated or stimulated with LPS (1 g/ml) or TNF (1000 IU/ml) for various time periods. Subsequently, cell lysates were prepared and immunoblotted with a rabbit polyclonal ABIN-3 antibody raised against an ABIN-3-specific peptide (NH 2 -CDVQHKANGLSSVKKVHP-COOH) coupled to keyhole limpet hemocyanin. For real-time quantitative PCR, total RNA of PBMC was prepared using the RNeasy Mini Kit (Qiagen). Purified RNA was reverse-transcribed with Superscript II RNase H (Invitrogen) according to the manufacturer's protocol. The expression levels of ABIN-3 and GAPDH were determined by real-time quantitative PCR, using a LightCycler FastStart DNA Master PLUS SYBR Green I kit (Roche Applied Science). Forward and reverse primers for human ABIN-3 were, respectively, 5Ј-caaaggaaaaggaacattac-3Ј and 5Ј-tgctgtagctcctctttctc-3Ј. 
Primers for GAPDH were the RT 2 PCR primer set from SuperArray (Frederick, MD). The cDNA copy number of each gene was determined using a six-point standard curve. Standard curves were run with each set of samples, the correlation coefficients (r 2 ) for the standard curves being Ͼ0.98. All results were normalized with respect to the expression of GAPDH. To confirm the specificity of the PCR products, the melting profile of each sample was determined using the LightCycler, and by heating the samples from 60°C to 95°C at a linear rate of 0.10°C/s while measuring the fluorescence emitted. Analysis of the melting curve demonstrated that each pair of primers amplified a single product. In all cases, the PCR products were checked for size by agarose gel separation and ethidium bromide staining to confirm that a single product of the predicted size was amplified. For ABIN-3, each run consisted of an initial denaturation time of 5 min at 95°C and 40 cycles at 95°C for 8 s, 56°C for 8 s, and 72°C for 15 s. For GAPDH, the run consisted of 40 cycles at 95°C for 15 s, 58°C for 15 s, and 72°C for 25 s. Animal Treatment Protocols-Female C57BL/6 mice (8 -12 weeks old) were purchased from Charles River (Sulzfield, Germany). All animals were maintained under standard specific pathogen-free conditions and received humane care in concordance with the National Institutes of Health guidelines and with the legal requirements in Belgium. All animal experiments were performed in accordance with protocols approved by the Institutional Animal Care and Research Advisory Committee. For adenovirus infection, mice were intravenously injected with a total of 5 ϫ 10 9 pfu of virus diluted in pyrogen-free phosphate-buffered saline. In preliminary experiments, we found that adenoviral transgene expression in the liver is maximal 3 days after infection. Therefore, mice were challenged with LPS/GalN 3 days after infection. In the LPS/GalN-induced model of acute lethal hepatitis, mice were injected intraperitoneal with 200 ng of LPS in combination with 20 mg of GalN (Sigma), corresponding to the LD 100 determined in preliminary studies. Statistics-All data represent at least three independent experiments and are expressed as mean values Ϯ S.D. Survival curve was compared using a log rank 2 test, and the level of probability was noted (*, p Ͻ 0.05; **, p Ͻ 0.01; and *** p Ͻ 0.0001). Identification of LIND as an ABIN- The protein sequences of the A20-binding inhibitors of NF-B ABIN-1 and ABIN-2 show significant homology over a region of ϳ70 amino acids. It is in this region that the previously described ABIN homology domain 1 (AHD1) and AHD2 are located (26) (Fig. 1). Using this homologous region in BLAST searches, we identified LIND, a protein that is induced in human mononuclear phagocytes infected with Listeria (13), as a potential ABIN protein. Comparison of the full-length protein sequence of LIND with the sequences of ABIN-1 and ABIN-2 revealed that LIND was much more homologous to ABIN-1 than to ABIN-2. Besides the AHD1 and AHD2 regions of homology, LIND and ABIN-1 also share a third region of strong homology, indicated as AHD3, which is not present in ABIN-2 (Fig. 1). Because of this strong sequence homology with ABIN-1 and ABIN-2, we will henceforth refer to LIND as ABIN-3 (ϭTNIP3). To test if ABIN-3, besides sequence homology, also shows functional homology with ABIN-1 and ABIN-2, we investigated whether ABIN-3 could interact with the zinc finger protein A20. 
Therefore, expression plasmids for E-tagged ABIN-3 and FLAG-tagged A20 were cotransfected in HEK293T cells, followed by immunoprecipitation with an anti-FLAG tag antibody. Immunoblotting with anti-E tag revealed that ABIN-3 indeed coimmunoprecipitated with A20, indicating that both proteins can associate with each other in mammalian cells (Fig. 2A). In addition to interacting with A20, ABIN-1 and ABIN-2 are also characterized by the ability to inhibit the activation of NF-κB in response to TNF, IL-1β, and TPA (14,15). To test if ABIN-3 shares this NF-κB-inhibiting activity, we coexpressed ABIN-3 with an NF-κB-dependent luciferase reporter gene in HEK293T cells. The effects of the NF-κB inhibitor A20 and the irrelevant protein GFP were used as positive and negative controls, respectively. ABIN-3 was indeed able to inhibit NF-κB-dependent luciferase expression induced by TNF, IL-1β, or TPA (Fig. 2B). Taken together, this strong sequence and functional homology with ABIN-1 and ABIN-2 identifies LIND as a novel A20-binding inhibitor of NF-κB activation, named ABIN-3. ABIN-3 Is an LPS-inducible Protein-Tissue distribution of ABIN-3 mRNA was investigated by PCR amplification of a cDNA panel containing first-strand cDNA samples from 24 different human tissues. No ABIN-3 mRNA could be detected in heart, salivary gland, adrenal gland, pancreas, ovary, or fetal brain. High levels of ABIN-3 mRNA were detected in most of the other tissues, except for kidney and bone marrow, both of which showed only a low expression level of ABIN-3 mRNA (Fig. 3A). We subsequently analyzed ABIN-3 mRNA expression by semi-quantitative RT-PCR on mRNA isolated from various human cell lines such as THP-1, HEK293, and HepG2. Constitutive expression of ABIN-3 mRNA could not be detected in any of the cell lines (data not shown). However, a clear induction of ABIN-3 mRNA could be observed in THP-1 monocytes after stimulation for 3 h with LPS (Fig. 3B). In contrast, stimulation of THP-1 cells with TNF only led to a slight induction of ABIN-3 mRNA. To investigate if the observed induction of ABIN-3 mRNA in THP-1 cells was also reflected at the protein level, polyclonal antibodies against ABIN-3 were generated in rabbits, and used to analyze the expression of ABIN-3 protein in THP-1 cells treated with LPS or TNF. Consistent with the RT-PCR data, ABIN-3 protein could not be detected in unstimulated cells. However, after 6-h LPS treatment ABIN-3 protein was clearly visible (Fig. 3C). In contrast, stimulation of THP-1 cells with TNF did not lead to detectable expression levels of ABIN-3 protein (data not shown). We also evaluated LPS-inducible expression of ABIN-3 in primary human monocytes, which were selected by adherence from peripheral blood mononuclear cells of healthy donors. These monocytes were stimulated either with vehicle or with LPS for 2 or 20 h. Total RNA was isolated, and expression of ABIN-3 mRNA was analyzed by semi-quantitative RT-PCR (Fig. 3D) as well as by real-time quantitative PCR (Fig. 3E). In both cases, expression of ABIN-3 mRNA was induced already slightly after 2-h treatment with LPS, and was more pronounced after 20 h. All together, these data demonstrate that LPS is a potent inducer of ABIN-3 expression in monocytic cells. ABIN-3 Inhibits LPS/TLR4-induced NF-κB Activation-The above results show that expression of ABIN-3 is inducible by LPS and thus suggest a role for ABIN-3 in the LPS/TLR4-induced pathway to NF-κB.
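The relative quantification behind the real-time PCR data above (a six-point standard curve for each gene, with ABIN-3 normalized to GAPDH) can be sketched in a few lines of Python; the Ct values, copy numbers and sample labels below are hypothetical and only illustrate the arithmetic, not the actual measurements.

import numpy as np

# Hypothetical six-point standard curve: known cDNA copy numbers and measured Ct values.
std_copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6, 1e7])
std_ct = np.array([31.2, 27.9, 24.5, 21.1, 17.8, 14.4])

# Fit Ct = slope * log10(copies) + intercept; r^2 should exceed 0.98, as required above.
slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)
pred = slope * np.log10(std_copies) + intercept
r2 = 1 - np.sum((std_ct - pred) ** 2) / np.sum((std_ct - std_ct.mean()) ** 2)

def copies_from_ct(ct):
    """Interpolate a copy number from the standard curve."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical Ct values (ABIN-3, GAPDH) for vehicle- vs. LPS-treated monocytes.
samples = {"vehicle": (33.0, 18.2), "LPS_20h": (26.5, 18.0)}
for name, (ct_abin3, ct_gapdh) in samples.items():
    norm = copies_from_ct(ct_abin3) / copies_from_ct(ct_gapdh)  # ABIN-3 normalized to GAPDH
    print(f"{name}: ABIN-3/GAPDH = {norm:.2e} (standard-curve r^2 = {r2:.3f})")

A run would only be accepted if the standard-curve r² exceeds 0.98, mirroring the acceptance criterion stated in the methods.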
To investigate whether ABIN-3 prevents LPS/TLR4-induced NF-κB activity, we first tested the effect of ABIN-3 overexpression on NF-κB-dependent luciferase reporter gene expression in response to transient TLR4 overexpression in HEK293T cells, which as such is already sufficient to activate NF-κB. The upper panel of Fig. 4A illustrates that ABIN-3 expression significantly reduced TLR4-induced NF-κB-dependent luciferase gene expression. To evaluate whether ABIN-3 also inhibits the expression of an endogenous NF-κB target gene, we also analyzed the TLR4-induced production of IL-8 in the same experiment. IL-8 levels in the HEK293T cell supernatant were increased 6-fold upon TLR4 expression (Fig. 4A, lower panel). Consistent with the NF-κB inhibitory effect of ABIN-3 in the luciferase reporter assay and the fact that IL-8 expression is known to be at least partially NF-κB-dependent (18,27), coexpression of ABIN-3 significantly reduced the expression of IL-8 in response to TLR4. To investigate the effect of ABIN-3 in a more physiologically relevant cell line, we next investigated whether ABIN-3 also inhibits LPS/TLR4-induced expression of an NF-κB-dependent luciferase reporter gene in the RAW264.7 macrophage cell line. As can be seen in Fig. 4B, ABIN-3 expression indeed significantly reduced LPS-induced luciferase expression in RAW264.7 macrophages, further establishing the function of ABIN-3 as an inhibitor of LPS/TLR4-induced NF-κB-dependent gene expression. For the previously described NF-κB inhibitors ABIN-1 and ABIN-2, it was shown that their AHD2 region is essential for NF-κB inhibition (26). In addition, ABIN-1 and ABIN-2 have been shown to bind A20 through the more upstream AHD1. To elucidate whether AHD1 and AHD2 of ABIN-3 have a similar function, we made deletion mutants of ABIN-3 that either lacked AHD1 (ABIN-3 ΔAHD1) or AHD2 (ABIN-3 ΔAHD2) and evaluated the binding of these mutants to A20 as well as their NF-κB-inhibiting potential. Transient overexpression of FLAG-tagged A20 together with E-tagged ABIN-3 WT, ABIN-3 ΔAHD1, or ABIN-3 ΔAHD2, followed by immunoprecipitation of A20 with an anti-FLAG antibody, clearly demonstrated that ABIN-3 ΔAHD2 and ABIN-3 WT bind equally well to A20. In contrast, ABIN-3 ΔAHD1 did not coprecipitate with A20, showing that the binding of ABIN-3 to A20 requires AHD1 (Fig. 5A). Similar results were obtained when binding was studied via yeast two-hybrid experiments (data not shown). The NF-κB-inhibiting effect of the ABIN-3 deletion mutants was analyzed by means of an NF-κB luciferase reporter gene test in HEK293T cells. Whereas both ABIN-3 WT and ABIN-3 ΔAHD1 significantly reduced TLR4-induced expression of an NF-κB-dependent luciferase reporter gene, ABIN-3 ΔAHD2 had no effect anymore (Fig. 5B). This indicates that AHD2 is essential for the NF-κB-inhibiting function of ABIN-3, whereas AHD1 is not. Because AHD1 is essential for ABIN-3/A20 binding, these results also indicate that ABIN-3 does not need to bind A20 to prevent NF-κB activation. We next investigated the level in the NF-κB signaling pathway at which ABIN-3 interferes with LPS/TLR4-induced NF-κB activation. Therefore, we analyzed the effect of ABIN-3 coexpression on NF-κB activation induced by overexpression of the TLR4 signaling proteins MyD88, IRAK1, and TRAF6, as well as by IKKβ, which is acting more downstream in the pathway and mediates NF-κB activation by all stimuli that activate the "classic" NF-κB pathway. As shown in Fig.
6, ABIN-3 inhibited NF-κB activation induced by MyD88, IRAK1, and TRAF6 but not that induced by IKKβ. This suggests that ABIN-3 interferes with LPS/TLR4-induced NF-κB activation at a level downstream of TRAF6 but upstream of IKKβ. ABIN-3 Inhibits LPS-induced NF-κB Activation in the Liver and Protects Mice against LPS/GalN-induced Mortality-To validate the NF-κB inhibitory function of ABIN-3 in vivo, we also tested the effect of ABIN-3 on LPS-induced NF-κB activation in mouse liver. For this purpose, mice were infected with an adenovirus expressing an NF-κB-dependent luciferase reporter gene, together with an adenovirus expressing either an ABIN-3 transgene (AdABIN-3), or no transgene (AdRR5) as a control. Three days after AdABIN-3 infection, ABIN-3 transgene expression was clearly detectable in total liver cell extracts by Western blotting (Fig. 7A, upper part). LPS injection of AdRR5-infected mice resulted in a 13-fold increase in NF-κB-dependent luciferase activity in the liver (Fig. 7A, lower part). However, consistent with the NF-κB inhibitory effect of ABIN-3 in vitro, LPS-induced NF-κB activity was substantially lower in the liver of AdABIN-3-infected mice. Because these data indicate that ABIN-3 can inhibit LPS-induced NF-κB activity in the liver, we investigated the effect of adenoviral gene transfer of ABIN-3 in the murine model of LPS/GalN-induced acute liver failure. Therefore, C57BL/6 mice were injected intravenously with 5 × 10⁹ pfu of AdRR5 or AdABIN-3. Three days later, mice were challenged intraperitoneally with a lethal dose of LPS/GalN. In the control group, all mice died within 10 h after LPS/GalN injection. In contrast, AdABIN-3-infected mice were significantly (p = 0.0038) protected against LPS/GalN-induced mortality, as one-third of them survived the LPS/GalN challenge (Fig. 7B). These observations clearly demonstrate that ABIN-3-mediated NF-κB inhibition in the liver is associated with a protective effect against LPS/GalN-induced liver failure.
FIGURE 2. Functional homology of LIND/ABIN-3 with ABIN-1 and ABIN-2. Coimmunoprecipitation of ABIN-3 with A20. A, 2 × 10⁶ HEK293T cells were transiently transfected with 3 μg of E-tagged ABIN-3 and 1.5 μg of FLAG-tagged A20 expression vectors, as indicated. Immunoprecipitation (IP) of A20 was performed with anti-FLAG tag antibody, and coprecipitating ABIN-3 was detected by immunoblotting (WB) with anti-E tag antibody (upper panel). Aliquots of total lysates (TL) were analyzed for expression of ABIN-3 (middle panel) and A20 (lower panel) by immunoblotting with anti-E tag and anti-FLAG tag, respectively. B, effect of ABIN-3 on NF-κB-dependent gene expression. 2 × 10⁵ HEK293T cells were transiently transfected with 300 ng of expression plasmid for GFP, A20, or ABIN-3, each time with 100 ng of pUT651 and 100 ng of pNFconluc. Cells were left untreated or stimulated with 200 ng/ml TPA, 40 ng/ml IL-1β, or 1000 IU/ml TNF for 6 h. All cells were lysed 24 h after transfection. Cell lysates were assayed for Luc and β-galactosidase activity. Luc values were normalized for β-galactosidase values to adjust for differences in transfection efficiency (plotted as Luc/Gal). Each bar represents the mean ± S.D. of three samples.
DISCUSSION Although essential to combat bacterial infections, LPS-induced activation of NF-κB acts as a double-edged sword. Inappropriate or prolonged activation of NF-κB can lead to an exaggerated immune response, which might be harmful to the host. Therefore, to prevent excessive immune responses to LPS, the host may acquire mechanisms that dampen the response to LPS or even confer unresponsiveness to successive triggers with LPS, a phenomenon named LPS tolerance (28). Down-regulating LPS-induced responses can at least partially be accomplished by the LPS-induced production of NF-κB inhibitory proteins, which then provide a negative feedback loop (reviewed in Refs. 10 and 29). For example, LPS-inducible alternative splicing of MyD88 can shut down LPS-induced NF-κB activation by preventing the recruitment of IRAK4 to MyD88 (30, 31). Several other LPS-inducible proteins were shown to inhibit NF-κB activation by targeting different steps in the TLR4 signaling pathway and include, among others, A20 (12), SOCS1 (32, 33), IRAK-M (34), and ST2 (35). Here we identify human ABIN-3 as a novel protein that fulfils two essential criteria to be implicated in the negative feedback regulation of LPS-induced NF-κB activation. First, ABIN-3 expression was induced by LPS in monocytic cells. Second, we could show that expression of ABIN-3 inhibits NF-κB-dependent gene expression in response to LPS, both in vitro as well as in vivo. Human ABIN-3 shows partial sequence homology with ABIN-1 and ABIN-2 and shares with these proteins the ability to bind A20 and to inhibit TNF-, IL-1-, and LPS-induced NF-κB activation upon overexpression in HEK293T cells (14,15). These overlapping activities suggest that the function of ABIN-1, -2, and -3 might be at least partially redundant. The fact that ABIN-2-deficient mice are normal and do not show any defect in NF-κB activation in response to different stimuli might also reflect such redundancy (36). However, we cannot exclude cell type- or stimulus-specific effects of distinct ABINs on NF-κB signaling. The more restricted expression of ABIN-3 in specific tissues, as well as its inducibility by LPS, suggests that ABIN-3 might indeed have a unique function. In this respect, it is worth mentioning that coimmunoprecipitation experiments have shown that ABIN-3 does not compete with the other ABINs for binding to A20 (data not shown). Moreover, our ongoing yeast two-hybrid experiments demonstrate different protein-protein interactions for each ABIN family member. It is still unclear how ABINs interfere with NF-κB signaling. Our finding that ABIN-3 still prevents TRAF6-induced NF-κB activation, but no longer IKKβ-induced NF-κB activation, indicates that ABIN-3 interferes with LPS/TLR4 signaling at the level of or downstream of TRAF6 but upstream of IKKβ. A similar effect on proximal signaling was previously shown for ABIN-1 and -2, which inhibit TNF-induced NF-κB activation downstream of TRAF2 and upstream of IKKβ. The binding of ABIN-3 to A20 suggests that its NF-κB inhibitory effect might be mediated by A20. A20 was recently proposed to inhibit NF-κB activation by de-ubiquitinating several proteins, including TRAF6, RIP, and IKKγ (12, 37-39). In fact, while this report was prepared, ABIN-1 was described to physically link A20 to IKKγ by directly binding IKKγ, thus facilitating A20-mediated de-ubiquitination of IKKγ (39). However, our data demonstrate that the NF-κB inhibitory potential of ABIN-3 does not correlate with ABIN-3/A20 binding, as an AHD1-deletion mutant of ABIN-3, which can no longer bind A20, is still fully capable of inhibiting NF-κB activation. Although we cannot exclude that ABINs somehow regulate or modulate the function of A20, these findings make it unlikely that the NF-κB inhibitory effect of ABIN-3 is exclusively mediated by A20.
Another model that could explain the NF-κB inhibitory effect of ABIN-3 implicates the possibility that ABIN-3 prevents the formation of specific protein-protein interactions in the cell. Deletion analysis of ABIN-3 showed that, like in ABIN-1 and -2 (26), the AHD2 region is necessary for its NF-κB-inhibiting function. In this context it is worth mentioning that AHD2 shows strong sequence homology with a region in IKKγ that was recently shown to mediate the binding of IKKγ to polyubiquitin chains (26, 40, 41). This allows IKKγ to bind polyubiquitinated receptor interacting protein (RIP) 1 in the TNF signaling pathway, which is essential for TNF-induced NF-κB activation. Although IKKγ/RIP1 binding most likely also involves other surrounding amino acids that provide further specificity, similar ubiquitin-dependent protein-protein interactions might be mediated by ABINs via their AHD2. In this way, ABIN-3 might also compete with IKKγ or other signaling proteins to form crucial protein-protein interactions in response to TLR4 triggering. Because RIP1 is not involved in the LPS/TLR4-induced MyD88-dependent signaling pathway that is inhibited by ABIN-3 (42), ubiquitin-dependent protein-protein interactions different from RIP1/IKKγ must be implicated as potential targets for ABIN-3.
FIGURE 5. Role of AHD1 and AHD2 for ABIN-3/A20 binding and ABIN-3-mediated inhibition of TLR4-induced NF-κB activation. A, coimmunoprecipitation of ABIN-3 deletion mutants with A20. 1.2 × 10⁶ HEK293T cells were transiently transfected with 1 μg of FLAG-tagged A20 together with 1 μg of E-tagged ABIN-3 WT, ABIN-3 ΔAHD1, or ABIN-3 ΔAHD2, as indicated. Immunoprecipitation (IP) of A20 was performed with anti-FLAG tag antibody, and coprecipitating ABIN-3 WT or deletion mutants were detected by immunoblotting (WB) with anti-E tag antibody (upper panel). Aliquots of total lysates (TL) were analyzed for expression of ABIN-3 WT and deletion mutants (middle panel) and A20 (lower panel) by immunoblotting with anti-E tag and anti-FLAG tag, respectively. B, effect of ABIN-3 deletion mutants on TLR4-induced NF-κB activation. 4 × 10⁴ HEK293T cells were transiently transfected with 20 ng of pUT651, 20 ng of pNFconluc, and 20 ng of empty vector (/), or an expression plasmid for TLR4. In each case, cells were also transfected with 40 ng of an expression plasmid for ABIN-3 WT, ABIN-3 ΔAHD1, or ABIN-3 ΔAHD2. All cells were lysed 24 h after transfection. Cell lysates were assayed for Luc and β-galactosidase activity. Luc values were normalized for β-galactosidase values to adjust for differences in transfection efficiency (plotted as Luc/Gal). Each bar represents the mean ± S.D. of three samples.
We were unable to further establish the role of ABIN-3 in LPS signaling by RNA interference-mediated knockdown of ABIN-3, because we could obtain no sufficient reduction of ABIN-3 expression (data not shown). Elucidation of the role of ABIN-3 by generating ABIN-3 knock-out mice by homologous recombination might be an alternative approach. However, whereas ABIN-1 and ABIN-2 are expressed in murine as well as in human cells, we were unable to identify a functional murine orthologue of the human ABIN-3 gene. Database searches revealed a gene termed "weakly similar to ABIN-3," but closer examination of its sequence showed that it encodes a smaller protein that does not contain the complete AHD2. Moreover, as overexpression of this murine ABIN-3-like protein did not inhibit NF-κB activation (data not shown), it does not qualify as a true ABIN.
Although a functional murine ABIN-3 gene might not exist, it is worth mentioning that expression of human ABIN-3 is able to prevent LPS-induced NF-κB activation in murine cells as reflected by our experiments with murine RAW264.7 macrophages as well as our in vivo mouse experiments. Multiple stimuli can activate NF-κB by partially overlapping signaling pathways. Therefore, ABIN-3 might also affect the activation of NF-κB by other stimuli than the ones tested in this study (TNF, IL-1, TPA, and LPS). In this respect, it is worth mentioning that ABIN-3 has previously been described as LIND (Listeria INDuced), a protein that is induced in mononuclear phagocytes infected with Listeria (13). Because Listeria is not a Gram-negative bacterium and thus has no LPS, a TLR agonist other than LPS must be responsible for inducing ABIN-3 expression, raising the possibility that ABIN-3 acts as a negative regulator of inflammatory responses initiated by a wide range of TLRs. In addition, it cannot be excluded that ABIN-3 also regulates pathways different from NF-κB, such as the activation of interferon regulatory factor 3 and AP-1. In contrast to its ability to inhibit TPA-induced NF-κB activation (Fig. 2B), ABIN-3 did not prevent TPA-induced AP-1 activation in the same cells (data not shown), demonstrating that ABIN-3 does not act in a nonspecific way. Further studies will be needed to reveal the complex interplay between ABIN-3 and other signaling proteins in the regulation of different signaling pathways. In addition to the NF-κB inhibitory potential of ABIN-3 in cultured cells, we were able to show that adenoviral gene transfer of ABIN-3 inhibits LPS-induced expression of an NF-κB-dependent luciferase reporter gene in the liver, and partially protects mice against LPS/GalN-induced mortality. In this model of acute liver failure, LPS induces the production and release of several cytokines, including TNF, IL-1, and IL-6, whose production is known to be NF-κB-dependent. These cytokines subsequently contribute to the pathogenesis of hepatic liver failure (43). As studies with NF-κB decoy oligonucleotides have previously been shown to prevent LPS-induced fatal liver failure (44), the NF-κB inhibitory effect of ABIN-3 is most likely responsible for the observed protection against LPS/GalN-induced mortality. This suggestion is reinforced by our observation that human ABIN-3 also inhibits NF-κB activation in murine macrophages, which are the predominant cytokine-producing cells after LPS challenge (45). On the other hand, other NF-κB-independent effects of ABIN-3 might also account for the protection of mice against LPS/GalN, as the closely homologous ABIN-1 protein was recently shown to possess an anti-apoptotic effect in hepatocytes, enabling it to protect mice against TNF/GalN-induced mortality (46). However, we were unable to show a similar anti-apoptotic effect for ABIN-3 (data not shown). In conclusion, we identified ABIN-3 as a novel player in the negative feedback regulation of LPS-induced NF-κB activation. Because NF-κB has an important role in the development and progress of septic shock and different autoimmune and chronic inflammatory diseases, strategies that increase the expression or the activity of ABIN-3 might have an important therapeutic potential.
FIGURE 7. A, mice were infected with AdNF-κBLuc and AdLacZ, together with either control AdRR5 (n = 4) or AdABIN-3 (n = 5) adenovirus. Three days later, mice were injected intraperitoneally with 200 ng of LPS or vehicle.
Four hours after challenge, mice were killed and liver homogenates were prepared. Adenoviral expression of the ABIN-3 transgene in the liver was analyzed by Western blotting using an anti-E tag antibody (upper panel; each lane represents a different mouse). NF-κB activity was analyzed by determining Luc and β-galactosidase activities (lower panel). Luc values were normalized for β-galactosidase values to adjust for differences in infection efficiency (plotted as Luc/Gal). B, ABIN-3 partially protects mice against LPS/GalN-induced mortality. Mice were injected intravenously with 5 × 10⁹ pfu of AdABIN-3 or AdRR5 as a control and challenged 3 days later with 200 ng of LPS plus 20 mg of GalN. Survival is presented as a combined Kaplan-Meier plot of three independent experiments (n = 15 in total). Mortality was counted over a period of 30 h, after which there were no further deaths. **, p < 0.01.
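The survival comparison in Fig. 7B (combined Kaplan-Meier plot, log-rank test) can be reproduced schematically with the lifelines library; the survival times below are invented and merely mirror the design (15 mice per group, a 30-h observation window, and roughly one-third of AdABIN-3 mice surviving), so the resulting p-value is only illustrative.

import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical survival times (hours after LPS/GalN challenge) for two groups of 15 mice.
# event_observed = 1 means the mouse died within the 30-h observation window.
hours_adrr5   = np.array([6, 6, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 9, 10, 10])
died_adrr5    = np.ones(15)                       # all control mice died within 10 h
hours_adabin3 = np.array([7, 8, 8, 9, 9, 10, 10, 11, 12, 14, 30, 30, 30, 30, 30])
died_adabin3  = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0])  # survivors censored at 30 h

kmf = KaplanMeierFitter()
kmf.fit(hours_adrr5, died_adrr5, label="AdRR5")
ax = kmf.plot_survival_function()
kmf.fit(hours_adabin3, died_adabin3, label="AdABIN-3")
kmf.plot_survival_function(ax=ax)

result = logrank_test(hours_adrr5, hours_adabin3,
                      event_observed_A=died_adrr5, event_observed_B=died_adabin3)
print(f"log-rank p-value: {result.p_value:.4f}")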
2018-04-03T01:29:13.308Z
2007-01-05T00:00:00.000
{ "year": 2007, "sha1": "79e85f7a891a27753adde114a59d00f14b3be154", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/282/1/81.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "3dd648cc2d0258fefa205ff2fc6073d186f4184f", "s2fieldsofstudy": [ "Medicine", "Biology", "Chemistry" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
256667163
pes2o/s2orc
v3-fos-license
Competency profiles for evidence-informed policy-making (EIPM): a rapid review Background Evidence-informed policy-making (EIPM) requires a set of individual and organizational capacities, linked with background factors and needs. The identification of essential knowledge, skills and attitudes for EIPM can support the development of competency profiles and their application in different contexts. Purpose To identify elements of competency (knowledge, skills and attitudes) for EIPM, according to different professional profiles (researcher, health professional, decision-maker and citizen). Methods Rapid umbrella review. A structured search was conducted and later updated in two comprehensive repositories (BVSalud and PubMed). Review studies with distinctive designs were included, published from 2010 onwards, without language restrictions. Assessment of the methodological quality of the studies was not performed. A meta-aggregative narrative synthesis was used to report the findings. Results Ten reviews were included. A total of 37 elements of competency were identified, eight were categorized as knowledge, 19 as skills and 10 as attitudes. These elements were aggregated into four competency profiles: researcher, health professional, decision-maker and citizen. The competency profiles included different sets of EIPM-related knowledge, skills and attitudes. Strengths and limitations This study is innovative because it aggregates different profiles of competency from a practical perspective, favouring the application of its results in different contexts to support EIPM. Methodological limitations are related to the shortcuts adopted in this review: complementary searches of the grey literature were not performed, and the study selection and data extraction were not conducted in duplicate. Final considerations: conclusions and implications of the findings EIPM requires the development of individual and organizational capacities. This rapid review contributes to the discussion on the institutionalization of EIPM in health systems. The competency profiles presented here can support discussions about the availability of capacity and the need for its development in different contexts. Supplementary Information The online version contains supplementary material available at 10.1186/s12961-023-00964-0. Background In the context of health systems, evidence-informed policy-making (EIPM) results from systematic and transparent processes to access, assess, adapt and apply scientific evidence in decision-making processes [1]. EIPM promotes the use of scientific knowledge in decision-making processes and in the development of innovative methods and strategies in the field of health systems. It also fosters technical cooperation between organizations and other interested social groups that produce and apply this scientific knowledge [2]. Thus, EIPM advocates the incorporation of scientific evidence as an input for decision-making processes in the formulation and implementation of health policies. In this context, evidence-informed decision-making (EIDM) emphasizes that decisions should be informed by the best available evidence, as well as other factors such as context, public opinion, equity, feasibility of implementation, accessibility, sustainability and acceptability to stakeholders [3]. In the context of EIDM institutionalization efforts, knowledge translation (KT) is a prior foundation to be considered [3].
Knowledge translation is a dynamic and interactive process that includes synthesis, dissemination, exchange and ethical application of knowledge to improve population health, provide more effective health services and products, and strengthen the health system [4]. This definition is part of a complex system of interactions, also known as knowledge translation platforms [5], which articulates producers, mediators and users of scientific knowledge, in different intensities, complexities and levels of involvement, depending on the nature of the research and the needs in different contexts. Therefore, four elements of knowledge translation are emphasized: synthesis, dissemination, exchange and practical application of knowledge in the formulation, implementation and evaluation of health policies, at any level of management of health systems and services. To include scientific evidence in decision-making processes, through systematic, transparent and balanced knowledge translation approaches, it is necessary that individual and institutional capacities are recognized and available. These capacities aim not only to support the use of structured and replicable methods, but also to consider the distinct factors that influence a priority public health problem and the process of implementing interventions to address it. Thus, the decisions to act on the causes and consequences of the problem would be informed in a comprehensive way [6][7][8]. This set of capacities constitutes a profile, considered from the perspective of professional competencies [9,10]. The concept of competency considers cognitive, psychomotor and attitudinal attributes as elements of a competent practice [11]. In this regard, competency includes the mobilization of different resources to solve, with relevance and success, problems of professional practice. These resources or attributes are the knowledge, skills and attitudes mobilized, in an integrated way, to conduct professional actions [12,13]. Although there are studies on the different individual and institutional capacities needed, a global synthesis is not yet available that systematically brings together all these elements, following the logic of competency profiles. Defining the essential competencies for EIPM professionals is key for identifying individual and institutional capacity development needs. This is necessary for establishing knowledge translation platforms in different organizational contexts. In addition, an EIPM competency profile also contributes to the theoretical discussion, but from an applied perspective, supporting the planning and implementation of EIPM initiatives in different contexts. This study is part of an initiative commissioned by the Brazilian Ministry of Health to support EIPM development in Brazil and aimed to identify EIPM-related competency (knowledge, skills and attitudes). The competency elements were classified according to different professional profiles (researcher, health professional, decision-maker and citizen), considered from a broad conceptual perspective, which can be applied to different socioeconomic contexts and organizational scenarios. The results of this study also supported the development of a specific competency profile for EIPM adapted to the Brazilian context. Methods This study is a rapid umbrella review, which followed a prospective protocol (https://zenodo.org/record/6539137), according to the steps described in this section, including deviations from the protocol.
The planning and execution of this review followed the recommendations of the World Health Organization manual for rapid reviews [14] and its report adhered to PRISMA 2020 [15]. Selection criteria The following study types were included: overviews of systematic reviews, systematic reviews, scoping reviews and (systematic or narrative) reviews of qualitative studies, that analyzed and/or described professional competencies (knowledge, skills and attitudes) for EIPM, without language restriction, from 2010 onwards (considered by the authors of this rapid review as the time when there has been a growth in global interest in the EIPM institutionalization). Review question The review question was: What are the general and specific competencies (knowledge, skills and attitudes) for professional performance in EIPM? The question was structured according to the population, concept, context (PCC) acronym, as presented in Table 1. Search strategies and indexed databases Searches were conducted on two comprehensive and upto-date databases, BVSalud and PubMed, on 16 March 2022. The search strategies are presented in Table 2. The protocol of this review included hand searching reference lists of the selected studies and relevant institutional websites. However, we did not consider this necessary to perform because the retrieved studies provided sufficient information for the purpose of this rapid review. Screening and selection of studies Duplicates were excluded, and three reviewers (JOMB, DMMR, CS) independently screened titles, abstracts and full texts, but not in duplicate, supported by the Rayyan platform [16]. Individual doubts were resolved by consensus with a second reviewer (JOMB). Prior to data extraction, a reviewer (CS) read the full texts of selected studies to confirm eligibility. Data extraction One reviewer (CS) extracted data and two other reviewers (JOMB and DMMR) verified the extraction. An electronic spreadsheet was used to systematize the following data from the individual studies selected for inclusion: author, year of publication, purpose of the study, study design, country where the study was carried out, context, target population, competencies identified, barriers and facilitators (when mentioned), knowledge gaps identified by the study, study limitations, conflict of interests declared and funding (when available). Data synthesis We performed a meta-aggregative narrative synthesis [14], based on quantitative and qualitative data from included studies, to combine the individual findings. Two classifications were used to categorize the findings. The first, regarding the competency element, considered the following categories, usually applied in the definition of competency profiles, as the knowledge, skills and attitudes (KSA) model: (1) knowledge: different types of knowledge and information; (2) skills: improved movements and non-verbal communication intertwined with knowledge, expressed as the psychomotor domain in the manipulation and construction of processes and products; (3) attitudes: feelings, positioning and values linked to skills and knowledge in the performance of professional tasks [17]. 
The second classification considered four professional profiles of interest: (1) researcher: professional who works in the production of scientific research; (2) health professional: professional who works in the provision of health services; (3) health systems and services decision-maker: professional who works in the management of health services and/or systems, at any level; and (4) citizen: individual inserted in civil society, participating or not in organizations representing specific groups. These categories were used to aggregate the different competency elements identified in this review. This process often led to overlapping elements in the different professional profiles, for example, the same element may be present in more than one profile. Methodological quality assessment We did not perform a methodological quality assessment of the included studies. Although it was included in the protocol of this review, we decided not to proceed with this step because the nature of the question of interest and the scope of this review, and because it would make little contribution to our practical goal. Shortcuts adopted and deviations from the protocol We adopted methodological shortcuts to reduce the time to conduct this rapid review, considering that its purpose was to inform institutional deliberations on a pre-defined schedule. Among the adopted shortcuts, those that potentially influence the completeness and reliability of the findings were: (1) the searches were only performed in the two repositories, including studies published from 2010 onwards, that is, we did not search the grey literature nor the reference list of included studies. This also is a deviation from the protocol, which included complementary searches. Restricting the grey literature search is a common shortcut for rapid reviews for policy topics, as well as tailoring (generally to adjust) the selection of literature databases to the topic, because the addition of a grey literature search depends on the topic, purpose and timeline [14]. In this review, we considered the potential contribution to the topic addressed and the time required for the complementary search, and decided not to extend the searches for grey literature; (2) selection and data extraction were not duplicated but performed individually and verified by another reviewer; (3) the assessment of the methodological quality of the selected studies was not conducted, and this was the second deviation from the protocol. While an assessment of the methodological quality of included studies is desirable in a review, scoping reviews do not require this step, given the potential variety of methodological designs and the nature of the topic or issue addressed [14]; and (4) the results were synthesized with a metaaggregative approach and presented only descriptively in synthetic tables. Although these shortcuts and deviations from the protocol suggest caution in the interpretation of the results of this review, they are recognized as potential opportunities to reduce the time spent for the development of rapid reviews that are still reliable [14,18,19]. Study selection The searches retrieved 714 documents. Nine duplicates were removed, 705 titles and abstracts were screened, and 35 documents were eligible for full-text reading, 25 of which were excluded for not meeting the inclusion criteria, and two were excluded after data extraction, by consensus of the authors on their eligibility. 
The list of excluded studies with the reasons for exclusion is provided in Additional file 1: Appendix 1. Ten studies were included in this rapid review (Fig. 1). Synthesis of findings General elements of competency in EIPM Most of the studies included in this rapid review did not explicitly present a framework of ideal competencies for EIPM professionals. However, all included studies reported, according to their purposes, elements that were interpreted to find competencies in EIPM. Thus, the allocation of competencies in the categories adopted (knowledge, skills and attitudes) was made observing the best suitability, according to the authors' understanding and consensus, as presented in Table 3 and detailed in Additional file 2: Appendix 2. Competencies were also coded and aggregated, whenever possible, to provide a summarized description of each identified element. The description resulting from this categorization and synthesis process is presented in Table 4, based on the findings of the included studies. Specific elements of competency in EIPM, per professional profile From the included studies, competency elements were identified and assigned to each professional profile in EIPM: (1) Some earlier studies included in this comprehensive review presented competencies related to knowledge translation and EIPM, but with approaches limited to specific profiles [7,[20][21][22][23][24]. To our knowledge, this is the first study that aggregates different competency profiles. The findings of this review showed that there are earlier frameworks of competencies in EIPM that can be incorporated into contextualized discussions, at various levels of health policies and systems. These frameworks present elements of competencies that can be classified as knowledge, skills and attitudes (KSA). These competencies, in turn, must be seen as an integrated and interactive set of individual capacities, which interacts with the organizational environment, to constitute professional profiles with different areas of activity. Despite the profiles being different from each other, the overlapping of some elements was common. Moreover, we acknowledge the need to conduct the reclassification and fill the gaps that a rigid classification may produce on these results. It is also important to emphasize that the practical application of this competencies profile must be broadly anchored in the local needs of each institution and/or professional. Advancing the institutionalization of EIPM requires the recognition of the capacities already available in an institution, which must be compared with the organization's tasks and attributions. It is this contextualization process that will generate the proper competency profile for each situation. Therefore, this study should be seen as a first input. Its application requires understanding the relevance of each element described here to each organization. For example, the competency elements presented above do not need to be associated with a single professional but can guide the composition of a team that has the necessary set of skills. Within the EIPM scope, there is a relevant movement aimed at strengthening the institutionalization of knowledge translation processes within governments, civil society organizations and academic institutions [27][28][29]. However, the lack of tools and frameworks focused on institutional and individual capacities is still a barrier to be overcome. 
The results of this review provide an acknowledgement of the global literature related to the individual capacities needed, and information that can be immediately applied in discussions and deliberations on the institutionalization of EIPM, in all parts of the world. Strengths and limitations The strengths of this rapid review include: (1) being the first to cover different professional profiles, and adopting a friendly format in the categorization and presentation of the findings to allow the immediate use of its results; (2) adopting systematic and transparent methods to provide, in a timely manner, a body of evidence on an issue of high interest in the current EIPM field, inside and outside Brazil; and (3) contributing to identifying and filling gaps related to the situational diagnosis of individual and organizational competencies for EIPM. As previously mentioned, methodological limitations include: (1) being a rapid review, we adopted shortcuts and deviations from the protocol, which may have led to the loss of relevant documents, especially from the grey literature. However, we believe that the set of published studies included in this review has sufficiently provided an overview of the available competency elements; (2) the meta-aggregative synthesis carried out to consolidate the results of the different studies included had a narrative character and may have oversimplified the concepts and definitions presented in the description tables of the competency elements. We believe that the guidance to apply the findings of this review in a manner adapted to each contexts' needs can minimize this limitation, as it will imply a process of re-signification of the findings; (3) the categories used to classify the competency profiles may not be so distinguishable in practice, including elements that are dynamically and interactively correlated. Knowledge, skills and attitudes should be seen as an integrated set of capacities. In the same way, because often there are overlaps and intersections in the profiles presented here, areas of activity should be recognized, rather than actual professional profiles. Conclusions This rapid umbrella review presented elements for professional competency profiles applied to EIPM, contributing to the discussion on the institutionalization of scientific evidence as inputs to systematic, transparent and balanced processes, within the scope of public health policies. The use of these findings will show their usefulness to support strategic planning in health organizations as well as civil society and academic organizations.
2023-02-09T15:01:21.259Z
2023-02-08T00:00:00.000
{ "year": 2023, "sha1": "092d23f02e88d268fe2270dbe10cf0e3440e21c7", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "092d23f02e88d268fe2270dbe10cf0e3440e21c7", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
2867611
pes2o/s2orc
v3-fos-license
Using Cross-Entity Inference to Improve Event Extraction Event extraction is the task of detecting certain specified types of events that are mentioned in the source language data. The state-of-the-art research on the task relies on transductive inference (e.g., cross-event inference). In this paper, we propose a new method of event extraction that makes full use of cross-entity inference. In contrast to previous inference methods, we regard entity-type consistency as a key feature to predict event mentions. We adopt this inference method to improve the traditional sentence-level event extraction system. Experiments show that we obtain an 8.6% gain in trigger (event) identification, and a gain of more than 11.8% for argument (role) classification in ACE event extraction. Introduction The event extraction task in the ACE (Automatic Content Extraction) evaluation involves three challenging issues: distinguishing events of different types, finding the participants of an event and determining the roles of the participants. Recent research on the task has shown the value of transductive inference, such as cross-document, cross-sentence and cross-event inference. Transductive inference is the process of using known instances to predict the attributes of unknown instances. As an example, given a target event, cross-event inference can predict its type by exploiting the related events that co-occur with it within the same document. From the sentence: (1) He left the company. it is hard to tell whether it is a Transport event in ACE, which means that he left the place, or an End-Position event, which means that he retired from the company. But cross-event inference can use a related event "Then he went shopping" within the same document to identify it as a Transport event correctly. As the above example might suggest, the availability of transductive inference for event extraction relies heavily on known evidence of an event occurrence in a specific condition. However, the evidence supporting the inference is normally unclear or absent. For instance, the relation among events is the key clue for cross-event inference to predict a target event type, as shown in the inference process for sentence (1). But event relation extraction itself is a hard task in Information Extraction. So cross-event inference often suffers from false evidence (viz., being misled by unrelated events) or a lack of valid evidence (viz., unsuccessfully extracting related events). In this paper, we propose a new method of transductive inference, named cross-entity inference, for event extraction, which exploits the relations among entities. This method is motivated, first, by the inherent ability of entity types to reveal event types. From the sentences: (2) He left the bathroom. (3) He left Microsoft. it is easy to identify sentence (2) as a Transport event in ACE, which means that he left the place, because nobody would retire (End-Position type) from a bathroom. Compared to the entities in sentences (1) and (2), the entity "Microsoft" in (3) would give us more confidence to tag the "left" event as an End-Position type, because people are used to giving the full name of the place where they retired. The cross-entity inference is also motivated by the phenomenon that entities of the same type often participate in similar events. That gives us a way to predict event type based on entity-type consistency. From the sentence: (4) Obama beats McCain.
it is hard to identify it as an Elect event in ACE, which means Obama wins the Presidential Election, or an Attack event, which means Obama roughs somebody up. But if we have the prior knowledge that the sentence "Bush beats McCain" is an Elect event, and "Obama" was a presidential contender just like "Bush" (strict type consistency), we have ample evidence to predict that sentence (4) is also an Elect event. Indeed, the above cross-entity inference for event-type identification is not the only use of entity-type consistency. As we shall describe below, we can make use of it for all issues of event extraction: For event type: entities of the same type are most likely to participate in similar events, and these events often use consistent or synonymous triggers. For event argument (participant): entities of the same type normally co-occur with similar participants in events of the same type. For argument role: arguments of the same type, for the most part, play the same roles in similar events. With the help of the above characteristics of entities, we can perform a step-by-step inference in this order: Step 1: predicting event type and labeling trigger given the entities of the same type. Step 2: identifying arguments in a certain event given the prior entity type, event type and trigger obtained in step 1. Step 3: determining argument roles in a certain event given the entity type, event type, trigger and arguments obtained in steps 1 and 2. On this basis, we give a blind cross-entity inference method for event extraction in this paper. In the method, we first regard entities as queries to retrieve their related documents from large-scale language resources, and use the global evidence of the documents to generate entity-type descriptions. Second, we determine the type consistency of entities by measuring the similarity of the type descriptions. Finally, given the prior attributes of events in the training data, with the help of the entities of the same type, we perform the step-by-step cross-entity inference on the attributes of test events (candidate sentences). In contrast to other transductive inference methods for event extraction, the cross-entity inference makes every effort to strengthen the effect of entities in predicting event occurrences. Thus the inferential process can benefit from the following aspects: 1) less false evidence, viz. less false entity-type consistency (the key clue of cross-entity inference), because the consistency can be determined more precisely with the help of a full entity-type description obtained from related information on the Web; 2) more valid evidence, viz. more entities of the same type (the key references for the inference), because no entity lacks congeners. Task Description The event extraction task we are addressing is that of the Automatic Content Extraction (ACE) evaluations, where an event is defined as a specific occurrence involving participants. The event extraction task requires that certain specified types of events mentioned in the source language data be detected. We first introduce some ACE terminology to understand this task more easily: Entity: an object or a set of objects in one of the semantic categories of interest, referred to in the document by one or more (co-referential) entity mentions. Entity mention: a reference to an entity (typically, a noun phrase). Event trigger: the main word that most clearly expresses an event occurrence (An ACE event trigger is generally a verb or a noun).
Event arguments: the entity mentions that are involved in an event (viz., participants). Argument roles: the relation of arguments to the event where they participate. Event mention: a phrase or sentence within which an event is described, including trigger and arguments. The 2005 ACE evaluation had 8 types of events, with 33 subtypes; for the purpose of this paper, we will treat these simply as 33 separate event types and do not consider the hierarchical structure among them. Besides, the ACE evaluation plan defines the following standards to determine the correctness of an event extraction: A trigger is correctly labeled if its event type and offset (viz., the position of the trigger word in text) match a reference trigger. An argument is correctly identified if its event type and offsets match any of the reference argument mentions, in other words, correctly recognizing participants in an event. An argument is correctly classified if its role matches any of the reference argument mentions. Consider the sentence: (5) It has refused in the last five years to revoke the license of a single doctor for committing medical errors. (This sentence is selected from the file "CNN_CF_20030304.1900.02" in the ACE-2005 corpus.) The event extractor should detect an End-Position event mention, along with the trigger word "revoke", the position "doctor", the person whose license should be revoked, and the time during which the event happened. It is noteworthy that event extraction depends on previous phases like name identification, entity mention co-reference and classification. Among these, name identification is another hard task in the ACE evaluation and not the focus of this paper. So we skip this phase and instead directly use the entity labels provided by ACE. Related Work Almost all the current ACE event extraction systems focus on processing one sentence at a time (Grishman et al., 2005; Ahn, 2006; Hardy et al., 2006). However, there have been several studies using high-level information from a wider scope: Maslennikov and Chua (2007) use discourse trees and local syntactic dependencies in a pattern-based framework to incorporate wider context to refine the performance of relation extraction. They claimed that discourse information could filter noisy dependency paths as well as increasing the reliability of dependency path extraction. Finkel et al. (2005) used Gibbs sampling, a simple Monte Carlo method used to perform approximate inference in factored probabilistic models. By using simulated annealing in place of Viterbi decoding in sequence models such as HMMs, CMMs, and CRFs, it is possible to incorporate non-local structure while preserving tractable inference. They used this technique to augment an information extraction system with long-distance dependency models, enforcing label consistency and extraction template consistency constraints. Ji and Grishman (2008) were inspired by the hypothesis of "One Sense Per Discourse" (Yarowsky, 1995); they extended the scope from a single document to a cluster of topic-related documents and employed a rule-based approach to propagate consistent trigger classification and event arguments across sentences and documents. Combining global evidence from related documents with local decisions, they obtained an appreciable improvement in both event and event argument identification.
Patwardhan and Riloff (2009) proposed an event extraction model which consists of two components: a model for sentential event recognition, which offers a probabilistic assessment of whether a sentence is discussing a domain-relevant event; and a model for recognizing plausible role fillers, which identifies phrases as role fillers based upon the assumption that the surrounding context is discussing a relevant event. This unified probabilistic model allows the two components to jointly make decisions based upon both the local evidence surrounding each phrase and the "peripheral vision". Gupta and Ji (2009) used cross-event information within ACE extraction, but only for recovering implicit time information for events. Liao and Grishman (2010) proposed document-level cross-event inference to improve event extraction. In contrast to Gupta and Ji's work, they do not limit themselves to time information for events, but rather use related events and event-type consistency to make predictions or resolve ambiguities regarding a given event. Motivation In event extraction, current transductive inference methods focus on the issue that many events are missing or spuriously tagged because the local information is not sufficient to make a confident decision. The solution is to mine credible evidence of event occurrences from global information and regard it as prior knowledge to predict unknown event attributes, as in the cross-document and cross-event inference methods. However, by analyzing the sentence-level baseline event extraction, we found that the entities within a sentence, as the most important local information, actually contain sufficient clues for event detection. This holds only on the premise that we know the backgrounds of the entities beforehand. For instance, if we knew the entity "Vesuvius" is an active volcano, we could easily identify the word "erupt", which co-occurs with the entity, as the trigger of a "volcanic eruption" event but not that of a "spotty rash". In spite of that, it is actually difficult to use an entity to directly infer an event occurrence because we normally do not know the exact connection between the background of the entity and the event attributes. But we can make good use of entities with the same background to perform the inference. In detail, if we first know that entity(a) has the same background as entity(b), and we also know that entity(a) participates in a specific event in a certain role, then we can predict that entity(b) might participate in a similar event in the same role. Consider the two sentences from the ACE corpus in Table 2 (Cross-entity inference example). From the sentences, we can find that the entities "Saddam" and "Qaeda chief" have the same background (viz., terrorist leader), and they are both arguments of Attack events in the role of Target. So if we already know either of the event mentions, we can infer the other with the help of entities of the same background. In short, the cross-entity inference we propose for event extraction is based on the hypothesis: Entities of a consistent type normally participate in similar events in the same role. As we will show below, statistical data from the ACE training corpus support this hypothesis, showing the consistency of event type and role in event mentions where entities of the same type occur.
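A minimal sketch of how the entity-event co-occurrence statistics reported in the next subsection could be tabulated from annotated event mentions is given below; the toy annotation pairs are invented, and real counts would come from the ACE training corpus.

from collections import Counter, defaultdict

# Toy annotations: (entity_type, event_type) pairs harvested from event mentions.
cooccurrences = [
    ("Population-Center", "Transport"), ("Population-Center", "Transport"),
    ("Population-Center", "Attack"), ("Air", "Attack"), ("Air", "Transport"),
    ("Exploding", "Attack"), ("Exploding", "Attack"),
]

counts = defaultdict(Counter)
for entity_type, event_type in cooccurrences:
    counts[entity_type][event_type] += 1

# P(event_type | entity_type): how often an entity type appears in each event type.
for entity_type, event_counter in counts.items():
    total = sum(event_counter.values())
    for event_type, n in event_counter.most_common():
        print(f"P({event_type} | {entity_type}) = {n / total:.2f}")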
Entity Consistency and Distribution Within the ACE corpus, there is a strong entity consistency: if one entity mention appears in a type of event, other entity mentions of the same type will appear in similar events, and even use the same word to trigger the events. To see this we calculated the conditional probability (in the ACE corpus) of a certain entity type appearing in the 33 ACE event subtypes. From Table 3 (Events co-occurring with Population-Center with a conditional probability > 0.05; see also Figure 1), we can find that only Attack and Transport events co-occur frequently with Population-Center entities. Actually we find that most entity types appear in more restricted event mentions than the Population-Center entity type. For example, the Air entity type co-occurs with only 5 event types (Attack, Transport, Die, Transfer-Ownership and Injure), and the Exploding entity type co-occurs with 4 event types (see Figure 1). In particular, they co-occur with only one or two event types with a conditional probability of more than 0.05. Table 4 gives the distributions of all ACE entity types co-occurring with event types. We can find that there are 37 types of entities (out of 43 in total) appearing in fewer than 5 types of event mentions when the entity-event co-occurrence frequency is larger than 10, and only 2 (e.g., Individual) appearing in more than 10 event types. When the frequency is larger than 50, there are 41 (95%) entity types co-occurring with fewer than 5 event types. These distributions show that most instances of a certain entity type normally participate in events of the same type. The distributions might thus be good predictors for event type detection and trigger determination.
Table 5 (Air entity subtypes and example mentions): Fighter plane (subtype 1, Attack event): "MiGs", "enemy planes", "warplanes", "allied aircraft", "U.S. jets", "a-10 tank killer", "b-1 bomber", "a-10 warthog", "f-14 aircraft", "apache helicopter"; Spacecraft (subtype 2, Transport event): "russian soyuz capsule", "soyuz"; Civil aviation (subtype 3, Transport event): "airliners", "the airport", "Hooters Air executive"; Private plane (subtype 4, Transport event): "Marine One", "commercial flight", "private plane".
Besides, an ACE entity type can actually be divided into more cohesive subtypes according to the similarity of entity backgrounds, and such a subtype nearly always co-occurs with a unique event type. For example, the Air entities can be roughly divided into 4 subtypes: Fighter plane, Spacecraft, Civil aviation and Private plane, within which the Fighter plane entities all appear in Attack event mentions, and the other three subtypes all co-occur with Transport events (see Table 5). This consistency of entities in a subtype is helpful to improve the precision of the event type predictor. Role Consistency and Distribution The same thing happens for entity-role combinations: entities of the same type normally play the same role, especially in event mentions of the same type. For example, the Population-Center entities occur in the ACE corpus as only 4 role types: Place, Destination, Origin and Entity, with conditional probabilities of 0.615, 0.289, 0.093 and 0.002, respectively (see Figure 2). They mainly appear in Transport event mentions as Place, and in Attack as Destination. In particular, the Exploding entities occur only as Instrument and Artifact, with probabilities of 0.986 and 0.014, respectively. They almost entirely appear in Attack events as Instrument. Table 6 gives the distributions of all entity-role combinations in the ACE corpus.
Cross-entity Approach In this section we present our approach to using blind cross-entity inference to improve sentence-level ACE event extraction. Our event extraction system extracts events independently for each sentence, because the definition of an event mention constrains the trigger and arguments to appear in the same sentence. Every sentence that involves at least one entity mention is regarded as a candidate event mention, and a randomly selected entity mention from the candidate is the starting point of the whole extraction process. For that entity mention, information retrieval is used to mine its background knowledge from the Web, and its type is determined by comparing this knowledge with that of the entities in the training corpus. Based on the entity type, the extraction system performs our step-by-step cross-entity inference to predict the attributes of the candidate event mention: trigger, event type, arguments, roles, and whether or not it is a reportable event mention. The main framework of our event extraction system is shown in Figure 3 (the framework of cross-entity inference for event extraction, including both the training and testing processes). In the training process, for every entity type in the ACE training corpus, a clustering technique (the CLUTO toolkit) is used to divide it into different cohesive subtypes, each of which only contains entities of the same background. For instance, the Air entities will be divided into Fighter plane, Spacecraft, Civil aviation, Private plane, etc. (see Table 5). For each subtype, we mine the event mentions in which entities of the subtype appear from the ACE training corpus, and extract all the words that trigger those events to establish the corresponding trigger list. Besides, a set of support vector machine (SVM) based classifiers are also trained: an Argument Classifier, to distinguish arguments of a potential trigger from non-arguments; a Role Classifier, to classify arguments by argument role; and a Reportable-Event Classifier (Trigger Classifier), which, given entity types, a potential trigger, an event type, and a set of arguments, determines whether there is a reportable event mention. In the test process, for each candidate event mention, our event extraction system first predicts its triggers and event types: given a randomly selected entity mention from the candidate, the system determines the entity subtype it belongs to and the corresponding trigger list, and then all non-entity words in the candidate are scanned for an instance of a trigger from the list. When an instance is found, the system tags the candidate with the event type that most frequently co-occurs with the entity subtype among the events triggered by that word. Second, the argument classifier is applied to the remaining mentions in the candidate; for any argument passing that classifier, the role classifier is used to assign a role to it. Finally, once all arguments have been assigned, the reportable-event classifier is applied to the candidate; if the result is positive, the event mention is reported.
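A rough skeleton of this test-time procedure is sketched below. The subtype centroids, trigger lists, and classifiers are hypothetical stand-ins passed in as parameters; in the paper they come from CLUTO clustering and SVM training on the ACE corpus, so this is only an illustration of the control flow, not the authors' implementation.

# Rough skeleton of the test-time inference described above.  Centroids,
# trigger lists and the three classifiers are assumed to be given; here they
# are plain Python callables/dicts standing in for the trained components.

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def nearest_subtype(entity_vector, centroids):
    """Assign the entity to the subtype whose centroid is most similar."""
    return max(centroids, key=lambda name: cosine(entity_vector, centroids[name]))

def extract_event(sentence_tokens, entity_mentions, centroids, trigger_lists,
                  event_type_of, argument_clf, role_clf, reportable_clf):
    # 1. Pick an entity mention (here simply the first) and determine its subtype.
    entity = entity_mentions[0]
    subtype = nearest_subtype(entity["vector"], centroids)

    # 2. Scan non-entity words for a trigger drawn from the subtype's trigger list.
    trigger = next((w for w in sentence_tokens if w in trigger_lists.get(subtype, set())), None)
    if trigger is None:
        return None
    event_type = event_type_of.get((subtype, trigger))

    # 3. Classify the remaining mentions as arguments, then assign roles.
    arguments = [m for m in entity_mentions[1:] if argument_clf(subtype, event_type, m)]
    roles = {m["text"]: role_clf(subtype, event_type, arguments, m) for m in arguments}

    # 4. Report the event mention only if the reportable-event classifier accepts it.
    if reportable_clf(event_type, trigger, roles):
        return {"type": event_type, "trigger": trigger, "roles": roles}
    return None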
Further Division of Entity Type One of the most important pretreatments before our blind cross-entity inference is to divide each ACE entity type into more cohesive subtypes. The greater consistency among the backgrounds of the entities in such a subtype helps to improve the precision of cross-entity inference. For each ACE entity type, we collect all entity mentions of the type from the training corpus, and treat each such mention as a query to retrieve the 50 most relevant documents from the Web. We then select the 50 key words with the highest TF-IDF weights in those documents to roughly describe the background of the entity. After establishing a vector space model (VSM) for each entity mention of the type, we adopt a clustering toolkit (CLUTO) to further divide the mentions into different subtypes. Finally, for each subtype, we describe its centroid using the 100 key words that occur most frequently in the relevant documents of the entities in the subtype. In the test process, for an entity mention in a candidate event mention, we determine its type by comparing its background against all subtype centroids from the training corpus, and the subtype whose centroid has the highest cosine similarity with the background is assigned to the entity. It is noteworthy that global information from the Web is only used to measure the entity-background consistency and not directly in the inference process. Thus our event extraction system still performs sentence-level inference based on local information. Cross-Entity Inference Our event extraction system adopts a step-by-step cross-entity inference to predict events. As discussed above, the first step is to determine the trigger in a candidate event mention and tag its event type based on the consistency of the entity type. Given the domain of event mentions constrained by the known trigger, event type and entity subtype, the second step is to distinguish the most probable arguments co-occurring in that domain from the non-arguments. Then, for each of the arguments, the third step uses the co-occurring arguments in the domain as important context to predict its role. Finally, the inference process determines whether the candidate is a reportable event mention according to a confidence coefficient. In the following sections, we focus on the three classifiers: the argument classifier, the role classifier and the reportable-event classifier. Cross-Entity Argument Classifier For a candidate event mention, the first step gives its event type, which roughly constrains the domain of event mentions in which the arguments of the candidate might co-occur. On this basis, given an entity mention in the candidate and its type (see the pretreatment process in Section 5.1), the argument classifier predicts whether other entity mentions co-occur with it in that domain; if so, those mentions are taken as the arguments of the candidate. In other words, if we know that an entity of a certain type participates in some event, we can consider which other entities should also participate in the event. For instance, when we know a defendant goes on trial, we can conclude that a judge, lawyer and witness should appear in court. An SVM-based argument classifier is used to determine the arguments of a candidate event mention.
Each feature of this classifier is the conjunction of:
- the subtype of an entity;
- the event type we are trying to assign an argument to;
- a binary indicator of whether this entity subtype co-occurs with other subtypes in such an event type (there are 266 entity subtypes, and so 266 such features for each instance);
- some minor features, such as a binary indicator of whether the arguments co-occur with the trigger in the same clause (see Table 7).
Cross-Entity Role Classifier For a candidate event mention, the arguments given by the second step (the argument classifier) provide important contextual information for predicting what role the local entity (itself one of the arguments) takes on. For instance, when citizens (Arg1) co-occur with a terrorist (Arg2), the role of Arg1 is most likely Victim. On this basis, with the help of the event type, the prediction can be made more precise. For instance, if Arg1 and Arg2 co-occur in an Attack event mention, we will have more confidence that Arg1 plays the Victim role. Besides, as discussed in Section 4, entities of the same type normally take on the same role in similar events, especially when they co-occur with similar arguments in those events (see Table 2). Therefore, all instances of the co-occurrence model {entity subtype, event type, arguments} in the training corpus provide effective evidence for predicting the role of an argument in the candidate event mention. Based on this, we trained an SVM-based role classifier which uses the following features:
- Feature 1 and Feature 2 (see Table 7);
- given the event domain constrained by the entity and event types, an indicator of which subtypes of arguments appear in the domain (the 266 entity subtypes again yield 266 features for each instance).
Reportable-Event Classifier At this point, two issues still need to be resolved. First, some triggers are common words that often mislead the extraction of candidate event mentions, such as "it", "this" and "what". These words appear as triggers in only a few event mentions, but once they appear in a trigger list, a large number of noisy sentences will be regarded as candidates because of how common the words are. Second, some arguments might be tagged with more than one role in specific event mentions, but according to the ACE event guidelines, one argument takes on only one role in a sentence. So we need to remove the assignments with low confidence. A confidence coefficient is used to distinguish the correct triggers and roles from wrong ones. The coefficient calculates the frequency of a trigger (or a role) appearing in a specific domain of event mentions and its frequency in the whole training corpus, and then combines them to represent its degree of confidence, much like the TF-IDF weighting scheme. Thus, the more typical triggers (or roles) are given high confidence. Based on the coefficient, we use an SVM-based classifier to determine the reportable events. Each feature of this classifier is the conjunction of:
- an event type (the domain of event mentions);
- the confidence coefficients of the triggers in the domain;
- the confidence coefficients of the roles in the domain.
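One plausible way to compute the TF-IDF-style confidence coefficient described above is sketched below; the paper does not specify the exact combination, so the weighting scheme, counts, and example terms here are illustrative assumptions.

# One plausible formulation of the TF-IDF-style confidence coefficient used by
# the reportable-event classifier above: a trigger (or role) is weighted up if
# it is frequent inside one event-type domain but down if it is common across
# all domains.  The exact combination used in the paper is not specified.

import math

def confidence(term, domain_counts, corpus_counts, n_domains):
    """domain_counts: term frequencies inside one event-type domain;
    corpus_counts: number of domains in which the term appears at all."""
    tf = domain_counts.get(term, 0) / max(1, sum(domain_counts.values()))
    idf = math.log((1 + n_domains) / (1 + corpus_counts.get(term, 0)))
    return tf * idf

domain_counts = {"erupt": 14, "attack": 2, "it": 3}   # invented counts for one domain
corpus_counts = {"erupt": 1, "attack": 5, "it": 30}   # invented domain frequencies
for term in ("erupt", "it"):
    print(term, round(confidence(term, domain_counts, corpus_counts, n_domains=33), 4))
# A typical domain-specific trigger ("erupt") gets a high confidence, while a
# common word ("it") is pushed toward zero, as intended above.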
Experiments We followed the evaluation setup of Liao and Grishman (2010): we randomly selected 10 newswire texts from the ACE 2005 training corpus as our development set, which is used for parameter tuning, and then conducted a blind test on a separate set of 40 ACE 2005 newswire texts. We use the rest of the ACE training corpus (549 documents) as training data for our event extraction system. To compare with the reported work on cross-event inference (Liao and Grishman, 2010) and its sentence-level baseline system, we cross-validate our method on 10 separate sets of 40 ACE texts, and report the best, worst and mean performances (see Table 8) on these data using Precision (P), Recall (R) and F-measure (F). In addition, we also report the performance of two human annotators on 40 ACE newswire texts (a random blind test set): one knows the rules of event extraction; the other knows nothing about them. Main Results From the results presented in Table 8, we can see that using cross-entity inference we improve the F score of sentence-level event extraction by 8.59% for trigger classification, 11.86% for argument classification, and 11.9% for role classification (mean performance). Compared to cross-event inference, we gain a 2.87% improvement for argument classification and 3.81% for role classification (mean performance). Notably, even our worst results outperform cross-event inference. Nonetheless, the cross-entity inference has a worse F score for trigger determination. As we can see, the low Recall score weakens its F score (see Table 8). We select only sentences that include at least one entity mention as candidate event mentions, but many event mentions in ACE do not include any entity mention; thus we miss some mentions at the very start of the inference process. In addition, the annotator who knows the rules of event extraction shows a performance trend similar to the systems: high for trigger classification, middle for argument classification, and low for role classification (see Table 8). But the annotator who has never worked in this field shows a different trend: higher performance for argument classification. This phenomenon suggests that step-by-step inference is not the only way to predict event mentions, because humans can determine arguments without considering triggers and event types. Influence of Clustering on Inference A main part of our blind inference system is the entity-type consistency detection, which relies heavily on the correctness of entity clustering and similarity measurement. In training, we used the CLUTO clustering toolkit to automatically generate entity subtypes based on their background similarities. In testing, we use a k-nearest neighbor procedure to determine the entity subtype.
Table 9: Noise in subtype 1 of "Air" entities (the entries after the aircraft terms are noise) — Fighter plane (subtype 1 in Air entities): "warplanes", "allied aircraft", "U.S. jets", "a-10 tank killer", "b-1 bomber", "a-10 warthog", "f-14 aircraft", "apache helicopter", "terrorist", "Saddam", "Saddam Hussein", "Baghdad", ...
We obtained 129 entity subtypes from the training set. By randomly inspecting 10 subtypes, we found that nearly every subtype contains no less than 19.2% noise. For example, subtype 1 of "Air" in Table 5 lost the entities "MiGs" and "enemy planes" but included "terrorist", "Saddam", etc. (see Table 9). Therefore, we manually clustered the subtypes and re-ran the step-by-step cross-entity inference. The results (denoted "Visible 1") are shown in Table 10, in which we additionally show the performance of the inference on the coarse entity types provided by ACE (denoted "Visible 2"), such as "Air", "Population-Center" and "Exploding", which can normally be divided into more cohesive subtypes. "Blind" in Table 10 denotes the performance on the subtypes obtained automatically by CLUTO.
Surprisingly, the performance (see Table 10, F-score) on the "Visible 1" entity subtypes is only slightly better than that of the "Blind" inference, so it seems that the noise in our blind entity subtypes (the CLUTO clusters) does not hurt the inference much. However, by re-inspecting the "Visible 1" subtypes, we found that their granularity is not fine enough: the 89 manual entity clusters can in fact be divided into more cohesive subtypes. Thus the improvements of inference on the noise-free "Visible 1" subtypes are partly offset by losses on the weakly consistent entities within those subtypes. This is supported by the poor performance on the "Visible 2" types, which are much more general than "Visible 1". Therefore, a reasonable clustering method is important to our inference process. Conclusions and Future Work We propose a blind cross-entity inference method for event extraction, which exploits the consistency of entity mentions to achieve sentence-level trigger and argument (role) classification. Experiments show that the method outperforms cross-document and cross-event inference in ACE event extraction. The inference presented here only considers how the entity types of arguments help role classification. However, contextual roles are an even stronger feature and can provide more effective assistance in determining the role of a local argument. For instance, when an Attack-event argument appears in a sentence, a Target might be there as well. So first identifying simple roles, such as cases where an argument can take only a single role, and then using those roles as prior knowledge to classify the hard ones, may further improve performance.
2014-07-01T00:00:00.000Z
2011-06-19T00:00:00.000
{ "year": 2011, "sha1": "e17a4ce03444b4cf8b11122ff980b6f2e3ec9b8e", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ACL", "pdf_hash": "e17a4ce03444b4cf8b11122ff980b6f2e3ec9b8e", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
247319041
pes2o/s2orc
v3-fos-license
Rényi State Entropy for Exploration Acceleration in Reinforcement Learning One of the most critical challenges in deep reinforcement learning is to maintain the long-term exploration capability of the agent. To tackle this problem, it has recently been proposed to provide intrinsic rewards for the agent to encourage exploration. However, most existing intrinsic reward-based methods proposed in the literature fail to provide sustainable exploration incentives, a problem known as vanishing rewards. In addition, these conventional methods incur complex models and additional memory in their learning procedures, resulting in high computational complexity and low robustness. In this work, a novel intrinsic reward module based on the Rényi entropy is proposed to provide high-quality intrinsic rewards. It is shown that the proposed method actually generalizes the existing state entropy maximization methods. In particular, a k-nearest neighbor estimator is introduced for entropy estimation while a k-value search method is designed to guarantee the estimation accuracy. Extensive simulation results demonstrate that the proposed Rényi entropy-based method can achieve higher performance as compared to existing schemes. I. INTRODUCTION Reinforcement learning (RL) algorithms have to be designed to achieve an appropriate balance between exploitation and exploration [1]. However, many existing RL algorithms suffer from insufficient exploration, i.e. the agent cannot keep exploring the environment to visit all possible state-action pairs [2]. As a result, the learned policy prematurely falls into local optima after finite iterations [3]. To address the problem, a simple approach is to employ stochastic policies such as the ε-greedy policy and Boltzmann exploration [4]. These policies randomly select one action with a non-zero probability in each state. For continuous control tasks, an additional noise term can be added to the action to perform limited exploration. Despite the fact that such techniques can eventually learn the optimal policy in the tabular setting, they are futile when handling complex environments with high-dimensional observations. To cope with the exploration problems above, recent approaches proposed to leverage intrinsic rewards to encourage exploration. In sharp contrast to the extrinsic rewards explicitly given by the environment, intrinsic rewards represent the inherent learning motivation or curiosity of the agent [5]. Most existing intrinsic reward modules can be broadly categorized into novelty-based and prediction error-based approaches [6], [7], [8], [9]. For instance, [10], [11], [12] employed a state visitation counter to evaluate the novelty of states, and the intrinsic rewards are defined to be inversely proportional to the visiting frequency. As a result, the agent is encouraged to revisit those infrequent states while increasing the probability of exploring new states. In contrast, [13], [14], [3] followed an alternative approach in which the prediction error of a dynamic model is utilized as intrinsic rewards. Given a state transition, an auxiliary model was designed to predict the successor state based on the current state-action pair. After that, the intrinsic reward is computed as the Euclidean distance between the predicted and the true successor states. In particular, [15] attempted to perform RL using only the intrinsic rewards, showing that the agent could achieve considerable performance in many experiments.
Despite their good performance, these count-based and prediction error-based methods suffer from vanishing intrinsic rewards, i.e. the intrinsic rewards decrease with repeated visits [16]. The agent will have no additional motivation to explore the environment further once the intrinsic rewards decay to zero. To maintain exploration across episodes, [17] proposed a never-give-up (NGU) framework that learns mixed intrinsic rewards composed of episodic and life-long state novelty. NGU evaluates the episodic state novelty using a slot-based memory and a pseudo-count method [12], which encourages the agent to visit as many distinct states as possible in each episode. Since the memory is reset at the beginning of each episode, the intrinsic rewards will not decay during the training process. Meanwhile, NGU further introduced a random network distillation (RND) module to capture the life-long novelty of states, which prevents the agent from visiting familiar states across episodes [8]. However, NGU suffers from a complicated architecture and high computational complexity, making it difficult to apply to arbitrary tasks. A more straightforward framework entitled rewarding-impact-driven-exploration (RIDE) is proposed in [18]. RIDE inherits the inverse-forward pattern of [13], in which two dynamic models are leveraged to reconstruct the transition process. More specifically, the Euclidean distance between two consecutive encoded states is utilized as the intrinsic reward, which encourages the agent to take actions that result in more state changes. Moreover, RIDE uses episodic state visitation counts to discount the generated rewards, preventing the agent from staying at states that lead to large embedding differences while avoiding the television dilemma reported in [19]. However, both NGU and RIDE pay excessive attention to specific states while failing to reflect the global exploration extent. Furthermore, they suffer from poor mathematical interpretability and performance loss incurred by auxiliary models. To circumvent these problems, [20] proposed a state entropy maximization method entitled random-encoder-for-efficient-exploration (RE3), forcing the agent to visit the state space more equitably. In each episode, the observation data is collected and encoded using a randomly initialized deep neural network. After that, a k-nearest neighbor estimator is leveraged to realize efficient entropy estimation [21]. Simulation results demonstrated that RE3 significantly improved the sampling efficiency of both model-free and model-based RL algorithms while incurring little additional computational complexity. Despite its many advantages, RE3 ignores the important k-value selection, while its default random encoder entails low adaptability and robustness. Furthermore, [22] found that the Shannon entropy-based objective function may lead to a policy that visits some states with a vanishing probability, and proposed to maximize the Rényi entropy of the state-action distribution (MaxRényi). In contrast to RE3, MaxRényi provides a more appropriate optimization objective for sustainable exploration. However, [22] leverages a variational autoencoder (VAE) to estimate the state-action distribution, which incurs high computational complexity and may mislead the agent due to imperfect estimation [23]. Inspired by the discussions above, we propose to devise a more efficient and robust method for state entropy maximization to improve exploration in RL.
In this paper, we propose a RényI State Entropy (RISE) maximization framework to provide high-quality intrinsic rewards. Our main contributions are summarized as follows:
• We propose a Rényi entropy-based intrinsic reward module that generalizes existing state entropy maximization methods such as RE3, and provide theoretical analysis for the Rényi entropy-based learning objective. The new module can be applied in arbitrary tasks with significantly improved exploration efficiency for both model-based and model-free RL algorithms;
• By leveraging a variational autoencoder (VAE) model, the proposed module can realize an efficient and robust encoding operation for accurate entropy estimation, which guarantees its generalization capability and adaptability. Moreover, a search algorithm is devised for the k-value selection to reduce the uncertainty of performance loss caused by random selection;
• Finally, extensive simulation is performed to compare the performance of RISE against existing methods using both discrete and continuous control tasks as well as several hard exploration games. Simulation results confirm that the proposed module achieves superior performance with higher efficiency.
II. PROBLEM FORMULATION We study the following RL problem that considers a Markov decision process (MDP) characterized by a tuple M = (S, A, T, r, ρ(s_0), γ) [1], in which S is the state space, A is the action space, T(s'|s, a) is the transition probability, r(s, a) : S × A → R is the reward function, ρ(s_0) is the initial state distribution, and γ ∈ (0, 1] is a discount factor, respectively. We denote by π(a|s) the policy of the agent, which observes the state of the environment before choosing an action from the action space. The objective of RL is to find the optimal policy π* that maximizes the expected discounted return, given by
π* = argmax_{π∈Π} E_{τ∼π} [ Σ_{t=0}^{T−1} γ^t r(s_t, a_t) ],
where Π is the set of all stationary policies, and τ = (s_0, a_0, . . . , a_{T−1}, s_T) is the trajectory collected by the agent. In this paper, we aim to improve the exploration in RL. To guarantee the completeness of exploration, the agent is required to visit all possible states during training. Such an objective can be regarded as the coupon collector's problem conditioned upon a nonuniform probability distribution [24], in which the agent is the collector and the states are the coupons. Denote by d^π(s) the state distribution induced by the policy π. Assuming that the agent takes T̃ environment steps to finish the collection, we can compute the expectation of T̃ as
E_π(T̃) = ∫_0^∞ ( 1 − ∏_{i=1}^{|S|} (1 − e^{−d^π(s_i) t}) ) dt,    (2)
where |·| stands for the cardinality of the enclosed set S. For simplicity of notation, we sometimes omit the superscript in d^π(s) in the sequel. Efficient exploration aims to find a policy that optimizes min_{π∈Π} E_π(T̃). However, it is non-trivial to evaluate Eq. (2) due to the improper integral, not to mention solving the optimization problem. To address the problem, it is common to leverage the Shannon entropy to obtain a tractable objective function, which is defined as
H(d) = − E_{s∼d(s)} [ log d(s) ].    (3)
However, this objective function may lead to a policy that visits some states with a vanishing probability. In the following section, we will first employ a representative example to demonstrate the practical drawbacks of Eq. (3) before introducing the Rényi entropy to address the problem.
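This drawback can be made concrete with a small numeric example. The sketch below evaluates the non-uniform coupon-collector expectation (via the standard inclusion-exclusion identity, which gives the same value as the improper-integral form of Eq. (2)) and the Shannon objective of Eq. (3) on a toy three-state distribution; the specific probabilities are chosen purely for illustration.

# As one state's probability shrinks, -log E[T~] collapses while the Shannon
# objective of Eq. (3) stays comparatively large, which is exactly the drawback
# discussed above.

import numpy as np
from itertools import combinations

def expected_cover_time(p):
    """E[number of draws to see every state], by inclusion-exclusion."""
    idx = range(len(p))
    total = 0.0
    for r in range(1, len(p) + 1):
        for subset in combinations(idx, r):
            total += (-1) ** (r + 1) / sum(p[i] for i in subset)
    return total

def shannon(p):
    p = np.asarray(p, dtype=float)
    return float(-(p * np.log(p)).sum())

for eps in (1 / 3, 0.1, 0.01, 0.001):
    p = [eps, (1 - eps) / 2, (1 - eps) / 2]
    print(f"d(s0)={eps:<8.3f} -log E[T]={-np.log(expected_cover_time(p)):8.3f}"
          f"   Shannon H(d)={shannon(p):6.3f}")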
III. RÉNYI STATE ENTROPY MAXIMIZATION A. Rényi State Entropy We first formally define the Rényi entropy as follows: Definition 1 (Rényi Entropy). Let X ∈ R^m be a random vector that has a density function f(x) with respect to the Lebesgue measure on R^m, and let X = {x ∈ R^m : f(x) > 0} be the support of the distribution. The Rényi entropy of order α ∈ (0, 1) ∪ (1, +∞) is defined as [22]
H_α(X) = (1 / (1 − α)) · log ∫_X f^α(x) dx.
Using Definition 1, we propose the following Rényi state entropy (RISE):
H_α(d) = (1 / (1 − α)) · log ∫_S d^α(s) ds.    (4)
Fig. 1 uses a toy example to visualize the contours of different objective functions when an agent learns from an environment characterized by only three states. As shown in Fig. 1, −log E(T̃) decreases rapidly when any state probability approaches zero, which prevents the agent from visiting a state with a vanishing probability while encouraging the agent to explore the infrequently seen states. In contrast, the Shannon entropy remains relatively large as the state probability approaches zero. Interestingly, Fig. 1 shows that this problem can be alleviated by the Rényi entropy, as it better matches −log E(T̃). The Shannon entropy is far less aggressive in penalizing small probabilities, while the Rényi entropy provides more flexible exploration intensity. B. Theoretical Analysis To maximize H_α(d), we consider using the maximum entropy policy computation (MEPC) algorithm proposed by [25], which uses the following two oracles: Definition 2 (Approximating planning oracle). Given a reward function r : S → R and a gap ε_1, the planning oracle returns a policy π = O_AP(r, ε_1) such that V^π ≥ max_{π'∈Π} V^{π'} − ε_1, where V^π is the state-value function. Definition 3 (State distribution estimation oracle). Given a gap ε_2 and a policy π, this oracle returns an estimate d̂^π = O_DE(π, ε_2) of the induced state distribution whose estimation error is bounded by ε_2. Given a set of stationary policies Π̂ = {π_0, π_1, . . .}, we define a mixed policy as π_mix = (ω, Π̂), where ω contains the weighting coefficients; the induced state distribution is then d^{π_mix}(s) = Σ_i ω_i d^{π_i}(s). Finally, the workflow of MEPC is summarized in Algorithm 1: at each iteration t, the state distribution oracle is invoked on the current mixed policy π_mix,t = (ω, Π̂_t); a reward function r_t is defined from the estimated distribution; the planning oracle approximates the optimal policy on r_t; and the mixed policy π_mix,t = (ω_{t+1}, Π̂) is updated by incorporating the new policy. Consider the discrete case of the Rényi state entropy with α ∈ (0, 1). To maximize H_α(d), we can alternatively maximize a transformed objective H̃_α(d); since H̃_α(d) is not smooth, we consider a smoothed surrogate H_{α,σ}(d) with smoothing parameter σ > 0 (see the proof in Appendix B). Now we are ready to give the following theorem: Theorem 1 bounds the number of iterations for which Algorithm 1 must be run in order to maximize H̃_α(d) to a desired accuracy (see the proof in Appendix C). Theorem 1 demonstrates the computational complexity of using MEPC to maximize H̃_α. Moreover, a small α contributes to the exploration phase, which is consistent with the analysis in [22]. C. Fast Entropy Estimation However, it is non-trivial to apply MEPC when handling complex environments with high-dimensional observations. To address the problem, we propose to utilize a k-nearest neighbor estimator to realize efficient estimation of the Rényi entropy [26]. Let {X_i}_{i=1}^{N} be a set of independent random vectors drawn from the distribution of X. For k < N, k ∈ N, X̃_i stands for the k-nearest neighbor of X_i among the set. We estimate the Rényi entropy using the sample mean of the k-nearest-neighbor distances, as given in Eq. (15); the consistency proof can be found in [26]. Note that π in Eq. (15) denotes the ratio of the circumference of a circle to its diameter. Given a trajectory τ = {s_0, a_0, . . . , a_{T−1}, s_T} collected by the agent, we approximate the Rényi state entropy in Eq. (4) using Eq. (15) as Ĥ_α(d), where y_i is the encoding vector of s_i and ỹ_i is the k-nearest neighbor of y_i. After that, we define the intrinsic reward, which treats each transition as a particle, in Eq. (18), where r̃(·) is used to distinguish the intrinsic reward from the extrinsic reward r(·); the intrinsic reward of a state grows with the distance between its encoding and that of its k-nearest neighbor. Eq. (18) indicates that the agent needs to visit as many distinct states as possible to obtain higher intrinsic rewards. Such an estimation method requires no additional auxiliary models, which significantly promotes the learning efficiency.
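A minimal sketch of a k-nearest-neighbor, distance-based intrinsic reward in the spirit of Eq. (18) is shown below. The exact estimator constants are omitted, so the log-based scaling, the embedding dimension, and the random batch are illustrative assumptions rather than the paper's precise formula.

# Each encoded state is rewarded according to the distance to its k-th nearest
# neighbor within the batch: larger distance -> more "novel" state -> larger reward.

import numpy as np

def knn_intrinsic_rewards(embeddings, k=3):
    """embeddings: (N, m) array of encoded states y_i; returns one reward per state."""
    y = np.asarray(embeddings, dtype=float)
    # Pairwise Euclidean distances (N x N).
    dists = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)
    # k-th nearest neighbor distance for each state (index 0 is the zero self-distance).
    knn_dist = np.sort(dists, axis=1)[:, k]
    # Illustrative scaling; the exact estimator constants are not reproduced here.
    return np.log(knn_dist + 1.0)

rng = np.random.default_rng(0)
batch = rng.normal(size=(128, 16))       # stand-in for VAE-encoded observations
rewards = knn_intrinsic_rewards(batch, k=3)
print(rewards.shape, float(rewards.mean()))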
Equipped with the intrinsic reward, the total reward of each transition (s_t, a_t, s_{t+1}) is computed as
r_t^{total} = r(s_t, a_t) + λ_t · r̃(s_t) + ζ · H(π(·|s_t)),
where H(π(·|s_t)) is the action entropy regularizer for improving the exploration of the action space, λ_t = λ_0(1 − κ)^t and ζ are two non-negative weight coefficients, and κ is a decay rate. IV. ROBUST REPRESENTATION LEARNING While the Rényi state entropy encourages exploration in high-dimensional observation spaces, several implementation issues have to be addressed for its practical deployment. First of all, observations have to be encoded into low-dimensional vectors when calculating the intrinsic reward. While a randomly initialized neural network can be utilized as the encoder, as proposed in [20], it cannot handle more complex and dynamic tasks, which inevitably incurs performance loss. Moreover, since it is less computationally expensive to train an encoder than an RL agent, we propose to leverage a VAE, a powerful generative model based on Bayesian inference [23], to realize an efficient and robust embedding operation. As shown in Fig. 2(a), a standard VAE is composed of a recognition model and a generative model. These two models represent a probabilistic encoder and a probabilistic decoder, respectively. We denote by q_φ(z|s) the recognition model, represented by a neural network with parameters φ. The recognition model accepts an observation as input and encodes it into latent variables. Similarly, we represent the generative model as p_ψ(s|z) using a neural network with parameters ψ, which accepts the latent variables and reconstructs the observation. Given a trajectory τ = {s_0, a_0, . . . , a_{T−1}, s_T}, the VAE model is trained by minimizing the following loss function:
L(φ, ψ) = Σ_t [ −E_{q_φ(z|s_t)}[ log p_ψ(s_t|z) ] + D_KL( q_φ(z|s_t) ‖ p(z) ) ],
where t = 0, . . . , T, D_KL(·‖·) is the Kullback–Leibler (KL) divergence, and p(z) is the prior over the latent variables. Next, we elaborate on the design of the k value to improve the estimation accuracy of the state entropy. [21] investigated the performance of this entropy estimator for some specific probability distributions such as the uniform distribution and the Gaussian distribution. Their simulation results demonstrated that the estimation accuracy first increases and then decreases as the k value grows. To circumvent this problem, we propose the k-value searching scheme shown in Fig. 2(b). We first divide the observation dataset into K subsets before the encoder encodes the data into low-dimensional embedding vectors. Assuming that all the data samples are independent and identically distributed, an appropriate k value should produce comparable results on different subsets. By exploiting this intuition, we propose to search for the optimal k value that minimizes the min-max ratio of the entropy estimates. Denoting by π_θ the policy network, the detailed searching procedure is summarized in Algorithm 2.
Algorithm 2: k-value searching method
1: Initialize a policy network π_θ;
2: Initialize the number of sample steps N, the threshold k_max of k, a null array δ of length k_max, and the number of subsets K;
3: Execute policy π_θ and collect the trajectory τ = {s_0, a_0, . . . , a_{N−1}, s_N};
4: Divide the observation dataset {s_i}_{i=0}^{N} into K subsets randomly;
5: for k = 1, 2, . . . , k_max do
6:   Calculate the estimated entropy Ĥ_k on the K subsets using Eq. (15);
7:   Calculate the min-max ratio δ(Ĥ_k) and set δ[k] ← δ(Ĥ_k);
8: end for
9: Output k* = argmin_k δ[k].
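The k-value search of Algorithm 2 can be sketched as follows. The per-subset entropy below is a simplified k-nearest-neighbor proxy standing in for the estimator of Eq. (15), and the synthetic embeddings, subset count, and search range are illustrative assumptions.

# Split the encoded observations into K subsets, estimate the entropy of each
# subset for every candidate k, and keep the k whose estimates are most
# consistent (smallest max/min ratio across subsets).

import numpy as np

def knn_entropy_proxy(y, k):
    dists = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)
    knn_dist = np.sort(dists, axis=1)[:, k]
    return float(np.mean(np.log(knn_dist + 1e-12))) + np.log(len(y))

def search_k(embeddings, k_max=15, n_subsets=8, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(embeddings))
    subsets = np.array_split(embeddings[idx], n_subsets)
    best_k, best_ratio = None, np.inf
    for k in range(1, k_max + 1):
        estimates = [knn_entropy_proxy(s, k) for s in subsets if len(s) > k]
        ratio = max(estimates) / min(estimates)      # min-max ratio across subsets
        if ratio < best_ratio:
            best_k, best_ratio = k, ratio
    return best_k

embeddings = np.random.default_rng(1).normal(size=(2048, 16))   # stand-in encodings
print(search_k(embeddings))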
Finally, we are ready to propose our RISE framework by exploiting the optimal k value derived above. As shown in Fig. 2(c), the proposed RISE framework first encodes the high-dimensional observation data into low-dimensional embedding vectors through the encoder q : S → R^m. After that, the Euclidean distance between y_t and its k-nearest neighbor is computed as the intrinsic reward. Algorithm 3 and Algorithm 4 summarize the on-policy and off-policy RL versions of the proposed RISE, respectively. In the off-policy version, the entropy estimation is performed on the sampled transitions in each step; as a result, a larger batch size can improve the estimation accuracy. It is worth pointing out that RISE can be straightforwardly integrated into any existing RL algorithm, such as Q-learning and soft actor-critic, providing high-quality intrinsic rewards for improved exploration. V. EXPERIMENTS In this section, we evaluate our RISE framework in both the tabular setting and environments with high-dimensional observations. We compare RISE against two representative intrinsic reward-based methods, namely RE3 and MaxRényi. A brief introduction to these benchmarking methods can be found in Appendix A. We also train the agent without intrinsic rewards for ablation studies. As for the hyper-parameter settings, we only report the values of the best experimental results. A. Maze Games In this section, we first leverage a simple but representative example to highlight the effectiveness of the Rényi state entropy-driven exploration. We introduce a grid-based environment, Maze2D [27], illustrated in Fig. 3. The agent can move one position at a time in one of four directions, namely left, right, up, and down. The goal of the agent is to find the shortest path from the start point to the end point. In particular, the agent can teleport from a portal to another identical mark. 1) Experimental Setting: The standard Q-learning (QL) algorithm [2] is selected as the benchmarking method. We perform extensive experiments on three mazes with different sizes. Note that the problem complexity increases exponentially with the maze size. In each episode, the maximum environment step size was set to 10M², where M is the maze size. We initialized the Q-table with zeros and updated the Q-table in every step for efficient training. The update formulation is given by
Q(s, a) ← Q(s, a) + η [ r + γ max_{a'} Q(s', a') − Q(s, a) ],    (22)
where Q(s, a) is the action-value function and η is the step size. The step size was set to 0.2, while an ε-greedy policy with an exploration rate of 0.001 was employed. 2) Performance Comparison: To compare the exploration performance, we choose the minimum number of environment steps taken to visit all states as the key performance indicator (KPI). For instance, a 10 × 10 maze of 100 grids corresponds to 100 states. The minimum number of steps for the agent to visit all possible states is evaluated as its exploration performance. As seen in Fig. 4, the proposed Q-learning+RISE achieved the best performance in all three maze games. Moreover, RISE with smaller α takes fewer steps to finish the exploration phase. This experiment confirmed the great capability of the Rényi state entropy-driven exploration.
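A sketch of such a tabular update is shown below: the standard Q-learning rule of Eq. (22) applied to a reward augmented with an exploration bonus. The simple count-based bonus and the decaying weight schedule are stand-ins for the Rényi-entropy intrinsic reward used by RISE, and all hyper-parameter values here are illustrative.

# Standard tabular Q-learning update with an added, decaying exploration bonus.
# The count-based bonus is only a stand-in for the entropy-based intrinsic reward.

from collections import defaultdict

STEP_SIZE, GAMMA, LAMBDA0, KAPPA = 0.2, 0.99, 0.1, 1e-4
Q = defaultdict(float)                 # Q[(state, action)]
visits = defaultdict(int)              # state visitation counts

def q_update(s, a, r_ext, s_next, actions, step):
    visits[s_next] += 1
    lam = LAMBDA0 * (1.0 - KAPPA) ** step            # decaying weight lambda_t = lambda_0 (1 - kappa)^t
    r_int = 1.0 / visits[s_next] ** 0.5              # stand-in exploration bonus
    r_total = r_ext + lam * r_int
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += STEP_SIZE * (r_total + GAMMA * best_next - Q[(s, a)])

# One illustrative update on a toy transition.
q_update(s=(0, 0), a="right", r_ext=0.0, s_next=(0, 1),
         actions=("left", "right", "up", "down"), step=0)
print(Q[((0, 0), "right")])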
B. Atari Games Next, we test RISE on the Atari games with a discrete action space, in which the player aims to score as many points as possible while remaining alive [28]. To generate the observation of the agent, we stacked four consecutive frames as one input. These frames were cropped to a size of (84, 84) to reduce the required computational complexity. 1) Experimental Setting: To handle the graphic observations, we leveraged convolutional neural networks (CNNs) to build RISE and the benchmarking methods. For a fair comparison, the same policy network and value network are employed for all the algorithms, and their architectures can be found in Table I. For instance, "8×8 Conv. 32" represents a convolutional layer that has 32 filters of size 8×8. A categorical distribution was used to sample an action based on the action probabilities of the stochastic policy. The VAE blocks of RISE and MaxRényi need to learn an encoder and a decoder. The encoder is composed of four convolutional layers and one dense layer, in which each convolutional layer is followed by a batch normalization (BN) layer [29]. Note that "Dense 512 & Dense 512" in Table I means that there are two branches to output the mean and the variance of the latent variables, respectively. The decoder utilizes four deconvolutional layers to perform upsampling, while a dense layer and a convolutional layer are employed at the top and the bottom of the decoder, respectively. Finally, no BN layer is included in the decoder, and the ReLU activation function is employed for all components. In the first phase, we initialized a policy network π_θ and let it interact with eight parallel environments with different random seeds. We first collected observation data over ten thousand environment steps, after which the VAE encoder generated latent vectors of dimension 128 from the observation data. The latent vectors were then sent to the decoder to reconstruct the observation tensors. For the parameter updates, we used an Adam optimizer with a learning rate of 0.005 and a batch size of 256. Finally, we divided the observation dataset into K = 8 subsets before searching for the optimal k value over the range [1, 15] using Algorithm 2. Equipped with the learned k and encoder q_φ, we trained RISE for ten million environment steps. In each episode, the agent was also set to interact with eight parallel environments with different random seeds. Each episode has a length of 128 steps, producing 1024 transitions. After that, we calculated the intrinsic reward for all transitions using Eq. (18), where α = 0.1 and λ_0 = 0.1. Finally, the policy network was updated using the proximal policy optimization (PPO) method [30]. More specifically, we used a PyTorch implementation of the PPO method, which can be found in [31]. The PPO method was trained with a learning rate of 0.0025, a value function coefficient of 0.5, an action entropy coefficient of 0.01, and a generalized-advantage-estimation (GAE) parameter of 0.95 [32]. In particular, a gradient clipping operation with threshold [−5, 5] was performed to stabilize the learning procedure. As for the benchmarking methods, we trained them following their default settings reported in the literature [22], [20].
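The observation construction described at the start of this section (grayscale conversion, resizing to 84 × 84, and stacking the four most recent frames) can be sketched as follows. The crude strided resize and the random input frame are illustrative assumptions; the paper does not specify its exact preprocessing code.

# Convert a frame to grayscale, downsample it to 84 x 84, and stack the four
# most recent frames into one input tensor for the CNN policy.

import numpy as np
from collections import deque

def preprocess(frame):
    """frame: (H, W, 3) uint8 RGB image -> (84, 84) float32 grayscale."""
    gray = frame.mean(axis=2)
    h, w = gray.shape
    rows = np.linspace(0, h - 1, 84).astype(int)
    cols = np.linspace(0, w - 1, 84).astype(int)
    return gray[np.ix_(rows, cols)].astype(np.float32) / 255.0

frames = deque(maxlen=4)                      # holds the four most recent frames
for _ in range(4):
    raw = np.random.randint(0, 256, size=(210, 160, 3), dtype=np.uint8)
    frames.append(preprocess(raw))

observation = np.stack(list(frames), axis=0)  # shape (4, 84, 84), the network input
print(observation.shape)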
2) Performance Comparison: The average one-life return is employed as the KPI in our performance comparison. Table II illustrates the performance comparison over eight random seeds on nine Atari games. For instance, 5.24k±1.86k means that the mean return is 5.24k and the standard deviation is 1.86k. The highest performance is shown in bold. As shown in Table II, RISE achieved the highest performance in all nine games, while RE3 and MaxRényi each achieved the second highest performance in three games. Furthermore, Fig. 5 provides a further comparison between RISE and the benchmarking methods. We also evaluated the computational efficiency of RISE and the benchmarking methods, with the frames per second (FPS) rate set as the KPI. For instance, if a method takes 10 seconds to finish the training of an episode, the FPS is computed as the ratio between the episode length and the time cost. For the vanilla PPO agent, the time cost involves only interaction and policy updates, whereas for the other methods it additionally involves intrinsic reward generation and auxiliary model updates. As shown in Fig. 6, the vanilla PPO method achieves the highest computational efficiency, while RISE and RE3 achieve the second highest FPS. In contrast, MaxRényi has a far lower FPS than RISE and RE3. This is mainly because RISE and RE3 require no auxiliary models, while MaxRényi uses a VAE to estimate the probability density function. Therefore, RISE has great advantages in both policy performance and learning efficiency. C. Bullet Games 1) Experimental Setting: Finally, we tested RISE on six Bullet games [33] with continuous action spaces, namely Ant, Half Cheetah, Hopper, Humanoid, Inverted Pendulum and Walker 2D. In all six games, the target of the agent is to move forward as fast as possible without falling to the ground. Unlike the Atari games, which have graphic observations, the Bullet games use fixed-length vectors as observations. For instance, the "Ant" game uses 28 parameters to describe the state of the agent, and its action is a vector of 8 values within [−1.0, 1.0]. We leveraged multilayer perceptrons (MLPs) to implement RISE and the benchmarking methods. The detailed network architectures are illustrated in Table IV. Note that the encoder and decoder were designed for MaxRényi, and no BN layers were introduced in this experiment. Since the state space is far simpler than that of the Atari games, the entropy can be directly derived from the observations, and the training procedure for the encoder is omitted. We trained RISE for ten million environment steps. The agent was also set to interact with eight parallel environments with different random seeds, and a Gaussian distribution was used to sample actions. The rest of the settings followed those of the Atari experiments. 2) Performance Comparison: Table III illustrates the performance comparison between RISE and the benchmarking methods. Inspection of Table III suggests that RISE achieved the best performance in all six games. In summary, RISE has shown great potential for achieving excellent performance in both discrete and continuous control tasks. VI. CONCLUSION In this paper, we have investigated the problem of improving exploration in RL by proposing a Rényi state entropy maximization method to provide high-quality intrinsic rewards. Our method generalizes the existing state entropy maximization methods to achieve higher generalization capability and flexibility. Moreover, robust encoding via a VAE model and a k-value search algorithm have been developed to obtain efficient and reliable entropy estimation, which makes the proposed method practical for real-life applications. Finally, extensive simulation has been performed on both discrete and continuous tasks from the OpenAI Gym library and the Bullet library. Our simulation results have confirmed that the proposed algorithm can substantially outperform conventional methods through efficient exploration. APPENDIX A. Benchmarking Methods 1) RE3: Given a trajectory τ = (s_0, a_0, . . . , a_{T−1}, s_T), RE3 first uses a randomly initialized DNN to encode the visited states.
Denote by {x_i}_{i=0}^{T−1} the encoding vectors of the observations; RE3 estimates the entropy of the state distribution d(s) using a k-nearest neighbor entropy estimator [21], given in Eq. (23), where x̃_i is the k-nearest neighbor of x_i within the set {x_i}_{i=0}^{T−1}, m is the dimension of the encoding vectors, Γ(·) is the Gamma function, and Ψ(·) is the digamma function. Note that π in Eq. (23) denotes the ratio of the circumference of a circle to its diameter. Equipped with Eq. (23), the total reward for each transition (s_t, a_t, r_t, s_{t+1}) is computed as the extrinsic reward plus a weighted intrinsic reward (Eq. (24)), where λ_t = λ_0(1 − κ)^t, λ_t ≥ 0, is a weight coefficient that decays over time and κ is a decay rate. Our RISE method is a generalization of RE3 and provides more aggressive exploration incentives. 2) MaxRényi: MaxRényi uses a VAE to estimate d(s) and takes the evidence lower bound (ELBO) as the density estimate [23], which suffers from low efficiency and high variance.
2022-03-11T14:11:11.221Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "c7eb4eb7daf46f4d0ef848e08783bb67a64cfe57", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "cca11c388bfedf34f21a36b0d48c283b26ba1168", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
236909824
pes2o/s2orc
v3-fos-license
B cell infiltration is highly associated with prognosis and an immune-infiltrated tumor microenvironment in neuroblastoma Aim: Neuroblastoma is the most common extracranial solid tumor in children. Recent advances in immunotherapy approaches, including in neuroblastoma, have shown the important role of the immune system in mounting an effective anti-tumor response. In this study, we aimed to provide a comprehensive investigation of immune cell infiltration in neuroblastoma utilizing a large number of gene expression datasets. Methods: We inferred immune cell infiltration using an established immune inference method and evaluated the association between immune cell abundance and patient prognosis as well as common chromosomal abnormalities found in neuroblastoma. In addition, we evaluated co-infiltration patterns among distinct immune cell types. Results: The infiltration of naïve B cells, NK cells, and CD8+ T cells was associated with improved patient prognosis. Naïve B cells were the most consistent indicator of prognosis and associated with an active immune tumor microenvironment. Patients with high B cell infiltration showed high co-infiltration of other immune cell types and the enrichment of immune-related pathways. The presence of high B cell infiltration was associated with both recurrence-free and overall survival, even after adjusting for clinical variables. Conclusion: In this study, we have provided a comprehensive evaluation of immune cell infiltration in neuroblastoma using gene expression data. We propose an important role for B cells in the neuroblastoma tumor microenvironment and suggest that B cells can be used as a prognostic biomarker to predict recurrence-free and overall survival independently of currently utilized prognostic variables. INTRODUCTION Neuroblastoma is the most common extracranial childhood cancer and accounts for 8%-10% of all childhood cancers [1]. It originates from neural crest progenitor cells and can consequently occur anywhere along the sympathetic nervous system, with the most common location being the adrenal glands [2,3]. Neuroblastoma can develop sporadically or display autosomal dominant inheritance. The latter occurs most commonly due to familial mutations in the ALK or PHOX2B genes [4,5]. The prognosis of neuroblastoma patients has improved in recent years [6]. However, the 5-year survival rate of patients with high-risk disease is still below 50% [2], highlighting the need for additional therapies. Immunotherapy has recently led to a significant extension of survival rates in several adult cancers [7]. Although immunotherapy may also hold great promise for pediatric oncology, few clinical trials are currently being conducted in solid pediatric cancer types. The increased survival of patients with high-risk neuroblastoma following the success of anti-GD2 therapy exemplifies the potential of immunotherapy in neuroblastoma [8]. Thorough characterization of the tumor microenvironment (TME) is essential in identifying challenges and opportunities for additional immunotherapy use in neuroblastoma. Due to the limited availability of fresh neuroblastoma tumor material and the practical challenges of in-depth immune analyses, several studies have recently investigated the composition of the immune cell infiltrate in neuroblastoma using gene expression datasets [9][10][11][12].
These studies have shown that several immune cell types infiltrate the neuroblastoma TME, including B cells, CD 8 + T cells, NK cells, and macrophages [9][10][11][12] . In addition, these studies have also consistently shown that tumors displaying MYCN amplifications are significantly less immune infiltrated as compared to patients without MYCN amplifications [9][10][11][12] , which is consistent with more traditional immunohistochemistry (IHC) approaches [9,10] . However, several open questions remain. First, the majority of these studies only used a limited number of datasets, which potentially challenges external validity. Second, the association between individual immune cell types and patient prognosis has only been evaluated in a select number of studies for few immune cell types. Lastly, while the negative relationship between MYCN amplifications and immune infiltration is clear, the role of additional commonly altered chromosomal alterations in neuroblastoma and immune cell infiltration is unclear. In this study, we aimed at providing a comprehensive investigation of immune cell infiltration in neuroblastoma using a large number of independent gene expression datasets. We evaluated the relationship between immune cell infiltration and patient prognosis, co-infiltration of immune cells, and the association between common chromosomal abnormalities and immune cell infiltration. We found a surprising role for B cell infiltration in both prognostic and co-infiltration analyses. In conclusion, our findings both confirm previous studies and propose an important role for B cells in the TME of neuroblastoma. Utilized data A total of 11 publicly available gene expression datasets were utilized in this study. The Westerman [13] and Oberthuer [14] datasets were obtained from the European Molecular Biology Laboratory (EMBL) database under accession numbers E-TABM- 38 and E MTAB-179 , respectively. The Henrich [15] , SEQC [16] , Kocak [17] , Wang [18] , Rajbhandari [19] , Lastowska [20] , and Ackerman [21] datasets were obtained from the Gene Expression Omnibus (GEO) under accession numbers GSE 73517 , GSE 62564 , GSE 45547 , GSE 3960 , GSE 85047 , GSE 13136 , GSE 120572 , respectively. The Berwanger [22] dataset was obtained from the PREdiction of Clinical Outcomes from Genomic Profiles portal (https:// precog.stanford.edu/; accession: Berwanger_NB). The ICGC [23] dataset was obtained through the ICGC portal (https://dcc.icgc.org/). Microarray datasets were provided as normalized expression at the probeset level in which some genes might be represented by multiple probesets. We converted probeset expression into gene expression values. Specifically, for one-channel arrays, we selected the probeset with the highest hybridization intensity across all samples to represent gene expression. For two-channel arrays, the average expression values of all probesets were calculated to represent gene expression. Datasets from one-channel arrays were further median normalized for each gene to transform intensities into relative expression values. Depending on availability, associated clinical data were obtained through EMBL, GEO, or the manuscript accompanying the dataset. See Supplementary Table 1 for detailed information and available clinical variables for each dataset. Immune cell inference A detailed description of immune cell inference can be found in [24,25] . 
Briefly, patient specific immune cell type inference was determined by evaluating the similarity between six predefined gene expression weight profiles (one for each immune cell type) and patient gene expression profiles using BASE [26] , a rank-based gene set enrichment method. High similarity between a patient's gene expression profile and an immune cell weight profile resulted in high enrichment scores for that immune cell type for that particular patient. Due to the scale-free nature of resulting infiltration scores, immune cell infiltration scores are only comparable within each dataset and within an individual immune cell type. Survival analysis Survival analyses were performed using the R survival package (version 3.1-8). Log-rank tests were performed to evaluate overall survival probabilities between two groups using the survdiff function. KaplanMeier (KM) plots were generated using the survfit function. Results from Cox proportional hazards (Coxph) models shown in KM plots were performed on continuous immune infiltration scores in a univariate regression model, using the coxph function from the survival package. Shown P-values were obtained from a two-sided Wald test. Forest plots were based on the results of multivariate Coxph models in which all variables specified in the figure panels were included and immune cell infiltration was dichotomized based on the median infiltration score. Statistical methods The Spearman correlation coefficient (SCC) was reported for all correlation analyses as the assumptions underlying the Pearson correlation (i.e., normal distribution, homoscedasticity or linearity) were not met. SCC was calculated using the R function cor and significance was assessed using cor.-test. Immune cell infiltration variance explained by different chromosomal abnormalities was calculated using multivariate linear regression models using the lm and anova functions. The order of each of the four chromosomal abnormalities was randomly shuffled 100 times to obtain the standard deviation and mean variance. P-values smaller than 0.05 were considered significant. All analyses were conducted in R (version 3.6.2). Immune infiltration in neuroblastoma is associated with patient prognosis To interrogate immune cell infiltration in neuroblastoma gene expression datasets, we inferred the abundance of six common immune cell types, naïve B cells, memory B cells, CD 4+ T cells, CD 8+ T cells, NK cells, and monocytes. This method has been well-established and validated in multiple studies [24,25] . We first evaluated immune cell infiltration in the Oberthuer neuroblastoma gene expression dataset. We observed that the infiltration of certain immune cells was positively associated with overall survival, while the infiltration of other immune cells was negatively associated. High abundance of naïve B cells, memory B cells, CD 8+ T cells, and NK cells was significantly associated with longer overall survival [ Figure 1A-D]. Conversely, high infiltration of CD 4+ T cells was negatively associated with overall survival [ Figure 1E]. The infiltration of monocytes was not significantly associated with survival based on a Log-rank test [ Figure 1F]. Previous reports have suggested that few B cells infiltrate neuroblastoma tumors [27,28] and little is known about their exact role in neuroblastoma. In addition, several studies have recently shown that B cells are crucial in mounting an effective anti-tumor immune response [29][30][31] . 
In our study, naïve B cells were highly significantly associated with survival as compared to the other major immune cell types, which sparked our interest. To validate our findings in the Oberthuer dataset, we collected several additional independent datasets containing patient survival information [Supplementary Table 1] and inferred immune cell infiltration for each dataset. We found a highly reproducible pattern where high infiltration of naïve B cells was consistently associated with better overall survival [ Figure 2A]. Increased infiltration of CD 8+ T cells and NK cells were also significantly associated with longer survival in more than half of the datasets [Supplementary Figure 1A]. In addition to overall survival, recurrence-free survival was also significantly longer in patients with high naïve B cell infiltration [ Figure 2B]. In conclusion, these findings suggest that naïve B cells are a reliable prognostic indicator in neuroblastoma. Naïve B cells highly associated with survival independent of clinical variables Naïve B cells were highly associated with patient prognosis in all evaluated datasets using univariate analyses. However, several clinical variables are also known to be highly associated with overall patient survival. The presence of MYCN amplifications (MYC-Gain) automatically classifies neuroblastoma as high-risk [2,32] and patients with this amplification are treated more intensely to increase the probability of overall survival. We thus stratified patients based on MYCN amplification status and evaluated the association of naïve B cells with overall survival in MYC wild type and MYC-Gain patients [ Figure 3A]. Patients exhibiting MYC-Gains indeed had much shorter overall survival, but patients with MYC Gain and high naïve B cell infiltration did live significantly longer in two out of three evaluated datasets and the last datasets showed a trend of prolonged survival of patients with high naïve B cell infiltration [ Figure 3A]. The infiltration of naïve B cells in patients who did not have MYC-Gains was also significant in two out of three datasets (P < 0.05) [ Figure 3A]. In addition to MYCN amplification status, tumor stage and age are also important prognostic variables that are considered during risk stratification [2,32] . Even after adjusting for these prognostic clinical variables and MYCN amplification status, the infiltration of naïve B cells was still significantly associated with overall survival in all independent datasets [ Figure 3B]. Irrespective of tumor stage and patient age, the infiltration of naïve B cells was consistently associated with longer patient survival. In addition, RFS was also significantly longer in patients with high naïve B cell infiltration, irrespective of MYCN-amplification status [Supplementary Figure 1B]. Adjustment of clinical variables and MYCN amplification status in multivariate Coxph models showed that high naïve B cell infiltration is significantly associated with prolonged RFS independent of adjusted variables [Supplementary Figure 1C]. In conclusion, the infiltration of naïve B cells is associated with prognosis in neuroblastoma irrespective of clinical variables and MYCN amplification status. The infiltration of naïve B cells in neuroblastoma is correlated with an immune hot tumor microenvironment Previous studies have shown that different immune cell types are often present in a given tumor. 
Since naïve B cells were most consistently associated with prognosis, we evaluated whether these cells are correlated with the presence of other immune cell types. CD8+ T cells are of major interest due to their essential role in the anti-tumor immune response [33]. We indeed found that naïve B cells are highly correlated with the presence of CD8+ T cells [Figure 4A]. In addition, we observed an interesting pattern in which naïve B cells were highly positively correlated with the presence of memory B cells, CD8+ T cells, and NK cells, but negatively correlated with monocytes and CD4+ T cells [Figure 4B]. The pattern observed in Figure 4B was highly reproducible in additional independent datasets [Figure 4C]. Five out of six datasets showed an identical pattern of strong positive correlations with memory B cells, CD8+ T cells, and NK cells, but negative correlations with monocytes and CD4+ T cells. The remaining dataset showed positive correlations between naïve B cells and all other cell types, although the correlations with memory B cells, CD8+ T cells, and NK cells were much stronger than the monocyte and CD4+ T cell correlations. As multiple CD4+ T cell subsets are recognized, we evaluated whether we could further narrow down the precise CD4+ T cell subset present in neuroblastoma. We utilized established CD4+ T cell subset marker genes [34] and found that the inferred CD4+ T cells are most similar to activated CD4+ T cells. More specifically, both Th1 and Th2 signals were enriched [Supplementary Figure 1D]. In conclusion, the infiltration of B cells appears to be associated with a hot tumor microenvironment (TME) in neuroblastoma.

High naïve B cell infiltration is associated with enrichment of immune-related pathways
To further investigate characteristics of the TME of B cell-infiltrated neuroblastoma, we separated patients into low and high naïve B cell infiltration groups, using the median naïve B cell infiltration as the cut-off. We performed Gene Set Enrichment Analysis to assess which pathways were enriched in each patient group. A distinct biological difference was observed: pathways associated with cell proliferation were enriched in patients with low B cell infiltration, whereas immune-related pathways were enriched in patients with high B cell infiltration [Figure 5A]. For example, the Translation and Ribosome pathways were among the most highly enriched pathways in patients with low B cell infiltration [Figure 5B], potentially reflecting overall cell proliferation. Additional pathways related to cell proliferation, including eukaryotic translation initiation, rRNA processing, DNA replication, and chromosome maintenance, were also among the most highly enriched pathways in patients with low B cell infiltration [Figure 5A]. Autoimmune thyroid disease and IFNγ signaling were among the most highly enriched pathways in patients with high B cell infiltration [Figure 5C]. Pathways related to transplant rejection were also enriched, including graft-versus-host disease and allograft rejection [Figure 5A], likely reflecting an ongoing immune response in B cell-infiltrated tumors. In conclusion, low naïve B cell infiltration is associated with proliferative pathways, whereas high naïve B cell infiltration is associated with immune-related pathways.

Chromosomal abnormalities in relation to immune infiltration
Previous studies have investigated the relationship between specific copy number variations and immune cell infiltration in neuroblastoma.
Several studies have suggested that neuroblastoma tumors with MYCN amplifications are poorly infiltrated [9][10][11][12]. We indeed confirmed the negative relationship between MYCN amplification and immune cell infiltration in seven independent datasets [Figure 6A]. Naïve B cells and NK cells were the cell types most consistently associated with MYCN amplification status, showing significantly lower infiltration in MYCN-amplified samples in all seven datasets. In addition to MYCN amplifications, other chromosomal abnormalities commonly occur in neuroblastoma. Only a small number of datasets contained information on Chr1p, Chr11q, and Chr17q status, the most frequently altered chromosomal regions in neuroblastoma [2,32]. Since MYCN amplification is strongly associated with immune infiltration, we separated samples based on MYCN status and each of the other chromosomal rearrangements. Although we did observe some differences in naïve B and NK cell infiltration between samples with and without Chr17q gains, MYCN amplification status was much more strongly associated with the infiltration of these immune cells [Figure 6B]. TERT rearrangements and ATRX mutations are also commonly observed in neuroblastoma [2,32]. While no difference in immune cell infiltration based on TERT rearrangement status was observed, patients with ATRX mutations had significantly lower levels of CD4+ T cell infiltration and higher levels of monocyte infiltration than patients without ATRX mutations [Supplementary Figure 2A-B]. In addition to assessing specific genotypic groups, we also assessed how much immune cell variation can be explained by individual chromosomal rearrangements. Since the order of variables affects the percentage of variation explained by each variable in a sequential analysis of variance, we randomly shuffled the order of variables 100 times and calculated the mean and standard deviation of the percentage of immune cell variance explained (see Methods). There was considerable variation between datasets, but MYCN amplification status again showed the most consistent results, especially for naïve B cell and NK cell infiltration [Figure 6C]. MYCN amplification status explained approximately 10% of naïve B cell infiltration variance when considering four chromosomal rearrangements in the model, while MYCN status accounted for approximately 25% of NK cell infiltration variance in two out of three datasets [Figure 6C].

DISCUSSION
The presence of tumor-infiltrating leukocytes is indicative of a host immune response to tumors, and infiltrating immune cells have been shown to be predictive of clinical outcomes for neuroblastoma patients [9,35]. In our study, we show that several immune cell types are associated with recurrence-free survival (RFS) and overall survival, most notably naïve B cells, NK cells, and CD8+ T cells. We have expanded on previous immune inference studies by utilizing a large number of gene expression datasets, as well as by evaluating the association between prognosis and several immune cell types. We propose a previously unappreciated role for naïve B cell abundance in neuroblastoma, which is highly associated with survival and a hot TME. Previous studies have suggested that only a small number of B cells infiltrate neuroblastoma tumors [27,28]. However, larger numbers of B cells might reside just outside the tumor. The presence of organized lymphoid structures and B cell follicles at the edges of neuroblastoma tumors has been observed [27].
We hypothesize that the small number of tumor-infiltrating B cells might originate from these B cell-enriched locations, which might not always be captured during biopsies or tissue sectioning. This hypothesis is in line with recent observations of B cells in other cancer types, where B cell follicles can reside at the tumor margin [29][30][31]. The presence of these B cell structures is highly associated with survival and an effective anti-tumor immune response [29][30][31]. Several anti-tumor mechanisms could underlie the association between B cell infiltration and patient survival. For example, antigen-specific interactions between T cells and B cells in tumor B cell structures promote CD8+ T cell cytotoxicity in the TME [37,38]. Another anti-tumor mechanism is the secretion of tumor-specific antibodies that mediate opsonization, antibody-dependent cellular cytotoxicity by NK cells, or tumor cell phagocytosis by macrophages and granulocytes [39]. Lastly, the secretion of cytokines, including IFNγ and IL-12, by B cells promotes further activation of anti-tumor CD8+ T cells and NK cells [39]. We confirmed the findings of previous studies showing that MYCN-amplified neuroblastoma tumors have significantly lower immune cell infiltration than tumors without MYCN amplification [9][10][11][12]. When separating patients with and without MYCN amplifications, we still observed that naïve B cell infiltration was associated with overall survival and RFS. This is consistent with a previous study reporting that certain immune characteristics are associated with patient survival irrespective of MYCN amplification status [40]. B cell infiltration could thus be used as a prognostic marker in neuroblastoma in addition to commonly utilized prognostic indicators such as age, stage, and MYCN amplification status. Consistently, we observed significant associations between B cell infiltration and prognosis when adjusting for clinical variables and MYCN status. Although our study provides important insights into immune infiltration in neuroblastoma, a few limitations should be noted. First, all of our findings are based on gene expression data, which might not always recapitulate protein expression. Protein-based approaches such as immunohistochemistry should be used to corroborate our finding of high B cell infiltration in patients with a better prognosis. Evaluation of the presence of tertiary lymphoid structures adjacent to neuroblastoma tumors should be performed as well. Second, although we attempted to evaluate the association between immune cell infiltration and common chromosomal abnormalities in neuroblastoma, notably Chr1p deletion, Chr11q deletion, and Chr17q gain, only a few datasets contained this information. Additional studies with available information on chromosomal deletions and gains should be performed to validate our findings. Lastly, our prognostic analyses were all performed retrospectively, and prospective studies should evaluate the exact value of B cells as a prognostic biomarker in neuroblastoma. In conclusion, we have provided a comprehensive evaluation of immune cell infiltration in neuroblastoma using gene expression data. The infiltration of naïve B cells, NK cells, and CD8+ T cells is associated with better prognosis in neuroblastoma, with naïve B cells being the most consistent prognostic indicator. Based on our further analyses, we propose a critical role for B cells in the neuroblastoma TME.
The presence of high B cell infiltration is associated with an immune-infiltrated TME and could be used as a prognostic biomarker to predict recurrence-free and overall survival independently of currently utilized prognostic variables.
2021-08-03T23:25:59.738Z
2021-06-06T00:00:00.000
{ "year": 2021, "sha1": "385c142a61297c17e1eb99ef5cf265b5da5b896c", "oa_license": "CCBY", "oa_url": "https://jcmtjournal.com/article/download/4090", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "ee99a146c65a944265b788c07606e8141068478b", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
268348476
pes2o/s2orc
v3-fos-license
Evaluation of the real applicability of diagnosis-related groups for urinary tract lithiasis surgeries

ABSTRACT
Objective: To investigate how transureteroscopic ureterolithotripsy (URL) and percutaneous nephrolithotomy (PCNL) vary in terms of length of postoperative hospital stay and whether they should be coded in separate DRGs. Methods: The medical records of 893 patients who underwent URL or PCNL surgeries between September 2017 and February 2021 at a University Hospital were analyzed. The variables analyzed were gender, age, type of procedure, location and size of the stone, presence of previous comorbidities or urinary tract infection, length of hospital stay, and stone recurrence. Associations and comparisons between variables were assessed using the Chi-square and Mann-Whitney tests. Results: Despite being grouped in the same DRG codes, URL and PCNL have different mean hospitalization periods. Most patients who underwent URL were hospitalized for only one day and most patients who underwent PCNL were hospitalized for two days. Conclusion: This study demonstrates that the average length of hospital stay after URL was shorter than the hospitalization period recommended by the DRG for this procedure. Thus, a reassessment of the DRG codes is necessary and the creation of subdivisions separating URL and PCNL into distinct codes is required.

Introduction
Nephrolithiasis is one of the most common urological conditions in clinical practice. Recent estimates indicate a prevalence of 10.6% for men and 7.1% for women in the US population [1]. In addition, the lifetime risk of developing an episode has increased continuously in the last two decades, reaching a rate of 10 to 15%. There is also a 50% risk of recurrence within 10 years [2].

This pathology is also related to major financial impacts on the health system. In the year 2000, the total cumulative cost for the care of patients with urolithiasis was estimated at 2.1 billion dollars. Due to population growth and the increasing prevalence of obesity and diabetes mellitus, the costs of managing this disease are expected to increase by 1.24 billion dollars per year by 2030 [1].

The treatment of ureteral stones ranges from expectant conservative management to minimally invasive procedures, which allow a faster and less painful recovery. Thus, transureteroscopic ureterolithotripsy (URL) has been shown to be the treatment of choice for the removal of stones located in the middle or lower third of the ureter due to its high effectiveness and safety [3].

Percutaneous nephrolithotomy (PCNL), in turn, is indicated for patients with intrarenal stones larger than 20 mm or stones larger than 10 mm in the lower pole of the kidney [4]. However, despite being a minimally invasive procedure, PCNL is a major operation, with a risk of significant complications. The nephroscope can damage the kidney parenchyma and the calyceal neck, resulting in an increased risk of bleeding [5]. In addition, in some cases, it is not possible to guarantee a stone-free status for the patient [6].
Transureteroscopic ureterolithotripsy and percutaneous nephrolithotomy are, therefore, widely different procedures in several aspects and have varying degrees of complexity, with different rates of postoperative complications and lengths of hospitalization. However, both are included in the same codes of the Diagnosis-Related Groups (DRG), a classification system for patients admitted to hospitals. The DRG was developed with the objective of providing a set of predictable products or services for classes of patients with similar care processes, with subcategories created based on variables shown to affect the length of hospital stay [7][8][9].

Despite the great success achieved by the DRG methodology in reducing hospital costs, the coding of diseases presents certain obstacles. The DRG classification system was developed despite these difficulties, disregarding the variability existing within the same group [10]. Therefore, in some areas, a single code has been used to describe patients receiving a broad spectrum of therapeutic regimens. This is the case for DRG codes 660 (kidney and ureter procedures for non-neoplasm with complication or comorbidity) and 661 (kidney and ureter procedures for non-neoplasm without complication or comorbidity), which encompass both percutaneous nephrolithotomy and transureteroscopic ureterolithotripsy. Thus, more studies are needed on the variability existing within urinary tract lithiasis surgeries and the real applicability of the DRG for these procedures.

Our study aims to investigate how these highly distinct procedures vary in terms of length of postoperative hospital stay and whether they should be coded in separate DRGs. The alternative hypothesis is that the DRG classification is inadequate for urinary tract lithiasis procedures covered by codes 660 and 661. The null hypothesis is that these two DRG codes are correct.

Methods

Selection criteria
In this retrospective study, a total of 893 patients who underwent transureteroscopic ureterolithotripsy or percutaneous nephrolithotomy for the treatment of urinary tract lithiasis at a University Hospital in Belo Horizonte, Minas Gerais, Brazil, between September 2017 and February 2021 were selected. The inclusion criteria were that patients should have undergone transureteroscopic ureterolithotripsy or percutaneous nephrolithotomy and been coded in DRG 660 or 661. Thirty-seven patients who did not have data on the size and location of the stone, making the analysis unfeasible, were excluded from the study.

This study was approved by the Research Ethics Committee of a Medical College in Brazil (4.956.203). The ethics committee waived the requirement for a signed informed consent form, since this was retrospective research using electronic medical records with no interference in patient care. In addition, the number of participants was very large and most would have been difficult to locate, as they no longer regularly attended the institution.
Instruments and procedures
Data collection was carried out from the tabulation of information recorded in the DRG system of a University Hospital and in the medical records of patients submitted to transureteroscopic ureterolithotripsy and percutaneous nephrolithotomy at the same institution. The variables analyzed were: gender, age, previous comorbidities, presence of associated urinary tract infection, size and location of the stone, type of procedure performed, surgical complications, and length of postoperative hospital stay.

Statistical analysis
Categorical variables were presented as absolute and relative frequencies, and numerical variables as mean ± standard deviation and median (1st quartile-3rd quartile). The associations of variables with the type of procedure, length of hospital stay, and recurrence were evaluated using the chi-square test. Age was compared between types of procedure using the Mann-Whitney test, and across length-of-stay categories using the Kruskal-Wallis test, with multiple comparisons performed using the Nemenyi test. The analyses were performed using R software version 4.0.3 and a significance level of 5% was considered.

Results
The study included 856 patients who underwent percutaneous nephrolithotomy or transureteroscopic ureterolithotripsy. The mean age was 48.0 ± 14.3 years, with 436 (50.9%) males and 420 (49.1%) females. Of the total number of patients, 170 (19.8%) had systemic arterial hypertension as a comorbidity, 14 (1.6%) had diabetes mellitus, and 66 (7.7%) had both hypertension and diabetes. A total of 48 (5.6%) of the study participants had urinary tract infection associated with urolithiasis in the pre- or postoperative period. Of a total of 856 patients evaluated, 694 (81%) underwent transureteroscopic ureterolithotripsy, while 163 (19%) underwent percutaneous nephrolithotomy. DRG 660 included 111 patients, of whom 97 underwent URL and 14 underwent PCNL. DRG 661 included 745 participants, of whom 597 underwent URL and 148 underwent PCNL (Table 1).

There was a predominance of kidney stones with a size between 10 and 20 mm, which corresponded to 74 (37.9%) participants, followed by patients with multiple intrarenal stones, corresponding to 71 (36.4%) cases. Kidney stones smaller than 10 mm and larger than 20 mm occurred in 15 (7.7%) and 35 (17.9%) individuals, respectively. Among the ureteral stones, there was a predominance of stones with a size between 5 and 10 mm, which corresponded to 374 (59.7%) cases. Ureteral stones with a size between 11 and 15 mm had the second highest incidence, with 176 (28.1%) patients. Ureteral stones measuring 16 to 20 mm and stones larger than 20 mm had incidences of 4.3% and 7.8%, respectively (Table 1).

Regarding length of hospital stay, a total of 500 (58.3%) patients were hospitalized for only 1 day, while 203 (23.7%) were hospitalized for 2 days, 25 (2.9%) for 3 days, and 129 (15.1%) for more than 3 days. Of the total number of patients involved in this study, 124 (14.5%) had stone recurrence (Table 1).
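For illustration, the comparisons described under Statistical analysis can be reproduced with a few base R calls along the following lines. The data frame and column names (procedure, age, los_days, recurrence) are hypothetical placeholders rather than the study's actual variable names, and the Nemenyi post-hoc step is only indicated in a comment.

```r
## Minimal sketch of the statistical comparisons described above (hypothetical columns).
d <- read.csv("lithiasis_drg.csv")    # hypothetical input file
d$los_cat <- factor(d$los_days)       # length-of-stay category (1, 2, 3, >3 days)

# Associations between type of procedure and categorical variables (chi-square test)
chisq.test(table(d$procedure, d$los_cat))
chisq.test(table(d$procedure, d$recurrence))

# Age compared between URL and PCNL (Mann-Whitney / Wilcoxon rank-sum test)
wilcox.test(age ~ procedure, data = d)

# Age compared across length-of-stay categories (Kruskal-Wallis test);
# pairwise Nemenyi post-hoc comparisons are available in packages such as PMCMRplus
kruskal.test(age ~ los_cat, data = d)
```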
Among patients undergoing transureteroscopic ureterolithotripsy there was a 5.3% incidence of urinary tract infection (UTI), while in percutaneous nephrolithotomy this incidence was 6.7%. Therefore, there was no significant association between the type of procedure and the occurrence of urinary tract infections (p = 0.604). There was also no significant association between the type of procedure and the rate of stone recurrence (p = 0.471), since the number of recurrences after transureteroscopic ureterolithotripsy was 97 (14%) and after percutaneous nephrolithotomy was 27 (16.6%) (Table 2).

There was a significant association between the type of procedure and length of hospital stay (p < 0.001). Among the individuals submitted to transureteroscopic ureterolithotripsy, there was a predominance of patients hospitalized for only 1 day, corresponding to 464 (66.9%) patients. In the postoperative period of percutaneous nephrolithotomy, there was a predominance of individuals hospitalized for 2 days, corresponding to 103 (63.2%) patients (Table 2).

There was a significant association between the proportion of patients with associated urinary tract infection and the length of hospital stay, such that the proportion of patients with UTI increased with the length of hospital stay (p < 0.001). A total of 2 (0.4%) patients with UTI were hospitalized for only one day, 14 (6.9%) patients for 2 days, and 32 (20.8%) for 3 or more days (Table 3).

Discussion
Our findings indicated that the average length of hospital stay was significantly different between transureteroscopic ureterolithotripsy and percutaneous nephrolithotomy, being 1 day and 2 days, respectively. Seitz et al. (2012) [11] attribute the longer expected hospitalization time of PCNL to the higher risk of bleeding in the postoperative period. However, these two procedures are both grouped in the same DRG codes, 660 and 661. According to the DRG database and website, DRG 660 stipulates 2.8 days of hospitalization while DRG 661 suggests only 1.4 days [12]. This confirms our main hypothesis that URL is wrongly classified in the DRG and that a new subdivision should be created to stipulate a shorter hospitalization time for surgical procedures on the ureter. Krambeck et al. (2008) [13] evaluated 19-year follow-up data of 754 patients who underwent percutaneous nephrolithotomy, and the stone recurrence rate identified in that study was approximately 37%. Carr et al. (1996) [14] assessed the stone-free status of 62 patients, and the post-PCNL recurrence rate at 1 and 2 years was 4.2% and 22.6%, respectively. Also, in a study by Raman et al. (2009) [15], 537 patients who underwent percutaneous nephrolithotomy were evaluated and residual fragments were identified in 42 (8%) of the patients. The most common site for residual fragments was the lower calyx (47%). They also reported that 11 (61%) of the 18 patients who experienced a stone-related event required a secondary surgical intervention.

A study by Wang et al. (2019) [16] appraised 178 patients who underwent URL and 201 patients who underwent PCNL over a period of 2 years, and found recurrence rates of 5.06% and 3.48%, respectively. Therefore, they found no significant difference between the two groups (p > 0.05). Similarly, in our study, for a total

Table 2. Comparison of variables with the type of procedure.
2024-03-12T15:36:29.368Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "1a8a8def3ff742a8eb8f9d91bfb1d927ec8409c7", "oa_license": "CCBY", "oa_url": "https://doi.org/10.5935/2238-3182.2023e33117-en", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "1cd0e25c4fd11ef24ba43437558bd74d10bfa6eb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
116953436
pes2o/s2orc
v3-fos-license
Geometrically Underpinned Maximally Entangled States Bases Finite geometry is used to underpin finite, two d-dimensional particles Hilbert space, d=prime 6= 2. A central role is allotted to states with mutual unbiased bases (MUB) labeling. Dual affine plane geometry (DAPG) points underpin single particle, MUB labeled, product states. The DAPG lines are shown to underpin maximally entangled states which form an orthonormal basis spanning the space. The relevance of mutually unbiased collective coordinates bases (MUCB) for dealing with maximally entangled states is discussed and shown to provide an economic alternative mode of study. These maximally entangled, geometrically reasoned states, provide the resource to a transparent solution to what may be termed tracking of the Mean King Problem (MKP): here Alice prepares a state measured by King along some orientation which Alice succeed in identifying with a subsequent measurement. Brief expositions of the topics considered: MUB, DAPG, MUCB and the MKP are included, rendering the paper self contained. INTRODUCTION Several recent studies [1, 3-7, 12, 15, 16] consider the affinity of finite, d, dimensional Hilbert space to finite Galois fields, GF(d), and thereby to finite geometry. These interrelations are of interest as they illuminate both subjects. The present work contains a novel intuitive geometrical underpinning for the MUB structure of d 2 dimensional Hilbert space accommodating two d-dimensional particles. (d=prime ( = 2).) The study gives for the first time, to our knowledge, explicit formulae that relates lines and points of the geometry to states (rather than projectors) [17] allowing a geometric view of the relation between product and maximally entangled states [23]. The analysis underpins Hilbert space states (and operators) with geometrical points and lines. In particular we introduce, in section IV, a simple, i.e. universal, balancing term. This term, denoted by R, arises upon the association of states addition in Hilbert space with geometrical requirements among points and lines.( The corresponding term is the unit operator in studies, [1,18], wherein the association was to Hilbert space projectors [1,17].) Its origin, given in the uncanny affinity of mutual unbiased bases (MUB) labeling for a geometrical coordination scheme, is outlined in section IV. Included in this section is a brief exposition of of DAPG which we use to underpin a d-dimensional single particle Hilbert space state projectors [1,2,12,17,28] the results of which are used in the present analysis that pertains to a d 2 dimensional Hilbert spaces. In section V we give the central result of this paper i.e. the demonstration that the state underpinned with geometrical line is a maximally entangled state of remarkable attribute: its overlap with (judicially defined to relate to one coordinate point) two particles product state is definitive. Thus it is 1/d if the underpinning point is on the line, nil otherwise. Since d lines share a point this relates to d lines. This holds while the constituent single particle state projectors have nonvanishing overlap with each of the d 2 (orthogonal maximally) entangled states that span the space. This issue is elaborated on in section VIII and leads to a novel tracking of the Mean King problem outlined in section IX : Alice produces a state which allow the tracking of the alignment of the King apparatus used in his measurement of the state by one subsequent measurement. 
An alternative approach to the construction of a d 2 dimensional maximally entangled states basis is given in section VII where these states are shown to be product states of collective coordinates MUB bases. The theory of this aproach is outlined in sections II and III. The account via the collective MUB states proves to be more economic and, perhaps, more informative physically. ∀|u , |v ǫ B 1 , B 2 resp., The physical meaning of this is that knowledge that a system is in a particular state in one basis implies complete ignorance of its state in the other basis. Ivanovich [25] proved that there are at most d+1 MUB, pairwise, in a d-dimensional Hilbert space and gave an explicit formulae for the d+1 bases in the case of d=p (prime number). Wootters and Fields [8] constructed such d+1 bases for d = p m with m an integer. Variety of methods for construction of the d+1 bases for d = p m are now available [2,9,26,28]. Our present study is confined to d = p = 2. We now give explicitly the MUB states in conjunction with the algebraically complete operators [14,24] set:Ẑ,X. Thus we label the d orthonormal states spanning the Hilbert space, termed the computational basis, by |n , n = 0, 1, ..d − 1; |n + d = |n Ẑ |n = ω n |n ;X|n = |n + 1 , ω = e i2π/d . The d states in each of the d+1 MUB bases [14,26] are the states of the computational basis and the d bases: Here the d sets labeled by b are the bases and the m labels the states within a basis. We have [26] For later reference we shall refer to the computational basis (CB) by b =0. Thus the above gives d+1 bases, b =0, 0, 1, ...d − 1 with the total number of states d(d+1) grouped in d+1 sets each of d states. We have of course, The MUB set is closed under complex conjugation: This completes our discusion of single particle MUB. Note that the formula is schematic. Thus |n 1 , n ′ 2 = |n 1 |n ′ 2 , |n r , n ′ c = |n r |n ′ c i.e. they refer to distinct bases. We have then, |n r , n c = |n 1 , n 2 , f or n r = (n 1 − n 2 )/2, n c = (n 1 + n 2 )/2 ⇄ n 1 = n r + n c , n 2 = n c − n r . (3.7) Thence we may consider collective MUB, Incorporating the respective CB b s =0 s it is proved in [23] that the two d-dimensional particles state, is a maximally entangled state. (For b r = b c it is a product state for both particles and collective coordinates.) Indeed that it is a maximally entangled state may be seen by tracing out the first particle coordinates, where we used Eqs. (3.7,3.8). This completes our review of mutual collective unbiased bases (MUCB). IV. FINITE GEOMETRY AND HILBERT SPACE OPERATORS We now briefly review the essential features of finite geometry required for our study [1,10,11,29]. A finite plane geometry is a system possessing a finite number of points and lines. There are two kinds of finite plane geometry: affine and projective. We shall confine ourselves to affine plane geometry (APG) which is defined as follows. An APG is a non empty set whose elements are called points. These are grouped in subsets called lines subject to: 1. Given any two distinct points there is exactly one line containing both. 2. Given a line L and a point S not in L (S ∈ L), there exists exactly one line L ′ containing S such that L L ′ = ∅. This is the parallel postulate. 3. There are 3 points that are not collinear. It can be shown [10, 29] that for d = p m (a power of prime) APG can be constructed (our study here is for d=p). 
Furthermore The existence of APG implies [10, 29]the existence of its dual geometry DAPG wherein the points and lines are interchanged. Since we shall study extensively DAPG, we list its properties [10,29]. We shall refer to these by DAPG(·): a. The number of lines is e. Each point of a set of disjoint points is connected to every other point not in its set. DAPG(c) allows the definition, which we adopt, of S α in terms of addition of L j which acquires a meaning upon viewing the points (S α ) and the lines (L j ) as underpinning Hilbert space entities (e.g. projectors or states, to be specified later): This equation, Eq(4.3), reflects relation among equivalent classes within the geometry [10]. It will be referred to as the balance formula: the quantity R serves as a balancing term. Thus, Eqs.(4.1),(4.3) imply, (Note that in previous studies, [1,17], where the geometrical point S α underpins the projector, S α → α=m,b ≡ |m, b b, m| gives R = I, i.e. independent of α.) A particular arrangement of lines and points that satisfies DAPG(x), x=a,b,c,d,e is referred to as a realization of DAPG. We outline in Appendix A the reasoning and proofs for the geometrically based interrelation among the geometrically underpinned Hilbert space operators. This completes our review of finite geometry. V. REALIZATION OF DAPG We now consider a particular realization of DAPG of dimensionality d = p = 2 which is the basis of our present study. We arrange the aggregate, the d(d+1) points, α, in a d · (d + 1)matrix like rectangular array of d rows and d+1 columns. Each column is made of a set of d points R α = α ′ ǫα∪Mα S α ′ ; (DAPG(d)). We label the columns by b=0,0,1,2,....,d-1 and the rows by m=0,1,2...d-1.( Note that the first column label of0 is for convenience and does not relate to a numerical value. It designates the computational basis, CB.) Thus α = m(b) denotes a point by its row, m, and its column, b; when b is allowed to vary -it gives the point's row position in every column defing thereby the line.. We label the left most column by b=0 and with increasing values of b, that relates to the basis label, we move to the right. Thus the right most column is b=d-1. The top most point in each column is labeled by m=0 with m values increasing as one move to lower rows -the bottom row being m=d-1. e.g. for d=3 the underpinning's schematics is: ( In the Hilbert space realization of DAPG, A stands for the Hilbert space entity being underpinned with coordinated point, (m,b). In [17] A represented an MUB projector: A α=(m,b) = α = |m, b b, m|. In the present paper A will be seen to signify a two particles state to be specified in a subsequent section.) We now assert that the d+1 points, m j (b), b = 0, 1, 2, ...d − 1, and m j (0), that form the line j which contain the two (specific) points m(0) and m(0) is given by (we forfeit the subscript j -it is implicit), The rationale for this particular form is clarified in the next section. Thus a line j is parameterized fully by j = (m(0), m(0)). We now prove that the set j = 1, 2, 3...d 2 lines covered by Eq.(5.1) with the points as defined above forms a realization of DAPG. 1. Since each of the parameters, m(0) and m(0), can have d values -the number of lines d 2 . The number of points in a line is evidently d+1 -one in each column: The linearity of the equation precludes having two points with a common value of b on the same line. DAPG(a). 2. Consider two points on a given line, 4. Consider two arbitrary points not in the same set, R α defined above: . 
The argument of 2 above states that, for d=p, there is a unique solution for the two parameters that specify the line containing these points. DAPG(e). In [17,18] we considered the DAPG's points as MUB projectors ( the present paper involves the underpinning of product states by the geometrical points): This scheme allows relating the underpinning DAPG lines to interrelation among the Hilbert space operators (or states) that form those lines as follows. For b =0 we have, cf. Eq.(2.3), The equality holds whenever, for fixed n, n ′ n = n ′ , We now assert that that all the d projectors, one for each value of b, with fixed value of n + n ′ belong to a line. Adjuncted by the projector |m m| (that belong to the first column, b =0), with 2m = n + n ′ , the set now forms line. We now note that projectors m,b that form the line share necessarily all the non diagonal matrix elements s| m,b |s ′ with s + s ′ =m and all the diagonal elements (=1/d). This, while matrix elements not abiding with these requirements are distinct. With this we may now evaluate the line operator for this underpinning scheme, [17,18], noting that for this case, as noted above, the balance formula Eq.(4.3), is R = I. To illustrate these considerations we evaluateP j , j = (m = 1, m(0) = 2). Via Eq.(5.1) and Eq.(4.4) we havê P j = (1,0) + (2,0) + (1,1) + (0,2) − I. (5.6) Via Eq. (5.4), Eq.(5.6) now givesP The general formula for the matrix elements of the line operator is The proof of this is outlined in Appendix B [17]. This mapping of the Hilbet space projectors onto lines and points of the underpinning geometry was shown in [18] to allow a convenient finite dimensional Radon trqansform. VI. GEOMETRIC UNDERPINNING OF TWO-PARTICLES STATES We now consider DAPG underpinning for states of a d 2 dimensional two particles, each of d-dimensional Hilbert space. Our coordination scheme is as outlined above α = (m, b); j = (m, m(0)), m(b) = m(0)+b/2(2m−1). However, now each point will refer to a two-particles state as is specified below. We have thus, |A α ; α = 1, 2....d(d + 1), |P j ; j = 1, 2, ..d 2 . (6.1) |A α are underpinned with the d(d+1) points, S α while the |P j with the d 2 lines, L j . We define the states , |A α , underpinned by the geometrical points , by |m,b is given by Eq.(2.6). With this we return to states interrelation implied by the geometry: Eqs.(4.1),(4.4) now read We now show that with the choice ,m = d − m,b = d − b, the balance formula, viz the base independence of the balancing term, Eq.(4.3), holds: cf. [12,20], This of course includes the first column, b =0, with the "point" in the n ′ row underpinning the state |n ′ 1 |n ′ 2 . The relation among the matrix elements of projectors, (m,b) = |m, b b, m|, residing on the line given by Eq.(5.1), [17,18], with the two particle states, |A (m,b) = |m, b 1 |m,b 2 , residing on the equivalent line, Eq.(5.1), are now used to obtain an explicit formula for the line state, Where we used Eq(2.6) and Eq.(5.9). The expression for the line state will be put now in a more pliable form, [20], The inversion operator I is defined via I|n = | − n = |d − n .X,Ẑ are defined in section II. The orthonormality of |P j is proved in appendix C. The central result of our geometrical underpinning is the following intuitively obvious overlap relation ) is on the line j the overlap is non zero. 
This is a remarkable attribute: Each and every one of the observables |m, b 1 |m,b 2 b ,m| 2 b, m| 1 has a definite and known value if measured in the state |P (m,m(0)) yet its constituents single particle observables do not commute. Indeed the single particle e.g. |m, b 1 b, m| has a finite probability to be found anywhere (on every line). The probability of finding our system in the state |A α given that the system is in the state |P j , α ∈ j is 1 d . We note, however, that there are d+1 points α, exposing that these probabilities are not mutually exclusive. This can be directly checked by noting the non vanishing of the overlap, | A α |A α ′ | = 1/d, α = α ′ , α, α ′ ∈ j. The probability when α ∈ j is nil. This allows a new approach to the Mean King Problem to which we shall turn after the collective coordinate formulation. VII. GEOMETRIC VIEW OF COLLECTIVE COORDINATES FORMULATION The simplification offered by the collective formulation is illustrated by considering the balance term, cf. Eq(4.2), including normalization, Eq.(6.5), We used Eq.(3.7)to get n ′ r = 0, n" c = 0. The RHS reads that within the collective coordinates the state |R is a product state: In the r (relative) coordinates it is in computational basis (b r =0) with eigenvalue (ofZ r ) 1 (i.e. m=0). In the c (center of mass) coordinate space it is in b c = 0 with eigenvalue (ofX c ) 1 too. We now turn to the expression for |P j within the collective coordinates system, using Eq.(6.5), |P j=m,m(0) = 1 √ d n,n ′ |n 1 |n ′ 2 δ n+n ′ ,2m ω −(n−n ′ )m(0) = 1 √ d n,n ′ n ′ r ,n"c |n ′ r |n" c n ′ r , n" c |n 1 , n ′ 2 δ n+n ′ ,2m ω −(n−n ′ )m(0) Identifying the state as a product state in the collective coordinates. (The product state above is notationally simplifid by |m c |2m 0 r .) The collective coordinate expression for the particles product state |A α=(m,b) is, 3) The probability amplitude of finding the particles in the state |A α=(m,b) given a system in the state |m;0 c |2m(0); 0 r ≡ |m c |m 0 r is Thus the probability is 1 d if the state is on the line (nil if it is not), confirming Eq.(6.7) and the efficiency of the collective coordinate formulation. VIII. LEAKY PARTICLES The maximally entangled state, Eq.(7.2), |P j=(m,m0) ≡ |m c |2m 0 r , was viewed as a "line" state. I.e. the product states underpinned by the geometrical point, α = (m, b), whose coordinates, (m,b), abides by the line equation, Eq. (5.1), form form line in the sense that ( cf. Eq.(VI)), Thus a pair of particles (the particle and its mate, the tilde particle) whose coordinates are α = m, b do wholy belong to the d lines that share the coordinated point. However each of the constituent particles (either 1 or 2) is equaly likely to be in anyone the d 2 of the lines, It is this attribute that allows the tracking of the King measurement alignment. IX. TRACKING THE MEAN KING The Mean King Problem (MKP), initiated by [13], was analyzed in several publications -see the comprehensive list in [12]. Briefly summarized it runs as follows. Alice may prepare a state to her liking. The King measures it in an MUB basis (i.e. for some value of b: a particular alignment of his apparatus). He does not inform Alice of his observational result nor the basis he used. Alice performs a control measurement of her choice. After her control measurement the King informs her the basis, b, he used for his measurement. Thence she must deduce the actual state (m,b) that he observed. 
In our case of tracking the King -He does not inform Alice of the basis he used -her control measurment is designed to track the basis used. (Note that in all the analyses time evolution is ignoredpresumed to be independently accountable.) The state that Alice prepars is one of the line vectors, [21], the line label specified withm and m(0) may be viewed as specification in terms of initial position and momentum [22]. The states |P j , j = 1, 2, ...d 2 are maximally entangled and form an orthnormal basis that spans the space. These states have a remarkable attribute: the probability of finding it in the state |A α , α ∈ j is 1/d, it vanishes otherwise. The number of points ,α, on a line is d+1 -reflecting the non-exclusiveness of these probabilities which , in turn, allows the tracking of the King measurement as is accounted above. We gave an alternative , perhaps more economic, parametrization of the d 2 maximally entangled states that span the space -parametrization based on a collective, viz center of mass and realtive, coordinates. Here the state vector underpined with a geometrical line is given by a product state of the collective coordinates, |P j=m,m0 = |m c |2m 0 r (c and r are center of mass and realtive coordinattes respectively). These states were shown to provide simplified noation for the calculation as well as a novel view of maximally entangled states. It was shown that adding up product states in geometrically reasoned manner yields maximally entangled states. These states are shown to be product states of two particles collective (center of mass and realtive) coordinates. The states are such as to allow unambiguous tracking of alignment of measurement of their constituent single particle. Appendix A: Geometrically based Hilbert space operators' interrelation Our task is to define consistently addition (and subtractions) of " line" and "point" Hilbert space operators (or states) which are underpinned by geometrical points and lines assuring that they abide by their geometrical underpinning interrelation. The logical interrelation symbols (S and L represents the geometrical point and line respectively), d α∈j S α = L j ; d j∈α L j = S α are to be realized by addition ( and subtraction) of Hilbert space entities, operators or states, supplamented with numerical values. Our starting point is : where we underpinned the Hilbert space operator (or state) A α with the point S α . We now consider a particular realization of the geometry, i.e. a set up where the points and lines abide by the geometry are realized by marking the points on each line subject to DAPG requirements such as, e.g., two distinct lines have a single point in common. The geometry is then realized via Eq.(10.1), coordinated as specified by MUB labelings. It, then, follows via DAPG(a,c,d,e) -cf. Eq.(4.2), that The RHS is clearly a universal quantity (i.e. independent of α and j) which implies that the LHS, i.e. universal too. Since a line is made of points we consider (try) where R is a universal quantity that may be required to balance the equation. Returning to Eq.(10.1) with Eq. (10.3), the geometry implies via DAPG(c,d), We illustrate the consistency of this by showing the validity of the geometrically derived realtion, Eq.(4.3): Where we used the universality of R and DAPG(c,d). Thus With, d . A line ,P j is given by the contribution of point projectors α with common matrix elements, i.e. α and α ′ belong to the same line whenever n| α |n ′ = n| α ′ |n ′ ( for b =0) . 
This reads for n+n'=common constant which we chose to be 2m -which a point on the line on the b =0 column. The next point on the line is the value of m on the b=0 column, m(0) = m 0 . The line is now defined by the two points j = (m, m 0 ). All other points are now given by Since alll theother matrix elements of the projectors forming the line are distinct and are, each, a d-root of unity their sum add up to zero. Thus the final formula forP j is n|P j |n ′ = δ (n+n ′ ),2m ω −(n−n ′ )m0 . QED Appendix C: Orthogonality of |Pj Noting that R|R = d and P j |R = d + 1, We get, for j=j': Where we used that d+1 α =α ′ [ A α |A α ′ = (d + 1) d d = d + 1. For j = j ′ the geometry dictates, DAPG(b), that distinct lines share one point. Thus the first term above is 1 rather than 1+d hence P j |P j ′ = 0, j = j ′ i.e. P j |P j ′ = δ j,j ′ . QED (10.8) Note: Using the collective coordinates , |P j=(m,m(0)) = |m c |m 0 r , the proof is immediate.
2012-07-09T18:56:26.000Z
2012-06-02T00:00:00.000
{ "year": 2012, "sha1": "6d2df01de63099090f55c7c2139de1d72c781f1b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "6d2df01de63099090f55c7c2139de1d72c781f1b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
4539390
pes2o/s2orc
v3-fos-license
Equivalences between learning of data and probability distributions, and their applications Algorithmic learning theory traditionally studies the learnability of effective infinite binary sequences (reals), while recent work by [Vitanyi and Chater, 2017] and [Bienvenu et al., 2014] has adapted this framework to the study of learnability of effective probability distributions from random data. We prove that for certain families of probability measures that are parametrized by reals, learnability of a subclass of probability measures is equivalent to learnability of the class of the corresponding real parameters. This equivalence allows to transfer results from classical algorithmic theory to learning theory of probability measures. We present a number of such applications, providing many new results regarding EX and BC learnability of classes of measures, thus drawing parallels between the two learning theories. Introduction One of the central problems in statistics is, given a source of random data to determine a probability distribution according to which the data were generated [Vapnik, 1982]. This problem has also been studied extensively in the context of computational learning, in particular the probably approximately correct (PAC) learning model, starting with [Kearns et al., 1994]. More recently, [Vitanyi and Chater, 2017, Bienvenu et al., 2014 initiated the study of this problem in the context of algorithmic learning, 1 based on [Gold, 1967] and the theory of algorithmic randomness and Kolmogorov complexity. A class of computable probability measures C is learnable, if there exists an algorithm that while reading increasingly longer segments of any algorithmically random binary stream X with respect to some µ ∈ C, it eventually determines a description of some µ ′ ∈ C with respect to which X is algorithmically random. 2 There are a number of ways to formalize this definition, many akin to the various learning notions from algorithmic learning theory that originate in [Gold, 1967]. A learner is simply a function L : 2 <ω → N. We refer to infinite binary streams (sequences) as reals. According to [Gold, 1967], a class C of computable elements of 2 ω is EX-learnable if there exists a learner L such that for each Z ∈ C we have that lim n L(Z ↾ n ) exists and equals an index of Z as a computable function. Similarly, C is BC-learnable if there exists a learner L such that for each Z ∈ C there exists some n 0 such that for all n > n 0 the value of L(Z ↾ n ) is an index of Z. In this paper we study explanatory (EX) learning, behaviorally correct (BC) learning and partial learning of probability measures, based on the classic notion of algorithmic randomness by [Martin-Löf, 1966]. Given a measure µ on the reals and a real X, we say that X is µ-random if it is algorithmically random with respect to µ. We review algorithmic randomness with respect to arbitrary measures in Section 2.3. Definition 1.1 (EX learning of measures). A class C of computable measures is EX-learnable if there exists a computable learner L : 2 <ω → N such that for every µ ∈ C and every µ-random real X the limit lim n L(X ↾ n ) exists and equals an index of a measure µ ′ ∈ C such that X is µ ′ -random. [Vitanyi and Chater, 2017] introduced this notion and observed that any uniformly computable family of measures is EX-learnable. On the other hand, [Bienvenu et al., 2014] showed that the class of computable measures is not EX-learnable, and also not even BC-learnable in the following sense. 
Definition 1.2 (BC learning of measures). A class C of computable measures is BC-learnable if there exists a computable learner L : 2 <ω → N such that for every µ ∈ C and every µ-random real X there exists n 0 and µ ′ ∈ C such that for all n > n 0 the value L(X ↾ n ) is an index of µ ′ such that X is µ ′ -random. One could consider a stronger learnability condition, namely that given µ ∈ C and any µ-random X the learner identifies µ in the limit, when reading initial segments of X. Note that such a property would only be realizable in classes C where any µ, µ ′ ∈ C are effectively orthogonal, which means that the classes of µ-random and µ ′ -random reals are disjoint. 3 On the other hand we could considered a weakened notion 1 In algorithmic learning, one starts with a class of languages or functions which have a finite description and the problem is to find an algorithm (a learner) which can infer, given a sufficiently long text from any language in the given class, a description of the language or function in the form of a grammar or a program. 2 Two differences with classic algorithmic learning are: (a) the inputs on which the learner is supposed to succeed in the limit are random sequences with respect to some probability distribution in the given class C, and not elements of C; (b) there may be multiple acceptable guesses of a learner along real X, since X may be random with respect to many measures in C. 3 In this case we call C effectively orthogonal, and Definitions 1.1, 1.2 are equivalent with the versions where µ ′ is replaced by µ. of learning of a class C of computable measures, where given µ ∈ C and any µ-random X, the learner identifies some computable measure µ (possibly not in C) in the limit, with respect to which X is random, when reading initial segments of X. Definition 1.3 (Weak EX learning of measures). A class C of computable measures is weakly EX-learnable if there exists a computable learner L : 2 <ω → N such that for every µ ∈ C and every µ-random real X the limit lim n L(X ↾ n ) exists and equals an index of a computable measure µ ′ such that X is µ ′ -random. Definition 1.4 (Weak BC learning of measures). A class C of computable measures is weakly BC-learnable if there exists a computable learner L : 2 <ω → N such that for every µ ∈ C and every µ-random real X there exists n 0 and a computable measure µ ′ , such that for all n > n 0 the value L(X ↾ n ) is an index of µ ′ such that X is µ ′ -random. We note that the notions in Definitions 1.1 and 1.2 are not closed under subsets. Proposition 1.5. There exist classes C ⊆ D of measures such that D is EX-learnable and C is not even BC-learnable. Clearly C ⊆ D. If C was BC-learnable then ∅ ′′′ could be decided in ∅ ′′ , which is a contradiction. On the other hand the learner which guesses ν i on each extension of σ i is an EX-learner for D. On the other hand, the weaker notions of Definitions 1.3 and 1.4 clearly are closed under subsets. In Section 1.1 we also consider an analogue of the notion of partial learning from [Osherson et al., 1986] for measures, and prove an analogue of the classic result from the same paper that the computable reals are partially learnable. Our main results The aim of this paper is to establish a connection between the above notions of learnability of probability measures, with the corresponding classical notions of learnability of reals in the sense of [Gold, 1967]. 
To this end, we prove the following equivalence theorem, which allows to transfer positive and negative learnability results from reals to probability measures that are parametrized by reals, and vice-versa. Let M denote the Borel measures on 2 ω . Theorem 1.6 (The first equivalence theorem). Given a computable f : 2 ω → M let D ⊆ 2 ω be an effectively closed set such that for any X Y in D the measures f (X), f (Y) are effectively orthogonal. If D * ⊆ D is a class of computable reals, D * is EX-learnable if and only if f (D * ) is EX-learnable. The same is true of the BC learnability of D * . As a useful and typical example of a parametrization f of measures by reals as stated in Theorem 1.6, consider the function that maps each real X ∈ 2 ω to the Bernoulli measure with success probability the real in the unit interval [0, 1] with binary expansion X. 4 The proof of Theorem 1.6 is given in Section 3. 5 The next equivalence theorem concerns weak learnability. 6 Theorem 1.7 (The second equivalence theorem). There exists a map Z → µ Z from 2 ω to the continuous Borel measures on 2 ω , such that for every class C of computable reals, C is EX/BC learnable if and only if {µ Z | Z ∈ C} is a weakly EX/BC learnable class of computable measures, respectively. Finally we give a positive result in terms of partial learning. We say that a learner L partially succeeds on a computable measure µ if for all µ-random X there exists a j 0 such that (a) there are infinitely many n with L(X ↾ n ) = j 0 ; (b) if j j 0 then there are only finitely many n with L(X ↾ n ) = j; (c) µ j 0 is a computable measure such that X is µ j 0 -random. Theorem 1.8. There exists a computable learner which partially succeeds on all computable measures. Theorems 1.6 and 1.7 allow the transfer of learnability results from the classical theory on the reals to probability measures. Detailed background on the notions that are used in our results and their proofs is given in Section 2. Applications of our main results The equivalences in Theorems 1.6 and 1.7 have some interesting applications, some of which are stated below, deferring their proofs to Section 4. [Adleman and Blum, 1991] showed that an oracle can EX-learn all computable reals if and only if it is high, i.e. it computes a function that dominates all computable functions. Using Theorem 1.7 we may obtain the following analogue for measures. Corollary 1.9. The computable (continuous) measures are (weakly) EX-learnable with oracle A if and only if A is high. We may write EX[A] to indicate that the EX-learner is computable in A. A class C of measures is (weakly) EX * [A]-learnable for an oracle A, if there exists an EX-learner L ≤ T A for C such that for each X, the function n → L(X ↾ n ) uses finitely many queries to A. The following is an analogue of a result from [Kummer and Stephan, 1996] If we apply Theorem 1.6 we obtain an analogue of the [Adleman and Blum, 1991] [Blum and Blum, 1975] showed the so-called non-union theorem for EX-learning, namely that EX-learnability of classes of computable reals is not closed under union. We may apply our equivalence theorem in order to prove an analogue for measures. Corollary 1.12 (Non-union for measures). There are two EX-learnable classes of computable (Bernoulli) measures such that their union is not EX-learnable. in Section 3. 4. 6 For the special case where we allow measures with atoms in our classes, Theorem 1.7 has a somewhat easier proof than the one given in Section 3.5. 
One can find applications of Theorem 1.6 on various more complex results in algorithmic learning theory. As an example, we mention the characterization of low oracles for EX-learning that was obtained in Pleszkoch, 1989, Slaman andSolovay, 1991] (also see [Fortnow et al., 1994]). An oracle A is low for EX-learning of classes of computable measures, if any class of computable measures that is learnable with oracle A, is learnable without any oracle. The characterization mentioned above is that, an oracle is low for EX-learning if and only if it is 1-generic and computable from the halting problem. This argument consisted of three steps, first showing that 1-generic oracles computable from the hating problem are low for EX-learning, then that oracles that are not computable from the halting problem are not low for EX-learning, and finally that oracles that are computable from the halting problem but are not 1-generic are not low for EX-learning. The last two results can be combined with Theorem 1.6 in order to show one direction of the characterization for measures: if an oracle A is either not computable from the halting problem or not 1-generic, then there exists a class of computable (Bernoulli) measures which is not EX-learnable but which is EX-learnable with oracle A. (1) In other words, low for EX-learning oracles for measures are 1-generic and computable from the halting problem. Corollary 1.13. If an oracle is low for EX-learning for measures, then it is also low for EX-learning for reals. We do not know if the converse of Corollary 1.13 holds. [Bienvenu et al., 2014] say that a learner L EX-succeeds on a real X if lim n L(X ↾ n ) equals an index of a computable measure with respect to which X is random. Similarly, L BC-succeeds on X if there exists a measure µ such that X is µ-random, and for all sufficiently large n, the value of L(X ↾ n ) is an index of µ. The results in [Bienvenu et al., 2014[Bienvenu et al., , 2017 are of the form 'there exists (or not) a learner which succeeds on all reals that are random with respect to a computable measure'. Hence [Bienvenu et al., 2014[Bienvenu et al., , 2017 refer to the weak learnability of Definitions 1.3 and 1.4. [Bienvenu and Monin, 2012] introduced and studied layerwise learnability, in relation to uniform randomness extraction from biased coins. This notion is quite different from learnability in the sense of algorithmic learning theory, but it relates to the 'only if' direction of Theorem 1.6. Let M denote the class of Borel measures on 2 ω . 7 A class C ⊆ M of measures (not necessarily computable) is layerwise learnable if there is a computable function F : 2 ω × N → M which, given any µ ∈ C and any µ-random real X, if the µ-randomness deficiency of X is less than c then F(X, c) = µ. In other words, this notion of learnability of a class C ⊆ M requires to be able to compute (as an infinite object) any measure µ ∈ C from any µ-random real and a guarantee on the level of µ-randomness of the real. 8 As a concrete example of the difference between the two notions, consider the class of the computable Bernoulli measures which is layerwise learnable [Bienvenu and Monin, 2012] but is not (weakly) EX-learnable or even (weakly) BC-learnable by [Bienvenu et al., 2014]. Background We briefly review the background on the Cantor space 2 ω and the space of Borel measures on that is directly relevant for understanding our results and proofs. We focus on effectivity properties of these concepts and the notion of algorithmic randomness. 
This is textbook material in computable analysis and algorithmic randomness, and we have chosen a small number of references where the reader can obtain more detailed presentations that are similar in the way we use the notions here. Representations of Borel measures on the Cantor space We view 2 ω and the space M of Borel measures on 2 ω as computable metric spaces. 9 The distance between two reals is 2 −n where n is the first digit where they differ, and the basic open sets are σ =: {X ∈ 2 ω | σ X}, σ ∈ 2 <ω , where denotes the prefix relation. 10 The distance between µ, ν ∈ M is given by The basic open sets of M are the balls of the form We represent measures in M as the functions µ : 2 <ω → [0, 1] such that µ(∅) = 1 (here ∅ is the empty string) and µ(σ) = µ(σ * 0) + µ(σ * 1) for each σ ∈ 2 <ω . We often identify a measure with its representation. A measure µ is computable if the its representation is computable as a real function. There are two equivalent ways to define what an index (or description) of a computable measure is. One is to define it as a computable approximation to it with uniform modulus of convergence. For example, we could say that a partial computable measure is a c.e. set W of basic open sets (σ, I) of M, where σ ∈ 2 <ω , I is a basic open interval of [0, 1]; 12 In this case we can have a uniform enumeration (µ e ) of all partial computable measures, which could contain non-convergent approximations. Then µ e , represented by the c.e. set W e , is total and equal to some measure µ if µ ∈ [(σ, I)] for all σ, I with (σ, I) ∈ W e , and for each σ we have 9 All of the notions and facts discussed in this section are standard in computable analysis and are presented in more detail in Monin, 2012, Bienvenu et al., 2017]. More general related facts, such that the fact that for any computable metric space C the set of probability measures over C is itself a computable metric space, can be found in [Gács, 2005]. 10 If V ⊆ 2 ω then V := ∪ σ∈V σ . 11 These are the intervals (q, p), [0, q), (p, 1] for all dyadic rationals p, q ∈ (0, 1). 12 if one wishes to ensure that in case of convergence the property µ(σ) = µ(σ * 0) + µ(σ * 1) holds, we could also require that if (σ, I), inf{|I| | (σ, I) ∈ W e } = 0. Alternatively, one could consider the fact that for every computable measure µ there exists a computable measure ν which takes dyadic values on each string σ, and such that µ = Θ(ν) (i.e. the two measures are the same up to a multiplicative constant). 13 Moreover from µ one can effectively define ν, and the property µ = Θ(ν) implies that the µ-random reals are the same as the ν-random reals. This means that we may restrict our considerations to the computable measures with dyadic rational values on every string, without loss of generality. Then we can simply let (µ e ) be an effective list of all partial computable functions from 2 <ω to the dyadic rationals such that µ(σ) = µ(σ * 0) + µ(σ * 1) for each σ such that the values µ(σ), µ(σ * 0), µ(σ * 1) are defined. The two formulations are effectively equivalent, in the sense that from one we can effectively obtain the other, so we do not explicitly distinguish them. In any case an index of a computable measure µ is a number e such that µ e is total and equals µ. An important exception to this equivalence is when we consider subclasses of computable measures, such as the computable Bernoulli measures which feature in Section 4. 
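As a small illustration of the representation just described — a function on strings with µ(∅) = 1 and µ(σ) = µ(σ * 0) + µ(σ * 1) — the following sketch tabulates a dyadic-valued Bernoulli measure on short strings and checks the additivity condition; the concrete parameter and depth are illustrative assumptions only.

from fractions import Fraction

# Sketch of a measure representation mu : 2^{<omega} -> [0,1] with dyadic values,
# here the Bernoulli measure with success probability 3/4, tabulated to depth 2.
p = Fraction(3, 4)
mu = {"": Fraction(1)}
for sigma in ["", "0", "1"]:
    mu[sigma + "0"] = mu[sigma] * (1 - p)
    mu[sigma + "1"] = mu[sigma] * p

def is_valid_representation(mu):
    """Check mu("") = 1 and mu(sigma) = mu(sigma0) + mu(sigma1) wherever both children are defined."""
    if mu.get("") != 1:
        return False
    return all(mu[s] == mu[s + "0"] + mu[s + "1"]
               for s in mu if s + "0" in mu and s + "1" in mu)

print(is_valid_representation(mu))  # True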
In this case we have to use the first definition of (µ e ) above, since it is no longer true that every computable Bernoulli measure can be replaced with a computable Bernoulli measure with dyadic values which has the same random reals. Computable functions and metric spaces There is a well-established notion of a computable function f between computable metric spaces from computable analysis, e.g. see Monin, 2012, Weihrauch, 1993]. The essence of this notion is effective continuity, i.e. that for each x and a prescribed error bound ǫ for an approximation to f (x), one can compute a neighborhood radius around x such that all of the y in the neighborhood are mapped within distance ǫ from f (x). Here we only need the notion of a computable function f : 2 ω → M, which can be seen to be equivalent to the following (due to the compactness of 2 ω ). Let M * denote the basic open sets of M. Definition 2.1. A function f : 2 ω → M is computable if there exists a computable function f * : 2 <ω → M * which is monotone in the sense that σ τ implies f * (σ) ⊆ f * (τ), and such that for all Z ∈ 2 ω we have More generally, a computable metric space is a tuple (X, d x , (q i )) such that (X, d x ) is a complete separable metric space, (q i ) is a countable dense subset of X and the function ( In this way, as it is illustrated in Definition 2.1, computable functions between 2 ω , N, M and their products can be thought of as induced by monotone computable functions between the corresponding classes of basic open sets, such that the sizes of the images decrease uniformly as a function of the size of the arguments. 13 See [Juedes and Lutz, 1995]. Algorithmic randomness with respect to arbitrary measures There is a robust notion of algorithmic randomness with respect to an arbitrary measure µ on 2 ω , which was manifested in approaches by [Levin, 1976[Levin, , 1984 and [Gács, 2005] in terms of uniform tests, and in [Reimann and Slaman, 2015] in terms of representations of measures, all of which were shown to be equivalent by [Day and Miller, 2013]. In this paper we will mainly use the specific case when the measure is computable, which is part of the classic definition of [Martin-Löf, 1966]. Given a computable measure µ, a Martin-Löf µ-test is a uniformly c.e. sequence (U i ) of sets of strings (viewed as the sets of reals with prefixes the strings in the sets) such that µ( Occasionally it is useful to refer to the randomness deficiency of a real, which can be defined in many equivalent ways. 14 For example, we could define µ-deficiency to be the least i such that . Clearly Z is µ-random if and only if it has finite µ-deficiency. Randomness with respect to arbitrary measures only plays a role in Section 3.1. We define it in terms of randomness deficiency, following [Bienvenu et al., 2017]. We define the (uniform) randomness deficiency function to be the largest, up to an additive constant, function d : Given any µ ∈ M and Z ∈ 2 ω , the µ-deficiency of Z is d(Z, µ) and Z is µ-random if it has finite µ-deficiency. This definition is based on the uniform tests approach as mentioned before, and is equivalent to Martin-Löf randomness for computable measures. Moreover the deficiency notions are equivalent in the sense of footnote 14. 3 Proof of Theorem 1.6 and Theorem 1.7 We start with Theorem 1.6. Let D ⊆ 2 ω be an effectively closed set and let D * ⊆ D contain only computable reals. Also let f : 2 ω → M be a computable function such that for any X Y in D the measures f (X), f (Y) are effectively orthogonal. 
The easiest direction of Theorem 1.6 is that if D * is (EX or BC) learnable then f (D * ) is (EX or BC, respectively) learnable, and is proved in Section 3.1. 16 Sections 3.2 and 3.3 prove the 'if' direction of Theorem 1.6 for EX and BC learnability respectively, and are the more involved part of this paper. In Section 3.5 we prove Theorem 1.7. 14 Equivalent in the sense that from an upper bound of one notion with respect to a real, we can effectively obtain an upper bound on another notion with respect to the same real. 15 We can get a precise definition of d by starting with a universal enumeration W e (k) all uniform c.e. sequences of sets W(k), where each W(k) is a set of pairs (σ, I) of basic open sets of 2 ω , M respectively (viewed as basic open set of the product space 2 ω × M) with the property that for each µ is the maximum k such that (X, µ) is in the open set W e (k). 16 We stress that the effective orthogonality property of f , and hence the fact that it is injective, is used in a crucial way in the argument of Section 3.1. From learning reals to learning measures We show the 'only if' direction of Theorem 1.6, first for EX learning and then for BC learning. Let f, D, D * be as in the statement of Theorem 1.6. Since D is effectively orthogonal, given X ∈ 2 ω there exists at most one µ ∈ f (D) such that X is µ-random. By the properties of f , there is also at most one Z ∈ D such that X is f (Z)-random. Moreover for each X ∈ 2 ω , c ∈ N, the class of Z ∈ D such that X is f (Z)-random with redundancy ≤ c is a Π 0 1 (X) class P(X, c) (uniformly in X, c) which either contains a unique real, or empty. Moreover the latter case occurs if and only if there is no µ ∈ f (D) with respect to which X is µ-random with deficiency ≤ c. Now note that given a Π 0 1 (X) class P ⊆ 2 ω , by compactness the emptiness of P is a Σ 0 1 (X) event, and if P contains a unique path, this path is uniformly computable from X and an index of P. It follows that there exists a computable function h : 2 <ω → 2 <ω such that for all X which is f (Z)-random for some Z ∈ D, Indeed, on the initial segments of X, the function h will start generating the classes P(X, c) as we described above, starting with c = 0 and increasing c by 1 each time that the class at hand becomes empty. While this process is fixed on some value of c, it starts producing the initial segments of the unique path of P(X, c) (if there are more than one path, this process will stop producing longer and longer strings, reaching a finite partial limit). In the special case that X is f (Z)-random for some Z ∈ D, such a real Z ∈ D is unique, and the process will reach a limit value of c, at which point it will produce a monotone sequence of longer and longer prefixes of Z. 17 Note that since f : 2 ω → M is computable, there exists a computable g : N → N such that for each e, if e is an index of a computable Z ∈ 2 ω , then g(e) is an index of the computable measure f (Z). We are ready to define an EX-learner V for f (D * ), given an EX-learner L for D * and the functions h, g that we defined above. For each σ we let V(σ) = g (L(h(σ))). It remains to verify that for each X which is µ-random for some computable µ ∈ f (D * ), the limit lim s V(X ↾ s ) exists and equals an index for (the unique such) µ. 
By the choice of X and h we have that there exists some s 0 such that for all s > s 0 , the string h(X ↾ s ) is an initial segment of the unique Z ∈ D * such that f (Z) = µ; moreover lim s |h(X ↾ s )| = ∞ and since µ is computable and D is effectively closed, it follows that Z is computable. Hence, since L learns all reals in D * , we get that lim s L(h(X ↾ s )) exists and is an index of Z. Then by the properties of g we get that g(lim s L(h(X ↾ s ))) = lim s g(L(h(X ↾ s ))) is an index for µ. Hence lim s V(X ↾ s ) is an index of the unique computable µ ∈ f (D * ) with respect to which X is random, which concludes the proof. Finally we can verify that the same argument shows that if D * is BC-learnable, then f (D * ) is BC-learnable. The definitions of h, g remain the same. The only change is that now we assume that L is a BC-learner for D * . We define the BC-learner V for f (D * ) in the same way: V(σ) = g (L(h(σ))). As before, given X such that there exists (a unique) Z ∈ D * such that X is f (Z)-random, we get that there exists some s 0 such that for all s > s 0 , the string h(X ↾ s ) is an initial segment of the unique computable Z ∈ D * such that f (Z) = µ, and moreover lim s |h(X ↾ s )| = ∞. Since L is a BC-learner for D * , there exists some s 1 such that for all s > s 1 the integer L(h(X ↾ s )) is an index for the computable real Z. Then by the properties of g we get that for all s > s 1 , the integer g(L(h(X ↾ s ))) is an index for the computable measure f (Z). Since X is f (Z)-random, this concludes the proof of the BC clause of the 'only if' direction of Theorem 1.6. From learning measures to learning reals: the EX case We show the 'if' direction of the EX case of Theorem 1.6. Let f, D, D * be as given in the theorem and suppose that f (D * ) is EX-learnable. This means that there exists a computable learner V such that for every Z ∈ D * and every f (Z)-random X, the limit lim s V(X ↾ s ) exists and is an index of f (Z). We are going to construct a learner L for D * so that for each Z ∈ D * the limit lim s L(Z ↾ s ) exists and is an index for Z. Since D is effectively closed and f is computable and injective on D, by the compactness of 2 ω , there exists a computable g : N → N such that for each e, if e is an index of a computable µ ∈ f (D), the image g(e) is an index of the unique and computable Z ∈ D such that f (Z) = µ. (2) Hence it suffices to construct a computable function L * : 2 <ω → N with the property that for each Z ∈ D * the limit lim s L * (Z ↾ s ) exists and is an index for f (Z) ( 3) because then the function L(σ) = g(L * (σ)) will be a computable learner for D * . Note that by the properties of f * we have for each Z ∈ 2 ω , each n ∈ N and any measures µ, ν ∈ f * (Z ↾ h(n) ) we have Below we will also use the fact that there is a computable function that takes as input any basic open interval I of M and returns (an index of) a computable measure (say, as a measure representation) µ ∈ I. Proof idea. Given Z ∈ D * we have an approximation to the measure µ * = f (Z). Given µ * and V we get a majority vote on each of the levels of the full binary tree, where each string σ votes for the index V(σ) and its vote has weight µ * (σ). In search for the index of Z ∈ D * we approximate the weights of the various indices as described above, and aim to chose an index with a positive weight. If V EX-learns µ * , it follows that such an index will indeed be an index of µ * . 
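The majority-vote idea sketched above can be illustrated schematically as follows; the learner V and the measure used here are hypothetical stand-ins, and the snippet only computes the weights w e [n] that each index receives at level n, not the full construction of L *.

from collections import defaultdict

def weight_of_guesses(V, mu, n):
    """w_e[n]: total mu-mass of the strings of length n on which the learner V guesses index e."""
    weights = defaultdict(float)
    for k in range(2 ** n):
        sigma = format(k, "0{}b".format(n))
        weights[V(sigma)] += mu(sigma)
    return weights

# Stand-ins: a 'learner' that guesses index 7 unless the string starts with 00,
# and the uniform measure; with these, index 7 accumulates weight 3/4 at every level.
V = lambda sigma: 7 if not sigma.startswith("00") else 3
mu = lambda sigma: 2.0 ** -len(sigma)
print(weight_of_guesses(V, mu, 4))  # roughly {7: 0.75, 3: 0.25}

As the construction below makes precise, the guess is only switched when some other index's approximated weight is sufficiently larger than the current one, which rules out oscillation between indices of equal weight.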
One obvious way to look for such an index is at each stage to choose the index whose current approximated weight is the largest. This approach has the danger that there may be two different indices with the same weight, in which case it is possible that the said approximation lim n L * (X ↾ n ) does not converge. We deal with this minor issue by requiring a sufficient difference on the current weights for a change of guess. Construction of L * . We let L * map the empty string to index 0 and for every other string σ we define L * (σ) as follows. So it remains to define L * in steps, where at step n we define L * on all strings σ ∈ 2 h(n) . Properties of L * . It remains to show (3), so let Z ∈ D * . First we show the claimed convergence and then that the limit is an index for f (Z). Let µ * := f (Z) and for each e define Since Z ∈ D * it follows that V learns µ * . Hence the µ * -measure of all the reals X such that lim s V(X ↾ s ) exists and equals an index of a measure with respect to which X is random, is 1. If we take into account that f (D) is effectively orthogonal, it follows that the µ * -measure of all the reals X such that lim s V(X ↾ s ) exists and equals an index of µ * is 1. Hence there exists an index t of µ * = f (Z) such that w t > 0, and moreover each e with w e > 0 is an index of µ * . Lemma 3.1. For each e, lim n w e [n] = w e . Proof. Since V learns µ * , the µ * -measure of the reals on which V reaches a limit is 1. For each n let Q n be the open set of reals on which V changes value after n bits. Then Q n+1 ⊆ Q n and lim n µ * (Q n ) = µ * (∩ n Q n ) = 0. Let P e [n] be the closed set for reals X such that V(X ↾ i ) = e for all i ≥ n. Then P e [n] ⊆ P e [n + 1] for all n and w e is the µ * -measure of ∪ n P e [n]. Hence w e = lim n µ * (P e [n]). Given n 0 , for each n ≥ n 0 we have w e [n] ≤ µ * (P e [n 0 ]) + µ * (Q n 0 ). This shows that lim sup n w e [n] ≤ lim sup n µ * (P e [n]) = w e . On the other hand P e [n 0 ] ⊆ {σ ∈ 2 n | V(σ) = e)} for all n ≥ n 0 . So w e = lim n (µ * (P e [n])) ≤ lim inf n w e [n]. It follows that lim n w e [n] = w e . Now, given Z consider the sequence of computable measures µ Z↾ h(n) ∈ f * (Z ↾ h(n) ) that are defined by the function σ → µ σ applied on Z, and let From (4) we get that for each n, e, In particular, by Lemma 3.1, w e = lim n w e [n] = lim n w * e [n]. Let m be some index such that w m = max e w e . Lemma 3.2. There exists n 0 such that for all n ≥ n 0 and all e, |w e − w * e [n]| < w m /5. Proof. By (6) we have e w e = 1 and 0 < w m ≤ 1. Then there exists e 0 such that e<e 0 w e > 1 − w m /20. And as we also have for all e, lim n w * e [n] = w e , then there exists n 0 such that for all n ≥ n 0 , e<e 0 |w e − w * e [n]| < w m /20. Then for e < e 0 , it is clear that for all n ≥ n 0 , |w e − w * e [n]| < w m /5. On the other hand, we have e≥e 0 w e = 1 − e<e 0 w e < w m /20. And for all n ≥ n 0 , e≥e 0 w * e [n] = 1 − e<e 0 w * e [n] ≤ 1 − ( e<e 0 w e − w m /20) < w m /10. So for all e ≥ e 0 , 0 ≤ w e < w m /20 and 0 ≤ w * e [n] < w m /10, and thus, |w e − w * e [n]| < w m /5. Let us now fix the constant n 0 of Lemma 3.2. Lemma 3.3 (The limit exists). The value of L * (Z ↾ n ) will converge to some index i with w i > 0. Proof. Let L * (Z ↾ h(n 0 ) ) = e 0 . In case there is some n ≥ h(n 0 ) such that L * (Z ↾ n ) e 0 , there should be some n 1 ≥ n 0 such that L * (Z ↾ h(n 1 ) ) = e 1 e 0 . It then follows from the construction of L * that w * e 1 [n 1 ] ≥ w * m [n 1 ] > 4w m /5. 
Then by Lemma 3.2 for all n ≥ n 1 , w * e 1 [n] > w e 1 − w m /5 > w * e 1 [n 1 ] − 2w m /5 = 2w m /5 and on the other hand for all e, w * e [n] < w e + w m /5 ≤ 6w m /5 < 3w * e 1 [n]. This means that after step n 1 the value of L * (Z ↾ n ) will not change and thus, lim n L * (Z ↾ n ) = e 1 and w e 1 > 4w m /5 > 0. In case for all n ≥ h(n 0 ) we have L * (Z ↾ n ) = e 0 , then we only need to show that w e 0 > 0. Assume w e 0 = 0, then there will be some n 2 > n 0 such that for all n ≥ n 2 , w * e 0 [n] < w m /4. Noticed that w * m [n] > 4w m /5 > 3w * e 0 [n], by the construction of L * the value of L * (Z ↾ h(n 2 ) ) need to be changed. This is a contradiction. The above lemma together with (6) concludes the proof of (3) and the 'only if' direction of Theorem 1.6. From learning measures to learning reals: the BC case We show the 'if' direction of the BC case of Theorem 1.6. So consider f : 2 ω → M, D, D * ⊆ 2 ω as given and assume that f (D * ) is a BC-learnable class of computable measures. This means that there exists a learner V such that for all µ ∈ f (D * ) and µ-random X there exists some s 0 such that for all s > s 0 the value of V(X ↾ s ) is an index of µ. We use the expression lim n V(X ↾ n ) ≈ µ in order to denote property (8). Hence our hypothesis about V is for all µ ∈ f (D * ) and µ-random X we have lim n V(X ↾ n ) ≈ µ. Proof idea. We would like to employ some kind of majority argument as we did in Section 3.2. The problem is that now, given Z ∈ D * , there is no way to assign weight on the various indices suggested by V, in a way that this weight can be consistently approximated. The reason for this is that V is only a BClearner and at each step the index guesses along the random reals with respect to µ * = f (Z) may change. However there is a convergence in terms of the actual measures that the various indices represent, so we use a function that takes any number of indices, and as long as there is a majority with respect to the measures that these indices describe, it outputs an index of this majority measure. With this modification, the rest of the argument follows the structure of Section 3.2. The formal argument. In the following we regard each partial computable measure µ e as a c.e. set of tuples (σ, I) such that I is a basic open set of [0, 1] and µ e (σ) ∈ I (see Section 2.1). Definition 3.5 (Majority measures). Given a weighted set A and a partial computable measure µ, if the weight of A ∩ {e | µ e = µ} is more than 1/2 we say that µ is the majority partial computable measure of A. Note that there can be at most one majority partial computable measure of a weighted set. In the case that µ of Definition 3.5 is total, we call it the majority measure of A. Lemma 3.6. There is a computable function that maps any index of a weighted set A to an index of a partial computable measure µ with the property that if A has a majority partial computable measure ν then µ = ν. 21 Proof. Given a weighted set A we effectively define a partial computable measure µ and then verify its properties. We view partial computable measures as c.e. sets of tuples (σ, I) where σ ∈ 2 <ω and I is an open rational interval of [0, 1] and (σ, I) ∈ µ indicates that µ(σ) ∈ I. Define the weight of tuple (σ, I) to be the weight of {i ∈ A : (σ, I) ∈ µ i . Then define µ as the tuples (σ, I) of weight > 1/2. It remains to verify that if A has a majority partial computable measure then µ is the majority partial computable measure of A. 
If ν is the majority partial computable measure of A it is clear that for each (σ, I) ∈ ν we have (σ, I) ∈ µ. Conversely, if (σ, I) ∈ µ, there would be a subset B ⊆ A of weight > 1/2 such that (σ, I) ∈ µ i for all i ∈ B. Since ν is the majority partial computable measure of A, it follows that there is an index of ν in B (otherwise the weight of A would exceed 1). Hence (σ, I) ∈ ν, which concludes the proof. Recall the function g from (2). It suffices to show that there exists a computable function L * : 2 <ω → 2 <ω such that for each Z ∈ D * we have lim s L * (Z ↾ s ) ≈ f (Z) (10) because then the function L(σ) = g(L * (σ)) will be a computable BC-learner for D * . Definition of L * . We let L * map the empty string to index 0 and for every other string σ we define L * (σ) as follows. So it remains to define L * in steps, where at step n we define L * on all string σ ∈ 2 h(n) . Since f * (σ) is basic open interval in M so we may use (5) in order to get a computable function σ → µ σ from strings to computable measures, such that for each σ the measure µ σ belongs to f * (σ). Given n and σ ∈ 2 h(n) , for each e define wgt (e) = µ σ ({τ ∈ 2 n | V(τ) = e}). Let A n be the weighted set of all e such that wgt (e) > 0 (clearly there are at most 2 n many such numbers e) where the weight of e ∈ A n is wgt (e). Then apply the computable function of Lemma 3.6 to A n and let L * (σ) be the resulting index. Properties of L * . We show that L * satisfies (10), so let Z be a computable member of D * . Proof. Since Z ∈ D * it follows that V learns µ * , hence w = 1. It remains to show that lim n w n = w. For each n let Q n be the open set of reals X with the property that there exists some t > n such that V(X ↾ t ) is not an index of µ * . Then Q n+1 ⊆ Q n and since V learns µ * we have lim n µ * (Q n ) = µ * (∩ n Q n ) = 0. Let P n be the closed set for reals X such that for all i ≥ n the value of V(X ↾ i ) is an index of µ * . Then P n ⊆ P n+1 for all n and w is the µ * -measure of ∪ n P n . Hence w = µ * (∪ n P n ) = lim n µ * (P n ). Given n 0 , for each n ≥ n 0 we have w n ≤ µ * (P n 0 ) + µ * (Q n 0 ). This shows that lim sup n w n ≤ lim sup n µ * (P n ) = w e . On the other hand P n 0 ⊆ {σ ∈ 2 n | V(σ) is an index of µ * )} for all n ≥ n 0 . So w = lim n (µ * (P n )) ≤ lim inf n w n . It follows that lim n w n = w. Lemma 3.8. For each Z ∈ D * , there exists some n 0 such that for all n > n 0 the value of L * (Z ↾ n ) is an index of f (Z) = µ * . Proof. Given Z ∈ D * consider the definition of L * (Z ↾ h(n) ) during the various stages n, and the associated weighted sets A n . According to the construction of L * and Lemma 3.6 it suffices to show that there exists n 0 such that for all n > n 0 the weighted set A n in the definition of L * (Z ↾ h(n) ) has a majority measure which equals µ * . Consider the sequence µ Z↾ h(n) ∈ f * (Z ↾ h(n) ) of computable measures that are defined by the function σ → µ σ applied on Z, and let w * n = µ Z↾ h(n) σ ∈ 2 n | V(σ) is an index of µ * ) . From (4) we get that for each n, |w n − w * n | < 2 −n . In particular, by Lemma 3.7, lim n w n = lim n w * n = 1. For (11) it suffices to consider any n 0 such that for all n > n 0 we have w * n > 1/2. Then by the construction of L * at step n and the definition of w * n it follows that for each n > n 0 , the majority measure of the weighted set A n is µ * . Lemma 3.8 shows that L * satisfies (10), which concludes the BC case of the proof of the 'if' direction of Theorem 1.6. 
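For a finite illustration of the majority operation of Lemma 3.6, the sketch below pools the descriptions (σ, I) that are asserted by indices carrying more than half of the total weight of a weighted set; in the lemma the descriptions are c.e. sets, so this only mimics a single finite stage of that enumeration, and the example weights and tuples are hypothetical.

def majority_description(weighted_indices, descriptions):
    """Return the tuples (sigma, I) asserted by indices of total weight > 1/2.

    weighted_indices: dict index -> weight; descriptions: dict index -> set of (sigma, I) tuples.
    """
    all_tuples = set().union(*descriptions.values())
    majority = set()
    for t in all_tuples:
        weight = sum(w for e, w in weighted_indices.items() if t in descriptions[e])
        if weight > 0.5:
            majority.add(t)
    return majority

# Indices 0 and 2 describe the same measure and together carry weight 0.7,
# so exactly their common tuples survive the majority vote.
weights = {0: 0.4, 1: 0.3, 2: 0.3}
descr = {0: {("0", "(0.4,0.6)")}, 1: {("0", "(0.1,0.3)")}, 2: {("0", "(0.4,0.6)")}}
print(majority_description(weights, descr))  # {('0', '(0.4,0.6)')}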
From learning measures to learning reals: an extension There is a way in which we can relax the hypotheses of the 'if' direction of Theorem 1.6 for EX-learning, which concerns the strength of learning as well as the orthogonality hypothesis. Definition 3.9 (Partial EX-learnability of classes of computable measures). A class C of computable measures is partially EX-learnable if there exists a computable learner V : 2 <ω → N such that (a) C is weakly EX-learnable via V (recall Definition 1.3); (b) for every µ ∈ C there exists a µ-random X such that lim n V(X ↾ n ) is an index of µ. The idea behind this notion is that not only for each µ ∈ C the learner eventually guesses a correct measure (possibly outside C) along each µ-random real, but in addition every measure µ ∈ C is represented as a response of the learner along some µ-random real. Theorem 3.10 (An extension). Suppose that a computable function f : 2 ω → M is injective on an effectively closed set D ⊆ 2 ω , and D * ⊆ D is a set of computable reals. If f (D * ) is a partially EX-learnable class of computable measures then D * is an EX-learnable class of computable reals. Proof idea. We would like to follow the argument of Section 3.2, but now we have a weaker assumption which allows the possibility that given Z ∈ D * , µ * = f (Z), there are indices e with positive weight, which do not describe µ * . In order to eliminate these guesses from the approximation n → L * (Z ↾ n ) to an index of f (Z), we compare how near the candidate measures are to our current approximation to µ * . Using this approach, combined with the crucial fact (to be proved) that indices with positive weight correspond to total measures, allows us to eliminate the incorrect total measures (eventually they will be contained in basic open sets that are disjoint from the open ball f (Z ↾ n ) containing f (Z)) and correctly approximate an index of µ * . The formal argument. Recall the argument from Section 3.2 and note that (2) continues to hold under the hypotheses of Theorem 3.10. Hence it suffices to construct a computable L * : 2 <ω → N such that (3) holds. Since f (D * ) is a partially EX-learnable class of computable measures, there exists V with the properties of Definition 3.9 with respect to C := f (D * ). Lemma 3.11. Every measure µ * ∈ f (D * ) has an index e such that lim n V(X ↾ n ) = e for a positive µ *measure of reals X. Proof. Let µ * ∈ f (D * ) and consider a µ * -random X such that lim n V(X ↾ n ) is an index e of µ * . Consider the Σ 0 2 class F of reals Z with the property that lim n V(Z ↾ n ) = e. It remains to show that µ * (F ) > 0. Since F is the union of a sequence of Π 0 1 classes and X ∈ F , there exists a Π 0 1 class P ⊆ F which contains X. Since X is µ * -random, it follows that µ * (P) > 0, so µ * (F ) ≥ µ * (P) > 0. Given µ * ∈ f (D * ) define w e , w e [n] as we did in Section 3.2. Note that Lemma 3.1 still holds by the same argument, since it only uses the hypotheses we presently have about D, f, V. Lemma 3.12. For every µ * ∈ f (D * ) there exists an index e of µ * such that w e > 0. Conversely, if w e > 0 then e is an index of a computable measure µ ′ . Proof. The first claim is Lemma 3.11. For the second claim, if w e > 0 it follows from clause (a) of Definition 3.9 applied on V that e is an index of a computable measure µ ′ such that all reals in some set Q with µ * (Q) = w e > 0 are µ ′ -random. 
Let H be a partial computable predicate such that for every basic open set B of M and every e such that µ e is total, we have H(B, e) ↓ if and only if µ e B. 22 Hence if µ e is total then, ∃n H( f * (X ↾ n ), e)[n] ↓ ⇐⇒ µ e lim n f * (X ↾ n ). where the suffix '[n]' indicates the state of H after n steps of computation. Construction of L * . We let L * map the empty string to index 0 and for every other string σ we define L * (σ) as follows. So it remains to define L * in steps, where at step n we define L * on all string σ ∈ 2 h(n) . Since f * (σ) is basic open interval in M so we may use (5) in order to get a computable function σ → µ σ from strings to computable measures, such that for each σ the measure µ σ belongs to f * (σ). Properties of L * . We show that (3) holds, i.e. that for each Z ∈ D * the limit lim s L * (Z ↾ s ) exists and is an index for f (Z). Let Z ∈ D * , µ * = f (Z) and consider the sequence of computable measures µ Z↾ h(n) ∈ f * (Z ↾ h(n) ) that are defined by the function σ → µ σ applied on Z, and are used in the steps n of the definition of L * with respect to Z. Let w * e [n] = µ Z↾ h(n) {σ ∈ 2 n | V(σ) = e)} . and note that these are the weights used in the definition of L * at step n with respect to Z ↾ h(n) . Proof. The first equality is Lemma 3.1. From (4) we get that for each n, e, |w e [n] − w * e [n]| < 2 −n , which establishes the second equality. Proof. As e w e = 1 and 0 < w m ≤ 1, then there exists e 0 such that e<e 0 w e > 1 − w m /20. And as we also have for all e, lim n w * e [n] = w e , then there exists n 0 such that for all n ≥ n 0 , e<e 0 |w e − w * e [n]| < w m /20. Then for e < e 0 , it is clear that for all n ≥ n 0 , |w e − w * e [n]| < w m /5. On the other hand, we have e≥e 0 w e = 1− e<e 0 w e < w m /20. And for all n ≥ n 0 , e≥e 0 w * e [n] = 1− e<e 0 w * e [n] ≤ 1−( e<e 0 w e −w m /20) < w m /10. So for all e ≥ e 0 , 0 ≤ w e < w m /20 and 0 ≤ w * e [n] < w m /10, and thus, |w e − w * e [n]| < w m /5. If w e > 4w m /5, clearly, it must be be case that e < e 0 , and thus, there are only finitely much such index e. For every such index e, if e T , then there will be some n e such that for all n ≥ n e H e [n] ↓. Let n 1 be the largest number among these n e and n 0 , and then n 1 is the number we need. Let us now fix the constant n 0 of Lemma 3.14. Lemma 3.15 (The limit exists). The value of L * (Z ↾ n ) will converge to some index i ∈ T . Proof. Let L * (Z ↾ h(n 0 ) ) = e 0 . In case there is some n ≥ h(n 0 ) such that L * (Z ↾ n ) e 0 , then there should be some n 1 > n 0 such that L * (Z ↾ h(n 1 ) ) = e 1 e 0 . It then follows from the construction of L * that w * e ↓. This means that after step n 1 the value of L * (Z ↾ n ) will not change and thus, lim n L * (Z ↾ n ) = e 1 ∈ T . In case for all n ≥ h(n 0 ) we have L * (Z ↾ n ) = e 0 , then we only need to show that e 0 ∈ T . Assume e 0 T , then there exists some step n 2 ≥ n 0 such that H e 0 [n 2 ] ↓. As m ∈ T then for all n ≥ n 0 H m [n] ↑. By the construction of L * the value of L * (Z ↾ h(n 2 ) ) need to be changed. This is a contradiction. The above lemma concludes the proof of Theorem 3.10. Proof of Theorem 1.7 It is well known that if Z is computable and µ-random for some computable measure µ, then Z is an atom of µ and µ(Z ↾ n * Z(n))/µ(Z ↾ n ) tends to 1. Here is a generalization. Proof. We prove the contrapositive: fix computable µ, Z, and suppose that for some Y there exists a rational q ∈ (0, 1) such that for infinitely many n. 
For each t consider the set V t of the strings of the form (Z ↾ j ⊕X ↾ j ) * Z( j) for some j, X, such that j is minimal with the property that there exist at least t many n ≤ j with (13) by replacing Y with X. For each nonempty string σ, let σ − denote the largest proper prefix of σ. By the minimality of the choice of n above, we have that (a) V t is prefix-free; (b) each string τ ∈ V t+1 extends a string σ is the set of all the strings in V t+1 extending σ ∈ V t then µ(V t+1 (σ)) < q · µ(σ). It follows that µ(V t+1 ) < q · µ(V t ) so there exists a computable sequence (m j ) such that µ(V m j ) < 2 − j for each j. So (V m j ) is a µ-test and by its definition, if Y satisfies (13) for infinitely many n, then Z ⊕ Y has a prefix in V t for each t, and so in V m j for each j. Hence in this case Z ⊕ Y is not µ-random. Lemma 3.17. Given any computable Z, a real X is µ Z -random if and only if it is of the form Z ⊕ Y for some random Y with respect to the uniform measure. So Y is not random with respect to the uniform measure. Hence if Z X are computable, the measures µ Z , µ X are effectively orthogonal. Then the 'only if' direction of Theorem 1.7 follows from the 'only if' direction of Theorem 1.6 (with D := 2 ω and D * := C). The following concludes the proof of Theorem 1.7. Lemma 3.18. For each class C of computable reals, if {µ Z | Z ∈ C} is a weakly EX/BC learnable class of measures then C is EX/BC learnable. Proof. We first show the EX case. Fix C and let V be a learner which EX-succeeds on all measures in {µ Z | Z ∈ C}. It remains to construct an EX-learner L for C. Proof idea. Given a computable Z, in order to define L(Z ↾ n ) we use V on the strings Z ↾ n ⊕σ, σ ∈ 2 n and take a majority vote in order to determine Z(n). According to Lemmas 3.16 and 3.17, eventually the correct value of Z(n) will be the j such that (Z ↾ n ⊕σ) * j gets most of the measure on (Z ↾ n ⊕σ), with respect to any measure correctly guessed by V(Z ↾ n ⊕σ), for the majority of σ ∈ 2 n . Construction of L. First, define a computable g 0 : 2 <ω → N as follows, taking a majority vote via V. For each Z, n we define g 0 (Z ↾ n ) to be an index of the following partial computable real X. For each m < n we leet X(m) = Z(m). If m ≥ n, suppose inductively that it has already defined X ↾ m . In order to define X(m), it calculates the measure-indices V(X ↾ m ⊕σ) = e for all σ ∈ 2 m and waits until, for some j ∈ {0, 1}, at least 2/3 these partial computable measures µ e have the property µ e ((X ↾ n ⊕σ) * j) ↓> µ e (X ↾ n ⊕σ)/2. If and when this happens it defines X(m) = j. Fix Z ∈ C. By Lemma 3.16, if V weakly EX-learns µ Z , for all sufficiently large n the value of g 0 (Z ↾ n ) will be an index of Z (possibly different for each n). In order to produce a stable guess, define the function L : 2 <ω → N as follows. In order to define L(Z ↾ n ), consider the least n 0 ≤ n such that (i) at least proportion 2/3 of the strings σ ∈ 2 n have not changed their V-guess since n 0 , i.e. V(Z ↾ i ⊕σ ↾ i ) = V(Z ↾ n 0 ⊕σ ↾ n 0 ) for all integers i ∈ (n 0 , n]; (ii) no disagreement between Z ↾ n and the reals defined by the indices L(Z ↾ i ), i ∈ (n 0 , n) has appeared up to stage n. Then let L(Z ↾ n ) be g 0 (Z ↾ n 0 ). Given Z ∈ C we have that V weakly learns µ Z , so V(Z ↾ n ⊕Y ↾ n ) converges for almost all Y (with respect to the uniform measure). Hence in this case (i) will cease to apply for large enough n. Moreover by the properties of g 0 , clause (ii) will also cease to apply for sufficiently large n. 
Hence the n 0 in the definitions of L(Z ↾ n ) will stabilize for large enough n, and L(Z ↾ n ) will reach the limit g 0 (Z ↾ n 0 ) which is an index for Z. For the BC case, assume instead that V BC-succeeds on all measures in {µ Z | Z ∈ C}. We define g 0 exactly as above, and the BC-learner L by L(Z ↾ n ) = g 0 (Z ↾ n ). Given Z ∈ C we have that V weakly BC-learns µ Z , so for almost all Y (with respect to the uniform measure), V(Z ↾ n ⊕Y ↾ n ) eventually outputs indices of a computable measure µ (dependent on Y ↾ n ) with the property that µ((Z ↾ n ⊕Y ↾ n ) * Z(n)) > 2/3 · µ(Z ↾ n ⊕Y ↾ n ). By the definition of g 0 , this means that for sufficiently large n, the value of L(Z ↾ n ) is an index of Z. Hence L is a BC-learner for C. Define the eth randomness deficiency function by setting d e (σ) to be ⌈− log µ e (σ)⌉ − K(σ) for each string σ, where K is the prefix-free complexity of σ. Define the eth randomness deficiency on a real X as: d e (X) = sup n d e (X ↾ n ) where the supremum is taken over the n such that d e (X ↾ n ) ↓. By [Levin, 1984], if µ e is total then X is µ e -random if and only if d e (X) < ∞. At stage s, we define L(σ) for each σ of length s as follows. For the definition of L(σ) find the least i such that s is i-expansionary and d i (σ)[s] ≤ i. Then let j be the least such that p(i, j) is larger than any k-expansionary stage t < |σ| for any k < i such that d k (σ ↾ k )[t] ≤ k, and define L(σ t ) = p(i, j). Let X be a real. Note that L(X ↾ n ) = x for infinitely many n, then x = p(i, j) for some i, j, which means that µ i = µ x is total and there are infinitely many x-expansionary stages as well as infinitely many i-expansionary stages. This implies that there are at most x many y-expansionary stages t for any y < x with d y (σ ↾ y )[t] ≤ y. Moreover for each z > x there are at most finitely may n such that L(X ↾ n ) = z. Indeed, for each z if n 0 is an i-expansionary stage then L(X ↾ n ) z for all n > n 0 . Moreover if L(X ↾ n ) = x for infinitely many n, then d x (X) = d i (X) ≤ i and µ i is total, so X is µ i -random. We have shown that for each X there exists at most one x such that L(X ↾ n ) = x for infinitely many n, and in this case µ x is total and X is µ x -random. It remains to show that if X is µ-random for some computable µ, then there exists some x such that L(X ↾ n ) = x for infinitely many n. If X is µ i -random for some i such that µ i is total, let i be the least such number with the additional property that d i (X) ≤ i (which exists by the padding lemma). Also let j be the least number such that p(i, j) is larger than any stage t which is k-expansionary for any k < i with d k (σ ↾ k )[t] ≤ k. Then the construction will define L(X ↾ n ) = p(i, j) for each i-expansionary stage n after the last k-expansionary stage t for any k < i with d k (σ ↾ k )[t] ≤ k. We have shown that L partially succeeds on every µ-random X for any computable measure µ. Applications For the 'if' direction of Corollaries 1.9 and 1.11 we need the following lemma. Proof. We first show the part for the computable Bernoulli measures. The function which maps a real X ∈ 2 ω to the measure representation µ : 2 ω → R of the Bernoulli measure with success probability the real in R with binary expansion X is computable. 
Hence, given an effective list (µ e ) of all partial computable measures in M and an effective list (ϕ e ) of all partial computable reals in 2 ω , there exists a computable function g : N → N such that for each e such that ϕ e is total, µ g(e) is total and is the measure representation of the Bernoulli measure with success probability the real with binary expansion ϕ e . . We define an A-computable learner V as follows: for each σ let V(σ) be g(e) for the least index e ≤ |σ| which minimizes cost(e, σ) subject to the condition h(e, |σ|) = 1. It remains to show that for each X ∈ 2 ω which is random with respect to a computable Bernoulli measure µ, lim n V(X ↾ n ) exists and equals an index of µ. According to our working assumption about X, there exist numbers e such that ϕ e is total and sup n cost(e, X ↾ n ) < ∞. These numbers e are the indices of reals in 2 ω which are the binary representations of the success probability of the Bernoulli measure with respect to which X is random. Now consider the least e with this property, and which minimizes sup n cost(e, X ↾ n ). Note that, by the definition of cost(e, σ), for each k, σ there are only finitely many e such that cost(e, σ) < k. It follows by the construction of V that lim n V(X ↾ n ) = e. The proof for the class of all computable measures is the same as above, except that we take g to be the identity function. For the other direction of Corollary 1.9, let C be the class of all computable reals, and assume that the computable measures are weakly EX[A]-learnable. Then {µ Z | Z ∈ C} is also weakly EX[A]-learnable, and by Theorem 1.7 we get that C is EX[A]-learnable. Then by [Adleman and Blum, 1991] it follows that A is high. Applying Theorem 1.6 to classes of Bernoulli measures Perhaps the most natural parametrization of measures on 2 ω by reals is the following. Definition 4.2. Consider the function f b : 2 ω → M mapping each X ∈ 2 ω to the Bernoulli measure with success probability the real whose binary expansion is X. Clearly f b is computable, but it is not injective since dyadic reals have two different binary expansions. In order to mitigate this inconvenience, we consider the following transformation. Lemma 4.6. If A ≤ T B ′ then every class of computable measures which is EX-learnable by A with finitely many queries, is also EX-learnable by B. Proof. This is entirely similar to the analogous result for EX-learning of classes computable reals from [Fortnow et al., 1994]. By A ≤ T B ′ one can obtain a B-computable function that approximates A. Given an A-computable learner and replacing the oracle with the approximation given by B, the resulting learner will converge along every real on which the original learner converges and uses finitely many queries on A. Moreover in this case, the limit will agree with the limit with respect to the original A-computable learner. This shows that any class that is EX-learnable via the A-computable learner will also be EX-learned by the new B-computable learner. Now given an oracle A, by the jump-inversion theorem, since ∅ ′ ≤ T A ⊕ ∅ ′ , there exists some B such that B ′ ≡ T A ⊕ ∅ ′ . So A ≤ T B ′ . By Lemma 4.6, if the computable measures are EX-learnable with oracle A and finitely many queries, then it will also be EX-learnable by B. Then by Corollary 1.9 it follows that B is high, so B ′ ≥ T ∅ ′′ and ∅ ′′ ≤ T A ⊕ ∅ ′ as required. Conversely, assume that ∅ ′′ ≤ T A⊕∅ ′ . 
Let (µ e ) be a universal enumeration of all partial computable measure representations with dyadic values and note that by the discussion of Section 2.1 it is sufficient to restrict our attention these measures, which may not include some measures with non-dyadic values. By Jocksuch [Jockusch, 1972] there exists a function h ≤ T A such that (µ h(e) ) is a universal enumeration of all total computable measure representations with dyadic values. The fact that uniformly computable families of measures are EX-learnable (originally from Vinanyi and Chater [Vitanyi and Chater, 2017]) relativizes to any oracle. Since (µ h(e) ) contains all computable measure representations with dyadic values, it follows that the class of all computable measures is EX-learnable with oracle A. Conclusion and open questions We have presented tools which allow to transfer many of the results of the theory of learning of integer functions or reals based on [Gold, 1967], to the theory of learning of probability distributions which was recently introduced in [Vitanyi and Chater, 2017] and studied in [Bienvenu et al., 2014[Bienvenu et al., , 2017. We demonstrated the usefulness of this result with numerous corollaries that provide parallels between the two learning theories. We also identified some differences; we found that although in the special case of effectively orthogonal classes, the notions of Definitions 1.1 and 1.2 are closed under the subset relation, in general they are not so. 24 We showed that the oracles needed for the EX-learning of the computable measures are exactly the oracles needed for the EX-learning of the computable reals, which are the high oracles. In the classic theory there exists no succinct characterization of the oracles that BC-learn the computable functions. On the other hand, Theorem 1.7 shows that if an oracle can BC-learn the class of computable (continuous) measures, then it can also BC-learn the class of computable functions. Open problem. If an oracle can BC-learn the class of computable functions, is it necessarily the case that it can learn the class of computable (continuous) measures? Another issue discussed is the low for EX-learning oracles for learning of measures. We showed that every such oracle is also low for EX-learning in the classical learning theory of reals. We do not know if the converse holds.
Interactive comment on “ The Irminger Sea and the Iceland Sea time series measurements of sea water carbon and nutrient chemistry 1983 – 2006 ” The Irminger Sea and the Iceland Sea time series measurements of sea water carbon and nutrient chemistry 1983–2006 J. Olafsson, S. R. Olafsdottir, A. Benoit-Cattin, and T. Takahashi Marine Research Institute, Skulagata 4, IS 121 Reykjavik, Iceland Institute of Earth Sciences, University of Iceland, Sturlugata 7, IS 101 Reykjavik, Iceland Lamont-Doherty Earth Institute, Palisades, NY 10964, USA Received: 23 September 2009 – Accepted: 28 September 2009 – Published: 16 October 2009 Correspondence to: J. Olafsson (jon@hafro.is) Published by Copernicus Publications. For a complete list of all parameters available in CARINA see Key et al. (2009).Note the different names for the parameters in the Exchange files (the individual cruise files) and the merged data product. Introduction In 1983 a study of the seasonal variability of carbon-nutrient chemistry was initiated off the Iceland shelf in two hydrographically different regions of the northern North Atlantic (Takahashi et al., 1985;Peng et al., 1987).One station was in the northern Irminger Sea (IRM-TS) with relatively warm and saline (S > 35) Modified North Atlantic Water derived from the North Atlantic Drift.This location may also be described as representing the sub-polar gyre (Hátún et al., 2005b).The other station was in the Iceland Sea (IS-TS) where cold Arctic Intermediate Water, formed from Atlantic Water and low salinity Polar Water usually predominates but the Polar Water influence in the surface layers is variable (Stefánsson, 1962;Hansen and Østerhus, 2000).Both stations are thus in regions important as sources for North Atlantic Deep Water (NADW).The original seasonal variability study was expanded in time and the sampling incorporated into the quarterly cruises of the repeat hydrography network of the Marine Research Institute (MRI) in Reykjavik, Iceland.Quarterly sampling is insufficient to adequately describe annual biochemical processes in these waters.It has, however, been estimated, with respect to hydrographic variability in sub-arctic waters of the N-Atlantic, that 4 observations/year are sufficient to record decadal variability (Hátún et al., 2005a).The Iceland Sea time series data has recently been evaluated to describe the rate of seawater acidification, at surface and deep levels (Olafsson et al., 2009).The time series observations have been carried out under the EC projects ESOP-2, TRACTOR and currently CARBOOCEAN and EPOCA. Data provenance The CARINA database includes data and metadata from 188 oceanographic cruises/campaigns, of which five entries consist of multiple cruises (Key et al., 2009).The IS and IRM time series contribute to the CARINA data.However, since these stations are in relatively shallow locations compared to the CARINA secondary quality control criteria, depth > 1500 m and considering also the high temporal variability, IRM-TS could not be included in the secondary QC.Data from the IS-TS is however included in the secondary QC for TCO 2 and nutrients as described in Olsen (2009) and Olafsson and Olsen (2010).Here we describe the methods and quality control procedures applied in gathering the two time series data sets.The IRM-TS is included in the CARINA-ATL region (Tanhua et al., 2010) and the IS-TS is included in the CARINA-AMS region (Olsen et al., 2009). 
The repeat hydrography network of the Marine Research Institute (MRI) is carried out in quarterly cruises conducted generally in February, May, August and November each year.The time series stations are located at 64.33 • N, 28.0 • W (IRM-TS) where the depth is 1000 m and the IS-TS station is at 68.0 • N, 12.67 • W where the depth is 1850 m (Fig. 1).When it has not been possible to reach the time series location but observations have been available at a nearby station in the region, these have been incorporated into the time series.The nearby stations are generally within 70 km from the time series location except in 1983-1984 when the initial Iceland Sea sampling was 250 km to the west of the fixed location.Samples from all collection depths have been taken for salinity, dissolved oxygen and inorganic nutrients.From 1983 to 1991 only surface samples for pCO 2 and TCO 2 were collected.Water column sampling for TCO 2 started in 1991 and for pCO 2 in 1993. Hydrography From 1983 to the end of 1989 the station water sampling was conducted with TPN-Nansen water bottles, from HYDROBIOS GmbH, on a hydrowire. They were fitted with reversing mercury thermometers.From the beginning of 1990 the station work has been conducted using SEA-BIRD Conductivity-Temperature-Depth (CTD) profiling instruments and water bottles on a rosette.Sample salinity measurements were carried out using Guildline Autosal Model 8400 salinometers. Dissolved oxygen Dissolved oxygen has been determined throughout the time series by an in-bottle microburette Winkler titration and visual end point detection (Carpenter, 1965).The sample bottles are volume calibrated Quickfit brand Erlenmeyer type of 50 ml nominal volume. Inorganic nutrients Samples for the determinations of the phosphate, nitrate (nitrate+nitrite) and silicate concentrations have been collected in 250 ml soft low density polyethylene bottles washed with dilute hydrochloric acid prior to each cruise.They are kept refrigerated if the analysis is carried onboard, as is common in the spring, but frozen for analysis ashore as is common in the other seasons.In spring and summer, samples from the surface layer, 0-60 m, are syringe filtered through a 0.45 µm Whatman PURADISC syringe filter to avoid turbidity blank effects, particularly on phosphate.Samples from deeper water are not filtered.Prior to 1987 a single channel Technicon AutoAnalyzer II was used for nitrate and silicate and a manual method for phosphate (Murphy and Riley, 1962).A Chemlab three channel autoanalyzer has been used since 1987, set up for determinations of dissolved phosphate, nitrate and silicate.The methods were those described by Grasshoff (Grasshoff, 1970) except for phosphate were a modified version of the Murphy and Riley method was automated (Murphy and Riley, 1962).A series of 5 working standards is prepared with each batch of samples and the response fitted to concentration with a 3rd order polynomial regression. 
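The calibration step described above — fitting the autoanalyzer response of the 5 working standards to concentration with a 3rd order polynomial — amounts to the following calculation; the standard concentrations and responses below are invented placeholders, not MRI calibration data.

import numpy as np

# Hypothetical working standards: known concentrations (umol/l) and detector responses.
conc = np.array([0.0, 5.0, 10.0, 20.0, 30.0])          # example nitrate standards
resp = np.array([0.002, 0.151, 0.305, 0.600, 0.912])   # example absorbance readings

# 3rd-order polynomial calibration, expressed as concentration as a function of response.
coeffs = np.polyfit(resp, conc, deg=3)
calibration = np.poly1d(coeffs)

# Convert a batch of sample responses to concentrations.
sample_resp = np.array([0.210, 0.480])
print(calibration(sample_resp))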
In addition, a laboratory reference material, LRM (Aminot and Kérouel, 1998), is regularly produced, assessed and compared to QUASIMEME materials. The LRM has nutrient concentrations within the range found for the regional seawater, and samples of the LRM are run with each sample batch. Results from these measurements are indicative of the precision and accuracy of the daily procedures. The LRM results are generally within accepted limits, ±0.2 µmol/l for nitrate and silicate and ±0.03 µmol/l for phosphate. Corrections based on the LRM results rarely need to be applied, but the results do occasionally give cause for re-running samples.

Partial pressure of carbon dioxide in seawater

From 1983 to spring 1993, samples for pCO 2 were collected onboard by recirculating 500 ml of marine air in a closed system through a bubbler immersed in a 4 l surface seawater sample. The equilibrated gas was isolated and sealed in a 300 ml glass flask equipped with a stopcock at each end and shipped for analysis at Lamont-Doherty Earth Observatory (LDEO) by means of gas chromatography. Air samples were also collected in 300 ml glass flasks by suction using a hand pump and were analysed the same way. The gas chromatograph was calibrated using air-CO 2 gas mixtures which had been analysed by C. D. Keeling of the Scripps Institution of Oceanography. The procedure changed after mid-year 1993. Thereafter, 500 ml seawater samples for pCO 2 were brought back to MRI in screw-capped Pyrex bottles, inoculated with saturated HgCl 2 solution (400 microliters) and kept in dark cold storage until analyzed. Their pCO 2 values were determined at a known temperature and pressure using the bubble-type equilibrator system coupled with a gas chromatograph (Chipman et al., 1993), which was calibrated with three air-CO 2 mixtures tied to the Keeling standards. The pCO 2 was measured in the stored sea water samples generally within 14 days after the samples were collected at sea. The pCO 2 data are not included in the merged CARINA data products, but are included in the individual cruise file available at http://cdiac.ornl.gov/oceans/CARINA/Carina inv.html.

Storage experiment of pCO 2 samples

The effects of sample storage were evaluated at MRI by collection of two sets of 12 paired samples during the spring bloom, on 2 June 1998, in the Iceland Sea at 68 • N, 18.8 • W. One set was from the surface (t = 1.83 • C) and the other from 200 m (t = 1.05 • C), well below the euphotic layer. Duplicate samples were analysed ashore over the period 3 to 23 June (Fig. 3). The average concentration for the storage experiment was 163.8 µatm at 5 m depth (s.d. 0.6 µatm, n = 6) and 383.4 µatm at 200 m depth (s.d. 1.7 µatm, n = 6). The changes with time of pCO 2 in the surface samples and the 200 m samples were −0.006 ± 0.05 µatm d −1 and −0.06 ± 0.08 µatm d −1 respectively, and were found to be very small. This indicates that sample storage of up to three weeks has an insignificant influence on pCO 2 and that the overall precision of the pCO 2 determinations is better than ± 2 µatm.
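The storage-drift figures quoted above (e.g. −0.006 ± 0.05 µatm d −1 for the surface set) are least-squares slopes of pCO 2 against storage time; the sketch below shows that calculation with invented placeholder values rather than the 1998 experiment's measurements.

import numpy as np

# Placeholder storage-experiment data: days after collection and measured pCO2 (uatm).
days = np.array([1, 4, 7, 10, 14, 21], dtype=float)
pco2 = np.array([163.9, 163.2, 164.5, 163.4, 164.1, 163.7])   # invented values near 163.8

slope, intercept = np.polyfit(days, pco2, deg=1)
residuals = pco2 - (slope * days + intercept)
# Standard error of the slope from the residual variance.
se_slope = np.sqrt(np.sum(residuals**2) / (len(days) - 2) / np.sum((days - days.mean())**2))

print("drift = {:+.3f} +/- {:.3f} uatm per day".format(slope, se_slope))
print("mean pCO2 = {:.1f} uatm, s.d. = {:.1f}".format(pco2.mean(), pco2.std(ddof=1)))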
Dissolved inorganic carbon

Total dissolved inorganic carbon (TCO 2 ) has been determined in HgCl 2 -preserved samples by coulometry (Chipman et al., 1993), using the Coulometrics CM-5010 instrument at LDEO. HgCl 2 was found to be an effective preservation agent for TCO 2 in seawater samples in tightly sealed bottles stored up to 7 months (Takahashi et al., 1970). Prior to 1991 the samples were analysed at LDEO, where the coulometer was calibrated using three different methods: a) with weighed quantities of clear Iceland spar (CaCO 3 ), b) with weighed quantities of heat-treated Na 2 CO 3 , and c) volumetrically determined CO 2 gas (Takahashi et al., 1985). These calibrations were consistent with Keeling's manometric values, as demonstrated by Wong (1970) using Keeling's manometric system. The overall precision in the period 1983 to 1990 is estimated as ± 4 µmol kg −1 (Takahashi et al., 1985). As demonstrated extensively through the WOCE Program, the gas loop calibration method yielded TCO 2 values consistent with those in the Dickson reference waters, which were tied to Keeling's manometric system (Rubin et al., 1998; Takahashi et al., 2006). From 1991 the coulometric determinations (using an improved Coulometrics Model CM-5011) were performed at MRI, where the coulometer was calibrated using 99.998% CO 2 gas in fixed-volume gas loops at known pressure and temperature (Chipman et al., 1993).

The Iceland Sea time series TCO 2 data were assessed as a part of the Nordic Seas CARINA data (Olsen, 2009). They were not included in the crossover and inversion analyses, but it was concluded on the basis of nitrate-TCO 2 relations that the TCO 2 data "appears reasonable" (Olsen, 2009).

TCO 2 quality control

Accuracy of the TCO 2 determinations at MRI has been maintained since 1992 by comparison of results with sea water Certified Reference Material (CRM) calibrated and supplied by Dr. Andrew G. Dickson of the Scripps Institution of Oceanography. The evaluation of the CRM results indicated a systematic error of about −4.7 ± 2.0 µmol C kg −1 in 212 CRM determinations. In 1999 the coulometer system gas loop volumes were redetermined by gravimetry and the systematic error was ascribed to errors in the gas loop volumes. A correction multiplier of 1.0029 was hence applied to all results from analyses in the 1991-1998 period. The CRM results representing this period are shown in Fig. 4. Due to instrument problems there were no TCO 2 analyses of samples or CRMs in 1999 and 2000. Samples collected in this period were analysed in 2001 and 2002. The samples were all spiked with HgCl 2 and stored refrigerated.

The precision of the TCO 2 determinations at MRI is estimated from the standard deviation of the analysis of individual CRM batches. It ranges from 1.4 to 2.3 µmol kg −1 for the period 1993 to 2008, based on 382 CRM determinations in 13 batches. The average is 1.85 µmol kg −1 , which rounds to ± 2 µmol kg −1 . Since all the time series TCO 2 data are tied to CRMs, as described above, we estimate the overall accuracy as ± 2 µmol kg −1 , the uncertainty in the CRM determinations.
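The quality-control arithmetic in this section — the mean offset of measured TCO 2 from the certified CRM value, and the multiplicative gas-loop correction of 1.0029 applied to the 1991-1998 results — can be summarised as follows; the certified value and determinations below are placeholders, not the actual MRI or CRM numbers.

import numpy as np

# Placeholder CRM comparison: measured TCO2 (umol/kg) versus the certified batch value.
certified = 2023.5                                      # example CRM batch value
measured = np.array([2017.4, 2017.9, 2017.2, 2017.8])   # example determinations

offset = measured - certified
print("mean offset = {:+.1f} umol/kg, s.d. = {:.1f}".format(offset.mean(), offset.std(ddof=1)))

# Gas-loop recalibration: the 1991-1998 results were rescaled by the factor 1.0029
# determined gravimetrically in 1999, which largely removes the systematic offset.
correction = 1.0029
corrected = measured * correction
print("corrected mean offset = {:+.1f} umol/kg".format((corrected - certified).mean()))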
Summary

The ways and means of assembling and quality controlling the Irminger Sea and Iceland Sea time-series biogeochemical data are described. These stations are in relatively shallow, but oceanographically important, locations. Compared with the CARINA criterion of depth > 1500 m, the IRM-TS could not be included in the secondary QC, and the IS-TS only in a limited way. However, with the information provided here, the quality of the data can be assessed, e.g. on the basis of the results obtained with the use of reference materials.

Data access

The whole CARINA database is published at http://cdiac.ornl.gov/oceans/CARINA/Carinainv.html. It contains 188 individual cruise files in comma-separated WHPO exchange format. Condensed metadata are contained in the header of each data file. In addition, the CARINA database contains three merged, comma-separated data files with the data products. These files are divided into the three geographical regions of CARINA. No special software is needed to access the data, but software for MATLAB users is offered to facilitate reading of the data.

Figure 1. Locations of the Irminger Sea and the Iceland Sea time series stations inserted on the N-Atlantic surface current chart of Hansen and Østerhus (2000).

Figure 2. Average MRI-IS differences of (a) nitrate, (b) phosphate and (c) silicate concentrations (in µmol/l) from assigned values in QUASIMEME sea water test materials 1993-2008 (silicate from 1996). The overall long-term mean deviations are shown with black dashed lines and their standard deviations are shown with a red dotted line.

Figure 3. Results of the pCO2 sample storage experiment. Average pCO2 (µatm at 4 °C) in (a) samples from 5 m depth and (b) samples from 200 m depth. The average concentrations for the storage experiment are shown with a black dashed line and their standard deviations are shown with red dotted lines.

Figure 4. Results of MRI TCO2 determinations in Dickson's sea water reference materials (CRMs) in two periods, 1992-1998 and 2001-2008. The average differences (MRI-CRM) are shown as black dashed lines and their standard deviations are shown with red dotted lines.
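For readers who want to work with the cruise files directly, a rough reading sketch is given below. The exact header layout of the exchange-format files (comment lines, units row, missing-value flag) should be checked against an actual file; the parsing choices here are assumptions, and the file name is hypothetical.

```python
import pandas as pd

def read_carina_cruise(path):
    """Read a comma-separated CARINA cruise file into a DataFrame.

    Assumptions to verify against the real file: metadata/comment lines start
    with '#', the first remaining row holds the column names, and missing
    values are flagged with -999.
    """
    return pd.read_csv(path, comment="#", na_values=[-999, "-999"])

# Hypothetical file name:
# df = read_carina_cruise("iceland_sea_timeseries.csv")
# print(df.columns.tolist())
```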
2018-12-12T06:29:08.209Z
2009-10-16T00:00:00.000
{ "year": 2009, "sha1": "6aa4952ada7fd49a7fb94f0e996710fe81fa5206", "oa_license": "CCBY", "oa_url": "https://essd.copernicus.org/articles/2/99/2010/essd-2-99-2010.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParseMerged", "pdf_hash": "981f1ff4a56647bf75468aeaec8472fe5c2b6a3e", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geology" ] }
233424365
pes2o/s2orc
v3-fos-license
The Comparison of In Vitro Photosensitizing Efficacy of Curcumin-Loaded Liposomes Following Photodynamic Therapy on Melanoma MUG-Mel2, Squamous Cell Carcinoma SCC-25, and Normal Keratinocyte HaCaT Cells The research focused on the investigation of curcumin encapsulated in hydrogenated soy phosphatidylcholine liposomes and its increased photoactive properties in photodynamic therapy (PDT). The goal of this study was two-fold: to emphasize the role of a natural photoactive plant-based derivative in the liposomal formulation as an easily bioavailable, alternative photosensitizer (PS) for the use in PDT of skin malignancies. Furthermore, the goal includes to prove the decreased cytotoxicity of phototoxic agents loaded in liposomes toward normal skin cells. Research was conducted on melanoma (MugMel2), squamous cell carcinoma (SCC-25), and normal human keratinocytes (HaCaT) cell lines. The assessment of viability with MTT (3-(4,5-dimethylthiazolyl-2)-2,5-diphenyltetrazolium bromide) evaluated cell death after exposure to blue light irradiation after 4 h of pre-incubation with free and encapsulated curcumin. Additionally, the wound healing assay, flow cytometry, and immunocytochemistry to detect apoptosis were performed. The malignant cells revealed increased phototoxicity after the therapy in comparison to normal cells. Moreover, liposome curcumin-based photodynamic therapy showed an increased ratio of apoptotic and necrotic cells. The study also demonstrated that nanocurcumin significantly decreased malignant cell motility following PDT treatment. Acquired results suggest that liposomal formulation of a poor soluble natural compound may improve photosensitizing properties of curcumin-mediated PDT treatment in skin cancers and reduce toxicity in normal keratinocytes. Introduction Skin cancers are among the most widespread types of neoplasm, affecting people from less pigmented, Caucasian populations, usually more than 50 years of age [1]. Those malignancies are divided into two main subgroups composed of more lethal melanoma and more prevalent non-melanoma skin cancers (NMSC). Currently, non-melanoma skin cancers are generally represented by tumours from transformed keratinocytes: cutaneous squamous cell carcinoma (cSCC) and basal cell carcinoma (BCC). There are also other skin-related neoplasms, including Kaposi's sarcoma, Merkel cell and adnexal carcinoma, cutaneous lymphoma, or dermatofibrosarcoma protuberans. Nevertheless, those conditions are not as common as NMSCs [2,3]. Like most tumours, skin cancer treatment involves widely used methods, such as surgical intervention, radiotherapy, chemotherapy, and immunotherapy. However, despite the effectiveness of the former, it cannot be applied in all cases due to adverse malignancy's localization and potential patient's co-morbidities [2]. Moreover, radiotherapy and chemotherapy can not only be toxic and lead to side effects but also might be ineffective in cells with developed resistance [4], whereas immunotherapy currently seems too complex for general use [5]. Therefore, further development of more efficient therapeutic strategies is still required. One of the novel approaches that could potentially overcome those obstacles is the usage of photodynamic therapy (PDT). The main principle of this method is to apply and accumulate the chosen substance with photosensitive properties (called further photosensitizer (PS)) within tumour tissue. 
Later, after local irradiation with a specific wavelength laser, excited PS can transform surrounding molecules into highly active reactive oxygen forms (ROS). Depending on the mechanisms, excited PS may transfer electrons on organic compounds (via type I reaction), creating radicals such as hydrogen peroxide, or transmits its energy on molecular oxygen by developing a singlet oxygen ( 1 O 2 ) (via type II reaction) [6]. Accumulated ROS can damage plenty of biomolecules, including lipids, proteins, DNA, and carbohydrates. However, due to the limited diffusing capabilities of radicals, their damaging properties rely on PS. Whether PS exhibits a higher affinity to concentrate closely to mitochondria's membrane or enzymes, radicals' activities may have a different impact on cells. ROS and singlet oxygen from an activated photosensitizer can induce cell death, mainly by damaging lipids of plasma and organelles membranes, triggering caspases cascade, and inactivating anti-apoptotic proteins. Depending on the efficiency of PS, photokilling can occur by rough conditions of necrosis or (preferably) in milder cases apoptosis or/and autophagy [7][8][9]. Photodynamic therapy is gaining growing interest due to its low invasiveness, high selectivity, and comparable lower costs to other treatments. Nonetheless, presently, PDT is not applicable in the treatment of metastatic cells [10]. The most common PS evoke a low therapeutic effect against highly pigmented melanoma cells [5], and the method itself can be burdened with pain, especially in combination with commonly used 5-aminolevulinic acid (5-ALA) [4]. For all those reasons, PDT enhancement is currently extensively investigated, especially with novel, high-efficient, and less toxic photosensitizers. Curcumin is a natural polyphenol extracted from turmeric (Curcuma longa), with well-documented anti-tumour, anti-inflammatory, and photoactive properties [11][12][13]. This golden polyphenol has already been used for its anti-inflammatory effects, as a treatment in various dermatological conditions [14]. Due to its exceptional attributes, this plant-derived substance could potentially play a dualistic role in PDT functioning simultaneously as PS and a direct therapeutic molecule. Experiments conducted on animal models and in vitro suggest that curcumin can downregulate various molecular responses in boosting up inflammatory and pro-survival pathways, such as those related to transcription factors like Nf-κB or AP-1 [15,16]. Thus, curcumin could potentially not only increase the chances of apoptosis in defective cells but also stimulate the production of cell killing radicals, making it a promising compound to use in PDT therapies. Among all the previously mentioned benefits, the extremely poor water solubility and low bioavailability of this natural plant derivative limit its clinical use in cancer treatment. Moreover, basic skin properties made it an excellent barrier decreasing percutaneous penetration of curcumin. For this reason, the development of stable formulations of drug carriers that improve skin penetration and therapeutic effectiveness with reduced side effects is an essential challenge for many researchers [17]. Nowadays, various nanocarriers, which could greatly enhance the bioavailability of drugs, are under intensive development and some of them had already been functionalized for active targeting of skin cancers, including those based on gels or liposomes that are modified with aptamers [18,19]. 
According to several studies, nano-formulations of photosensitizers improve the pharmacokinetic effects and therapeutic advantage of free compounds [20][21][22]. Besides that, lipid formulations are proposed as an alternative strategy to potentiate the effect of PDT against resistant melanoma cells [23]. Liposomes are a versatile drug delivery system. They are not toxic and (when pegylated) exhibit longer circulation time among all drug carriers. Liposomes can encapsulate hydrophilic, hydrophobic, and amphiphilic molecules. Liposomes have many advantages, such as controlled release properties, cell affinity, tissue compatibility, reducing drug toxicity, and improving drug stability. As most drug carriers, liposomes can accumulate in inflammatory tissues by using the enhanced permeability and retention (EPR) effect. This accumulation can be further increased by decreasing particle size as well as pretreatment by some drugs and substances [24][25][26][27]. In general, at least in the animal model, an essential increase of drug concentration is observed in the tumour tissue when liposomal drugs are applied. In the case of curcumin, which is a hydrophobic, the use of liposomes may diminish issues with its low solubility and bioavailability, enhancing pharmacokinetics and accumulation in cancer tissues [16,20,21]. In this study, a relatively novel curcumin formulation has been used, in which curcumin is encapsulated in liposomes composed from hydrogenated soy phosphatidylcholine (HSPC), which exhibited high stability, due to a relatively rigid liposomes' bilayer and, therefore, low curcumin diffusion properties. This formulation proved its superiority in comparison to a free substance on pancreatic cancer cell lines and can be regarded as an essential improvement of the traditional route of curcumin supply [28][29][30][31]. Herein, a comparison of the phototoxic and anti-cancerous effects of curcumin and its stable HSPC liposomal formulation on skin cancer cell lines was conducted, including SCC-25 representing cutaneous squamous cell carcinoma, MUG-Mel2 representing a melanoma cell line, and normal human keratinocytes HaCaT representing control normal skin cells ( Figure 1). To evaluate the effects of encapsulated curcumin as a photosensitizer in PDT on different skin cell lines, MTT (3-(4,5-dimethylthiazolyl-2)-2,5-diphenyltetrazolium bromide) dark cytotoxicity, phototoxicity assay, immunocytochemical staining against markers of apoptosis, Bcl-2, and Bax, measuring apoptosis with flow cytometry and a wound-healing assay, were performed. The effect of curcumin and liposome-curcumin-based PDT was performed on skin cancer cell MUG-Mel2 (melanoma cells), SCC-25 (squamous cell carcinoma), and normal keratinocyte cells HaCaT. Effectiveness of free and encapsulated curcumin was compared in doses of 5 and 10 µM after blue light low irradiation (2.5 J/cm 2 ). Results indicated that liposome curcumin-mediated PDT caused a significantly higher reduction of viability in both cancer cell lines than a free natural compound. Curcumin mediated-PDT in 10 µM concentration caused decreased viability in SCC-25 (34%) and MUG-Mel2 (27%). Liposomes with curcumin mediated-PDT inhibited cancer cells' growth more than a free compound after irradiation reaching IC50. Liposomal-curcumin-PDT exhibit cytotoxicity of 53% in MUG-Mel2 and 58% in SCC-25 at the same dose-10 µM and low irradiation dose (2.5 J/cm 2 ) while the viability of HaCaT was decreased only by 11%. 
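The IC50 values referred to above are conventionally obtained by fitting a dose-response curve to the MTT viability data. The sketch below fits a simple two-parameter log-logistic (Hill-type) model with SciPy; the doses and viabilities are placeholders, not the measurements reported in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, ic50, slope):
    """Log-logistic dose-response: 100% viability at dose 0, falling toward 0% at high dose."""
    return 100.0 / (1.0 + (dose / ic50) ** slope)

# Placeholder viability data (%) at increasing photosensitizer doses (µM).
doses = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
viability = np.array([95.0, 88.0, 70.0, 47.0, 20.0])

(ic50, slope), _ = curve_fit(hill, doses, viability, p0=[8.0, 1.0])
print(f"Estimated IC50 = {ic50:.1f} µM (Hill slope {slope:.2f})")
```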
Interestingly, HaCaT cells maintained viability of around 90% after different treatments. Liposomal curcumin in the concentration of 10 µM was chosen in all subsequent biological studies ( Figure 2). The Effect of Liposomal Curcumin Based PDT on MUG-Mel2, SCC-25, and HaCaT Cells in the Wound-Healing Process To check whether liposomal curcumin-based PDT decreases HaCaT, SCC-25, and MUG-Mel2 cells' motility, the wound healing test was performed. The assay shows the migration of cells by evaluating a primaeval scratch's closure in a 24 h observation. The results show that PDT with liposomal curcumin caused the strongest effect of migration properties in MUG-Mel2 cancer cells. After 24 h from the treatment (liposomal curcumin and the light), there was no migration observed. In SCC-25 cells, the wound was minimally closed, whereas, in normal HaCaT cells, the wound closed almost entirely within 24 h of incubation after therapy. The results are presented in Figure 3. The Effect of Liposomal Curcumin-Based PDT on MUG-Mel2, SCC-25, and HaCaT Cells on Bax and Bcl-2 Expression Immunocytochemical staining allows examining whether the proposed therapy with liposomal curcumin and irradiation has a cytotoxic effect on cancer cells. To assess whether the treatment causes apoptosis in cancer cells, apoptosis-related proteins bax and bcl-2 were used for the immunocytochemical analysis and then evaluation of immunoreactivity was performed. An increase in the expression of bax and decreased expression of bcl-2 in cancer cells, MUG-Mel2 and SCC-25, was observed ( Figure 4). In both cancer cell lines, pro-apoptotic bax protein showed strong expression after treatment of cells with liposomal curcumin and irradiation. The expression of bcl-2 was weak or moderate. Nonetheless, HaCat cells did not significantly change the expression of the previously described proteins after irradiation only, liposomal curcumin only, and PDT treatment. The Impact of Liposomal Curcumin on Cells Lines' Apoptosis Flow cytometry analysis was applied to evaluate cell death caused by liposomal curcumin in SCC-25, MUG-Mel2, and HaCaT cells ( Figure 5). As shown in Figure 4A,B after 24 h of treatment, early and late apoptosis and necrosis in SCC-25 and MUG-Mel2 cells were observed. The combination of liposomal curcumin and PDT increased apoptosis to 40% and 30% in SCC-25 and MUG-Mel2 cells, respectively. Interestingly, after 24 h from irradiation, in SCC-25, cell death is mainly caused by early and late apoptosis, whereas, in MUG-Mel2, cell death is caused by late apoptosis and necrosis. In control cells, HaCaT, a slight increase in the apoptosis ratio in cells after treatment (10%) was observed. activity was performed. An increase in the expression of bax and decreased expression of bcl-2 in cancer cells, MUG-Mel2 and SCC-25, was observed ( Figure 4). In both cancer cell lines, pro-apoptotic bax protein showed strong expression after treatment of cells with liposomal curcumin and irradiation. The expression of bcl-2 was weak or moderate. Nonetheless, HaCat cells did not significantly change the expression of the previously described proteins after irradiation only, liposomal curcumin only, and PDT treatment. Discussion In the past, various approaches were undertaken in order to increase the efficacy of photodynamic therapy [32,33]. The studies included the application of chemically functionalized PS [34][35][36] as well as liposomal derivatives of photosensitizers for both in vitro and in vivo studies. 
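Wound-healing results of this kind are usually summarised as the fraction of the initial scratch area that closes over the observation period. A minimal calculation is sketched below; the areas are placeholders (e.g., pixel counts from image analysis), not measured values from this study.

```python
def wound_closure_percent(area_t0, area_t24):
    """Percent of the initial scratch area closed after 24 h."""
    return 100.0 * (area_t0 - area_t24) / area_t0

# Placeholder areas in pixels, illustration only.
print(wound_closure_percent(52000, 7800))   # untreated keratinocyte-like: wound nearly filled
print(wound_closure_percent(51000, 50200))  # melanoma after liposomal curcumin PDT: little migration
```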
Several studies showed an advantage of the latter modality over the routine way of photosensitizer delivery to targeted cells [37,38]. Curcumin, which revealed promising effects in PDT, can act as a direct photosensitizer exhibiting cytotoxic properties in various types of tumours, including skin cancers [12,39,40]. Although curcumin can be applied in a pure form and then sensitized with light at the proper wavelength, its liposomal formulation was proposed as the more effective strategy in killing the malignant cells [41,42]. Free curcumin is characterized by low water solubility and poor bioavailability. It is rapidly metabolized or degraded in the cell culture media or after oral administration. In contrast, nano-capsules in which the compound is confined into phospholipid bilayers dismiss the significant drawbacks and promote increased absorption of curcumin into the cells [43]. In the present study, the effectiveness of curcumin loaded in PEGylated, cholesterolfree formulation based upon hydrogenated soya PC liposomes has been investigated on three skin cell lines: melanoma MUG-Mel2, squamous cell carcinoma SCC-25, and immortalized keratinocytes HaCaT cells. In previous studies, the previously described formulation of liposomes was evaluated on the pancreatic cell line and in human plasma [31]. The results indicated that this formulation presented the best parameters of the hydrophobic drug incorporation by improved bioavailability, increased stability, and cytotoxicity. In this article, the MTT results revealed statistically significant phototoxicity of this liposomal formulation of 10 µM curcumin compared to the free substance before and after photodynamic therapy. In the case of a free substance, it interacts with the cellular outer membrane, while liposomes are quickly internalized and enter the cell through the endosomal route, which increases its bioavailability and, thus, results in more potent cytotoxic effects [43,44]. Acquired results are in accordance with the observations and conclusions of Vetha et al. and Ambreen et al. on different cancer cell lines [41,45]. Although the effect was evident for both malignant cell lines, normal HaCaT keratinocytes were slightly resistant to the therapy. These spontaneously immortalized human keratinocytes from adult skin have been used as a model cell line to study normal keratinocyte functions in different studies [46]. Additionally, HaCaT cells maintained in a culture medium without the calcium display normal morphogenesis and expression of the cellular membrane markers as keratinocytes isolated from adult skin [47]. Based on gleaned, different experimental results, conclusions emerge that immortalized HaCaT keratinocytes are less susceptible to photosensitization with curcumin than MUG-Mel2 and SCC-25 malignant cells in terms of phototoxicity. These observations are following the results of Popovic et al. [48]. The authors found that 3 µM hypercin-mediated-PDT is completely refractory to keratinocytes. Moreover, they indicated a different response toward a natural plant derivative compound-PDT in each skin cell type. On the other hand, Szlasa et al. [12] presented the increased cytotoxic impact of the free curcumin-mediated photodynamic therapy on the keratinocytes. However, according to the cell line and light dose used in their studies described in the methods paragraph, the authors used normal human epidermal keratinocytes (HEK) and 6 J/cm 2 to irradiate cells in their experiments. 
These differences in the cells' response to curcumin irradiated with blue light may be considered due to the distinct vulnerability of cell lines to the cell-stress induction and different PDT protocols. The present study results showed that liposomal formulation of a compound considered a potent photosensitizer can also enhance the effectiveness of liposomal curcuminmediated-PDT by increasing the apoptosis ratio validated by flow cytometry and the production of pro-apoptotic factors, e.g., Bax protein. At the same time, the proposed therapy decreases the production of anti-apoptotic proteins, which is, in this case, Bcl-2. The significantly increased strong Bax expression was observed in both cancer cell lines, whereas, in HaCaT cells, Bax expression was lower in the sample treated with liposomal curcumin irradiated with the light. A flow cytometry assay confirmed this effect. Cells were stained with Annexin V-FITC and propidium iodide to detect early and late apoptosis and dead cells after treatment. It has been noticed that the late apoptosis in SCC-25 and MUG-Mel2 cells was increased after 24 h from the proposed therapy. Interestingly, SCC-25 cells revealed apoptosis as a leading cause of cell death, while MUG-Mel2 cells showed both types of cell death as a possible mechanism. The above finding remains in concordance with the results of other authors and shows that, in hydrophobic photosensitizers, an increase of photodynamic efficacy could be achieved by trapping them in liposomes [41,45]. The presented observations also point toward a possible mechanism of action of curcumin in PDT via an apoptotic pathway. Cells in all three examined groups showed necrosis, which is routinely observed after the PDT [7,49]. As a result of the different proliferative and migration capabilities of examined skin cell lines, a designed treatment on migration potency by a wound healing assay has been evaluated. A further examination confirms a decreased motility of melanoma and squamous cell carcinoma cell lines compared to normal keratinocytes after liposomal curcumin only and liposomal curcumin following irradiation, which is consistent with Szlasa et al. examination of the wound [12]. Normal cells nearly filled the wound (15% remaining) by 24 h, whereas the wound in malignant cells remained unfilled after 24 h. According to Ambreen et al., it is evident that liposomal curcumin-PDT reduces cancer cell migration and contributes to malignant cell metastasis inhibition. Conducted investigations indicate the promising role of curcumin encapsulated in hydrogenated soy phosphatidylcholine liposomes in enhancing the photokilling effect on melanoma and squamous skin cancer cells following blue light PDT. Additionally, a minimal phototoxic reaction was observed in normal, human, immortalized keratinocytes with the same curcumin dose after irradiation. In conclusion, further experiments on the specific, cellular functional differences between the skin cells and in vivo testing will help confirm the effectiveness of nanocurcumin as a photosensitizer in PDT. Cell Culture Melanoma MUG-Mel2 (DSMZ, Germany) cells were cultured in RPMI 1640 cell culture medium, SCC-25-tongue squamous carcinoma (DSMZ, Braunschweig, Germany) cells in DMEM-F12, and HaCaT human epidermal keratinocytes (CLS, Eppelheim, Germany) were cultured in DMEM (Dulbecco's Modified Eagle Medium) without calcium to maintain normal morphogenesis and expression of the cellular membrane markers. 
To prepare a full cell culture media, 10% FBS, 1% glutamine, and 1% antibiotics were added to the bottle. Culture reagents were bought from Gibco (Thermo Fisher Scientific Inc., Waltham, MA, USA). Cells were maintained at 37 • C and 5% CO 2 in a humidified atmosphere. For experiments, cells from the 3rd to the 10th passages were used. Preparation of Curcumin-Loaded Liposomes and Curcumin in DMSO Curcumin-loaded liposomes of the composition HSPC/DSPE-PEG2000 9.5:0.5 mol/mol were formulated using the extrusion technique. Hydrogenated soy phosphatidylcholine (Phospholipon 90H, HSPC), 1,2-distearol-sn-glycero-phosphoethanolamin-N-(poly[ethylene glycol]2000) (DSPE-PEG2000) were purchased from Lipoid GmbH (Ludwigshafen, Germany). In brief, lipids and curcumin were dissolved in chloroform or methanol to obtain stock solutions at 10 and 5 mg/mL, respectively. Curcumin (2 mg) was mixed together with 40 mg of lipid in a borosilicate glass tube. Solvents were removed from the sample via evaporation under a stream of nitrogen gas and the resultant lipid film was dissolved in a mixture of cyclohexane and methanol (99:1, v/v). The sample was frozen in liquid nitrogen and freeze-dried for 8 h at a low pressure using a Savant Modulyo apparatus (Thermo Fisher Scientific, Waltham, CA, USA). The lipid film was hydrated by the addition of 1.5 mL of 150 mM NaCl at 64 • C, in a water bath, with gentle mixing. The liposomal suspension was finally sonicated in a water bath sonicator for 8 min at 64 • C. The newly-formed multilamellar vesicles (MLVs) were extruded 10 times through Nucleopore polycarbonate filters (Whatman, Maidstone, UK) with pore sizes of 400 and 100 nm, respectively, using a Thermobarrel Extruder (10 mL Lipex extruder, Northern Lipids, Canada) to obtain large uni-lamellar vesicles (LUVs). The extruder was maintained at 64 • C throughout the liposome extrusion procedure. The curcumin: (1E, 6E)-1,7-bis-(4-hydroxy-3-methoxyphenyl)-1,6-heptadiene-3,5dione (LKT Laboratories, Inc., St. Paul, MN, USA) was diffused in dimethyl sulfoxide (DMSO, suitable for hybridoma, Sigma Aldrich, Germany) to make 25 mM stock of the drug. Afterward, a decent amount of stock was compounded with a cell culture medium to achieve the composite's appropriate concentration. The DMSO amount in the final solute used to perform incubation did not surpass 0.01% and it was affirmed that the peak amount did not statistically influence the cells. Determination of Incorporation Efficiency and Characterization of Curcumin-Loaded Liposomes Non-incorporated drug-crystals were separated from the curcumin-loaded liposomes during the liposome extrusion procedure (only curcumin-loaded liposomes can pass through Nucleopore polycarbonate filters). Additionally, the samples were centrifuged and then collected to ensure the absence of any free curcumin liposome samples. In total, 50 µL were taken before extrusion (initial) and after centrifugation. The lipid concentration was determined by the ammonium ferrothicyanate assay on a Varian Cary1 50 UV-Vis Spectrophotometer (Varian, Ltd., Victoria, Australia). The concentration of curcumin in the liposomes was determined photometrically at λ = 425 nm on the same spectrophotometer after the curcumin-loaded liposomes were dissolved in methanol. Curcumin encapsulation efficiency was 95 ± 1.6%. The size of the liposomes was 102 nm ± 2.3 and the polydispersity index was very low (0.051). 
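Encapsulation efficiency of this kind is commonly calculated from the photometric curcumin determinations taken before extrusion and after centrifugation. The sketch below applies the Beer-Lambert law and the efficiency ratio; the molar absorptivity and the absorbance readings are assumed, illustrative numbers and should be replaced by a calibrated standard curve and real measurements.

```python
def curcumin_conc_mM(absorbance_425nm, dilution_factor, epsilon_M_cm=55_000.0, path_cm=1.0):
    """Curcumin concentration (mM) from absorbance at 425 nm via Beer-Lambert.
    epsilon_M_cm is an assumed, illustrative molar absorptivity in methanol."""
    molar = absorbance_425nm / (epsilon_M_cm * path_cm)
    return molar * dilution_factor * 1000.0

def encapsulation_efficiency(conc_liposomal, conc_initial):
    """Percent of the initially added curcumin recovered in the liposome fraction."""
    return 100.0 * conc_liposomal / conc_initial

# Hypothetical readings of methanol-dissolved aliquots, both diluted 100-fold.
c_initial = curcumin_conc_mM(0.82, dilution_factor=100)
c_liposomal = curcumin_conc_mM(0.78, dilution_factor=100)
print(f"Encapsulation efficiency = {encapsulation_efficiency(c_liposomal, c_initial):.1f} %")
```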
Curcumin-Mediated PDT Experimental Protocol Cells were incubated with free or encapsulated curcumin (5, 10 µM) for 4 h according to Szlasa et al. [12] and Ambreen et al. [45] observations in FBS-free culture medium. Then the wells were washed twice with DPBS, fresh medium was added, and irradiation was performed using a halogen lamp (Penta Lamps, Teclas, Lugano, Switzerland) with the radiation power consistency set to 20 mW/cm 2 . The cells were irradiated for 2 min (2.5 J/cm 2 ). The blue light (380−500 nm) was chosen to achieve the photodynamic effect (the light absorption peak of curcumin of 410 nm). Cells involved in curcumin and PDT treatment were protected from light at all times. After 24 h from irradiation, experiments were conducted according to the protocols. Cell Viability Assay The MTT assay is a colorimetric assay used to measure cellular metabolic activity to indicate cell viability, proliferation, and cytotoxicity. In the MTT assay, living cells transform yellow tetrazolium salt MTT into purple formazan crystals. This process is possible because living cells have an enzyme-mitochondrial dehydrogenase, which causes this change. Cells were seeded at 8 × 10 4 in 96-well culture plates and cultured as mentioned in the experiment description with curcumin and liposomal curcumin for 4 h in the dark. Different doses of curcumin and liposomal curcumin were experimentally established for the next experiments on MUG-Mel2, SCC-25, and HaCaT to obtain IC50. The MTT assay was performed after 24 h from irradiation. The MTT solution was added to the wells in a final concentration of 1 mg/mL for 3 h. Next, formazan dye was solubilized with 50 µL DMSO for 15 min. Absorbance was measured at 490 nm in BioTek Well-plate Reader (Winooski, VT, USA). The control group absorbance was 100%, whereas treated samples' cell viability was counted using the formula: % = (A of experimental wells/A of the control wells) × 100%. After preliminary studies with different curcumin and liposomal-derivative doses (1, 2, 5, 10 µM) for the MTT assay, curcumin and liposomal curcumin was chosen in doses of 5 and 10 µM. Wound-Healing Assay A wound-healing assay was used to inquire cells' interactions and cell migration. According to the manufacturer's instructions, a wound-healing assay was made with the Culture-Insert 2 Well in µ-Dish 35 mm (Ibidi, Germany). The cells were seeded to achieve the monolayer in both parts of the insert. Following liposomal curcumin mediated PDT, the inserts were removed, the culture medium was exchanged, and the cells were cultured until about 100% confluency was reached in control cells. Control samples were without treatment at all. The photographs were taken after removal of the inserts at a time point 0 h and after 24 h of incubation by using a light microscope with a 10× magnifying objective (Olympus IX73 with a camera and CellSens Programme, Hamburg, Germany). Flow Cytometry-Apoptosis Assay Cells were drawn from each of the wells and transferred to Eppendorf tubes. Afterward, cells were centrifuged with PBS washing (7 min, 20 • C, 1000× g). The supernatant was gently removed and 1 mL of the Binding Buffer per 1 × 10 6 cells was added. For the next step, 4 µL AAD-7 and 8 µL FITC was added to each sample, according to the manufacturer's instruction. Eppendorf tubes were vortexed and incubated without the light for 15 min at room temperature. 
After incubation time, samples were analyzed with a flow cytometer using the FICT channel for Annexin 5 and PC5.5 channel for AAD-7 (Cytoflex, Beckman Coulter Life Sciences, Indianapolis, IN, USA). Negative samples were prepared without the staining and samples stained with one fluorochrome were used for compensation. Immunocytochemistry (ICC) Staining for Apoptosis Detection Cells were fixed with 4% paraformaldehyde for 10 min at room temperature, and then rinsed 2 × 5 min with PBS. Next, cells were blocked with endogenous peroxidase for 10 min using Peroxidase Blocking Reagent and rinsed with PBS 2 × 4 min. Non-specific proteins were blocked by Protein Block Serum-Free Ready to Use for 1 h. Following serum excess removal, anti-Bax and anti-Bcl-2 primary antibodies (Sigma-Aldrich) in dilution 1:200 were added on the slides for overnight incubation. Afterward, primary antibodies were rinsed with PBS for 2 × 4 min. A secondary rabbit antibody (Abcam, UK) in dilution 1:500 was added for 1 h at room temperature. After incubation time, cells were rinsed with PBS for 2 × 4 min and DAB Substrate in Chromogen Solution was added for 2-5 min until the light brown color was achieved. Cells were rinsed with distilled water for 2 × 4 min, and then hematoxylin was used for 1-2 min to stain cell nuclei. Next, cells were rinsed with tap water 2 × 5 min. The Fluoromount™ Aqueous Mounting Medium (Sigma Aldrich) was added onto the slides, and, the following day, the photographs were taken under the microscope (Olympus BX34 with camera DP74 and CellSens Programme, Hamburg, Germany). All ICC reagents were purchased from DAKO, Agilent (Glostrup, Denmark). Statistical Analysis All experiments were performed in triplicates and the values are presented as a mean ± standard deviation. Analysis between the groups was conducted using the nonparametric test Kruskal-Wallis for abnormal distributed data. a p-value below 0.05 was considered significant. PQStat Programme, version 1.8.2 (PQStat Software, Poland) was used for the calculations. Conclusions In conclusion, natural plant derivative-curcumin encapsulated in liposomes has been confirmed as a viable photosensitizer in PDT of skin cancer cell lines. Improved bioavailability and increased stability revealed potent anti-cancer activity in squamous cell carcinoma and melanoma cell lines. The encapsulated compound preferentially accumulated in malignant skin cells. Contrarily, it showed decreased phototoxicity in normal skin keratinocytes HaCaT cells after PDT treatment. These results collectively support liposomal curcumin as a potential photosensitizer in developing natural-based photosensitizers that improve photodynamic therapy safety and efficacy. Thus, additional in vitro and in vivo studies on different normal and cancer cells are essential to confirm this less toxic natural plant derivative PS in the PDT approach.
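The quantitative steps in the protocols above reduce to a few short calculations: the delivered light dose (irradiance times exposure time), the MTT viability formula quoted in the text, and the Kruskal-Wallis comparison of treatment groups. The sketch below strings them together; all absorbances and triplicate values are placeholders, not study data.

```python
from scipy.stats import kruskal

def light_dose_J_per_cm2(irradiance_mW_per_cm2, exposure_s):
    """Delivered light dose in J/cm^2 = irradiance (mW/cm^2) x time (s) / 1000."""
    return irradiance_mW_per_cm2 * exposure_s / 1000.0

def viability_percent(a_treated, a_control):
    """MTT viability as defined in the text: (A_treated / A_control) x 100."""
    return 100.0 * a_treated / a_control

# 20 mW/cm^2 for 2 min gives about 2.4 J/cm^2, in line with the ~2.5 J/cm^2 quoted.
print(light_dose_J_per_cm2(20, 120))

# Hypothetical 490 nm absorbances of a treated and a control well.
print(viability_percent(0.42, 0.90))

# Placeholder viability triplicates (%) for three groups, compared with Kruskal-Wallis.
control = [100.0, 98.5, 101.2]
curcumin_pdt = [34.1, 36.0, 31.8]
liposomal_curcumin_pdt = [52.7, 54.9, 51.0]
stat, p = kruskal(control, curcumin_pdt, liposomal_curcumin_pdt)
print(f"H = {stat:.2f}, p = {p:.4f} (significant if p < 0.05)")
```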
2021-04-29T05:22:42.900Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "93a380d575f7be59c2c7e51aff08e229ce472c6c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8247/14/4/374/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "93a380d575f7be59c2c7e51aff08e229ce472c6c", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
268716668
pes2o/s2orc
v3-fos-license
Lipid Profile and Cardiovascular Risk Modification after Hepatitis C Virus Eradication The eradication of the hepatitis C virus (HCV) has revolutionized the hepatology paradigm, halting the progression of advanced liver disease in patients with chronic infection and reducing the risk of hepatocarcinoma. In addition, treatment with direct-acting antivirals can reverse the lipid and carbohydrate abnormalities described in HCV patients. Although HCV eradication may reduce the overall risk of vascular events, it is uncertain whether altered lipid profiles increase the risk of cerebrovascular disease in certain patients. We have conducted a review on HCV and lipid and carbohydrate metabolism, as well as new scientific advances, following the advent of direct-acting antivirals. Introduction The hepatitis C virus (HCV) is recognized as a significant human pathogen that initially causes acute hepatitis.However, it has the potential to evolve into chronic hepatitis, leading to severe liver complications such as cirrhosis and hepatocellular carcinoma, posing a substantial global public health challenge.As a single-stranded RNA virus belonging to the Flaviviridae family, HCV's mechanism of infection and replication is complex.It involves evading the host's immune response, contributing to its chronicity in infected individuals.The virus's genetic diversity, marked by multiple genotypes and subtypes, complicates vaccine development and treatment strategies.The progression from acute to chronic HCV infection stresses the importance of early detection and effective antiviral therapies to prevent long-term liver damage and reduce the risk of liver cancer.Despite advances in treatment, HCV remains a leading cause of liver transplantation worldwide, highlighting the need for continued research and public health efforts to combat this virus [1]. HCV transmission occurs primarily through blood-to-blood contact.In healthcare environments, reusing or inadequately sterilizing medical equipment, notably syringes and needles, presents a significant risk.Additionally, the transfusion of blood and blood products that have not undergone thorough screening processes can serve as a conduit for HCV transmission.Another prevalent route is through the sharing of injection equipment among individuals using injectable drugs [2]. HCV is classified into seven genotypes, with multiple subtypes, which are unevenly distributed geographically and differ in response to treatment [3]. Epidemiology of HCV Infection and Clinical Course The prevalence of HCV infection has been declining since the second half of the 20th century [4].This is due to improved hygienic and dietary conditions in developing countries and active surveillance in high-incidence countries.Together, these strategies have played a pivotal role in reducing the global burden of HCV, showcasing the importance of comprehensive public health initiatives in combating infectious diseases. However, accurate estimates of global HCV prevalence are difficult to establish due to underdiagnosis, underreporting, and a lack of routine surveillance in most countries [5].The estimated global prevalence of HCV viremia in early 2020 was 0.7 percent, reflecting 56.8 million people with chronic HCV infection.These data reflect a decrease in prevalence compared to 2015 when there were 63.6 million chronic HCV infections, representing 0.9 percent of the global population. 
In Europe, the main incidence areas are the Eastern Mediterranean countries (62.5 per 100,000), where it is associated with healthcare, and the Eastern European region (61.8 per 100,000), where it is associated with injectable drug use [6]. Hepatitis C often progresses stealthily, mirroring other liver diseases with an initial asymptomatic phase in most cases.Over time, it may lead to cirrhosis, presenting complications such as ascites, variceal bleeding, and hepatic encephalopathy.A notable distinction of hepatitis C from other liver diseases is its propensity to cause extrahepatic manifestations, including joint pain (arthralgias), cryoglobulinemia, and various metabolic changes.This broad spectrum of potential effects highlights the complexity of hepatitis C, affecting not just liver function but also other bodily systems and requiring comprehensive management strategies. Characteristics of the Hepatitis C Virus HCV is a particle between 50 and 80 nm in diameter containing a single-stranded RNA genome, nucleus, E1 and E2 glycoproteins, and type I transmembrane proteins, which form covalent bonds with infected hepatocytes [7].They are closely associated with lipoproteins, which gives them a very low density [8].The interactions governing the relationship between HCV virions and the different lipoproteins involved remain to be fully characterized [9]. It has been suggested that HCV virion is a hybrid, consisting of a viral part merged with a lipoprotein capsule (Figure 1) [10].Another hypothesis is that the relationship occurs through the interaction of apolipoproteins and lipid molecules that are part of the HCV envelope [11].In both cases, the interaction with host lipoproteins could contribute to protecting and concealing the virion particles, covering their surface.This glycoprotein coat is essential in the process of inclusion of the viral particle into the target cells.It plays a crucial role in the binding and fusion process between the viral envelope and the endosomal membrane of the host cells [12]. In viral replication, HCV relies on the host cellular mechanism, which is associated with endoplasmic reticulum-derived membranes and various proteins [13].HCV induces a massive reorganization of intracellular membranes, creating a membranous network [14]. Several electron microscopic studies have shown that the predominant structure is a double membrane vesicle consisting of proteins and cholesterol, as well as deposits of triglycerides (TGs) and cholesterol esters [15][16][17].HCV alters the expression of genes involved in lipid metabolism, resulting in the accumulation of intracellular lipids [18]. Lipoproteins Lipoproteins serve as vehicles for lipid transport, consisting of a nonpolar core filled with triglycerides (TGs) and esterified cholesterol, encased in a polar outer layer composed of apoproteins, phospholipids, and free cholesterol.This diverse group includes chylomicrons, very low-density lipoproteins (VLDLs), intermediate-density lipoproteins (IDLs), low-density lipoproteins (LDLs), and high-density lipoproteins (HDLs).The diversity among these lipoproteins lies in their free cholesterol and TG content and their unique compositions of apolipoproteins, reflecting their varied roles in lipid transport and metabolism within the body [19].This variation emphasizes the complexity of lipid dynamics and their critical functions in maintaining cellular and systemic health. 
Lipoprotein metabolism encompasses both exogenous and endogenous pathways. The exogenous pathway involves the absorption of dietary lipids through intestinal enterocytes, which are packaged into chylomicrons and enter the lymphatic system before reaching the bloodstream. On the other hand, the endogenous pathway occurs primarily in the liver (hepatocytes), where lipoproteins such as VLDL are synthesized and released into the circulation. These pathways are crucial for distributing lipids across different tissues for energy use, storage, or membrane synthesis.

Exogenous Pathway of Lipoprotein Metabolism

The lipids we obtain from the diet are mainly TGs. Once in the intestine, they bind to apoprotein B-48 in enterocytes, forming chylomicrons. These are secreted into the lymphatic vessels, reaching the general circulation via the thoracic duct. Chylomicrons become mature once they receive APOCII and APOE from HDL particles. There is also an exchange of TG with LDL particles located in the vascular endothelium, becoming remnant chylomicrons, taken up by hepatocytes through an interaction with APOE [20].
Endogenous Pathway of Lipoprotein Metabolism The liver is the main organ involved in the endogenous lipoprotein metabolism pathway.Hepatocytes secrete VLDL, the formation of which is initiated in the sarcoplasmic reticulum by the incorporation of TG into APOB100 particles through the action of microsomal TG transfer protein.Cholesterol esters and APOE are incorporated into this particle.This is followed by the exocytosis of VLDL lipoproteins into the bloodstream, acquiring more APOE and APOC from the HDL particles [21].Mature VLDLs are catabolized by the APOCII-activated enzyme lipoprotein lipase and renamed remnant VLDL or IDL.They are incorporated back into the liver through the interaction of APOE [22].Alternatively, they are again hydrolyzed by hepatic lipase, whereby IDLs are transformed into LDLs, depleted of TGs and high in cholesterol.These particles transport cholesterol to peripheral tissues or the liver via APOB100 interactions with LDL receptors [23]. On the other hand, APOAI is the primary apolipoprotein of HDL particles [24].It is again synthesized in the liver and intestine and is involved in forming these molecules through the esterification of cholesterol and phospholipids.During this process, the HDL molecules progressively lose part of their cholesterol load until they return to the hepatocyte or enterocyte, where they replenish their cholesterol stores [25]. Lipoprotein Profile Assessment Dyslipidemia is a quantitative or qualitative alteration in circulating lipoproteins in plasma, notably an increase in the concentration of low-density lipoprotein cholesterol (LDL-cholesterol) [26].However, episodes of atherothrombotic pathology are still observed in patients with normal or low cholesterol levels and without other known cardiovascular risk factors [27].This suggests that there are other lipid alterations, beyond LDL cholesterol levels, that also increase cardiovascular risk [28].The atherogenic potential of lipoproteins should be hence defined not just by their quantity but by their characteristics, including their number, size, and composition.Therefore, analyzing these aspects of lipoproteins provides a more comprehensive assessment of a patient's lipid profile, offering insights beyond traditional cholesterol measurements [29,30]. As previously explained, lipoprotein particles differ from each other in terms of their free cholesterol and TG content.The relationship between density and size is inverse, with the smallest particles having the highest density [31]. These differences in the composition of the same class of particles influence the atherosclerotic process.Healthy vascular endothelium can be freely traversed by particles with diameters of less than 70 nm.These particles, especially smaller LDL particles, can be retained and are the origin of the atherogenic process [32]. It has also been observed that HDL particles can undergo modifications that change their structure and composition, thereby altering their function.For example, in diseases such as type II diabetes (T2DM), chronic kidney disease, sarcoidosis, and inflammatory processes, HDL particles lose their protective function and acquire an atherogenic effect [33]. 
The size of LDL lipoproteins is variable and depends on their core's lipid content, which determines the particles' density.This variability, which can be influenced by various alterations in lipoprotein metabolism, can lead to a discrepancy between the serum LDL cholesterol concentration and the number of circulating LDL particles [34].Thus, many LDL particles may be associated with a normal LDL cholesterol concentration.This situation is known as c-LDL/p-LDL mismatch. In these cases, different studies have found that particle number measurement is a better indicator than LDL concentration for assessing cardiovascular risk [35].The prognostic ability of LDL particle number has been evaluated in different studies.For example, in the Framingham cohort, it was shown that an LDL particle concentration below the 25th percentile was a more reliable predictor of cardiovascular risk than an equivalent serum LDL cholesterol concentration [36].Studies have even shown that treatment based on LDL particle number targets improves clinical outcomes over that based on LDL cholesterol concentration [37,38]. Alterations in Lipid Metabolism Associated with HCV Infection The most important complications associated with chronic HCV infection are liver cirrhosis and liver cancer.However, there are many extrahepatic manifestations that cause high morbidity and mortality [39].Most are immunological or lymphoproliferative in origin, but alterations in the lipid profile have also been identified, leading to metabolic and cardiovascular complications [40].The lipid profile's modifications related to HCV infection and treatment are shown in Figure 2. When a patient has HCV infection, the main organ affected is the liver (1).Infection causes some changes in lipid metabolism, especially a decrease in the number of VLDL and LDL particles (2).After treatment with DAA (3), the infection is cured.Due to the combination of liver healing and the direct effect of antivirals, there are changes in lipoparticles.There is an increase in serum LDL, HDL, and triglyceride particles (4).In addition, improved liver function reduces the triglyceride content of HDL particles (5).HDL particles can now better mobilize lipids from tissues, which can reduce pancreatic steatosis and thus improve insulin resistance (6). Diabetes Mellitus and Insulin Resistance The development of T2DM is one of the most common HCV-related complications [48].This relationship stems from a complex interplay between insulin resistance, hepatic steatosis, and inflammatory processes [49].HCV-core transcription leads to an increased expression of TNF-alpha and thus to the induction of insulin resistance.This explains why the prevalence of T2DM is higher in patients with HCV liver disease compared to other etiologies of liver disease [50]. The development of T2DM can occur at any stage of liver disease, even with low degrees of fibrosis [51].However, it is more prevalent in patients with advanced fibrosis or even liver cirrhosis [52].As previously described, a genotype-dependent factor must be considered.Patients with genotype 3 have a higher risk of developing insulin resistance and diabetes.On the other hand, patients with genotype 1 would be more likely to improve their carbohydrate metabolism after a viral cure compared to genotypes 2 and 3 [53]. 
The development of T2DM correlates directly with the severity of liver fibrosis.Although it can occur in patients with mild fibrosis, the highest incidence is observed in those with liver cirrhosis.In addition, patients with HCV-associated T2DM have an increased risk of developing HCC.Regarding the relationship between T2DM and HCV treatment, early interferon treatment showed a worse response in patients with T2DM and HCV [54].A decrease in the risk of de novo T2DM has been observed in several studies with the newer treatments, direct-acting analogues (DAAs) [55,56]. DAAs prevent the future onset of T2DM and improve glucose metabolism in patients who achieve sustained viral response (SVR).During follow-up, a decrease in glycated hemoglobin and an improvement in insulin resistance-related parameters have been When a patient has HCV infection, the main organ affected is the liver (1).Infection causes some changes in lipid metabolism, especially a decrease in the number of VLDL and LDL particles (2).After treatment with DAA (3), the infection is cured.Due to the combination of liver healing and the direct effect of antivirals, there are changes in lipoparticles.There is an increase in serum LDL, HDL, and triglyceride particles (4).In addition, improved liver function reduces the triglyceride content of HDL particles (5).HDL particles can now better mobilize lipids from tissues, which can reduce pancreatic steatosis and thus improve insulin resistance (6). It is very striking that some studies have even been able to link the development of hepatocarcinoma with an alteration in oncogenesis.Moreover, this phenomenon is much more marked in HCV-infected patients than in HBV-infected patients.Some of the mediators involved could be AKT2, SREBP1c, and PPARγ.Also, some regulatory enzymes such as ACC and FAS may be involved [41]. Chronic HCV infection results in low levels of VLDL and LDL.Despite this apparently beneficial change, these patients have an increased development of atherosclerosis, leading to an increased cardiovascular risk [42].This occurs independently of other risk factors, such as the development of T2DM or the presence of hepatic steatosis.Interestingly, HCV eradication leads to an increase in serum cholesterol and LDL levels, creating a combination of circumstances that may exacerbate the risk of atherosclerosis injury [43]. Another finding observed in patients with chronic HCV infection is the existence of abnormal lipoproteins, including VLDL particles enriched with TG, which increase atherogenic risk.These particles disappear after successful HCV treatment and cure. However, it appears that the extent of this interaction is related to certain host polymorphisms and hepatitis C virus genotypes.Both factors are highly variable [44,45].Evidence highlights that genotype 3 of the hepatitis C virus, accounting for 20-30% of infections, is particularly associated with the development of hepatic steatosis, exhibiting a more pronounced degree of steatosis in patients, even those without obesity, compared to other genotypes.This association extends to a direct correlation between viral load and steatosis severity, exclusively in genotype 3, a phenomenon not observed in other genotypes.Moreover, genotype 3 is linked to several adverse disease progression outcomes, such as increased treatment resistance and a higher risk of developing HCC [46]. 
The underlying mechanisms, though not fully understood, suggest that genotype 3 impacts key metabolic pathways involving microsomal triglyceride transfer protein (MTTP), sterol regulatory element-binding protein 1c (SREBP-1c), and peroxisome proliferatoractivated receptor alpha (PPAR-α) [47].This insight emphasizes the need for a genotype-specific approach in managing HCV infections, considering the unique challenges posed by genotype 3. Diabetes Mellitus and Insulin Resistance The development of T2DM is one of the most common HCV-related complications [48].This relationship stems from a complex interplay between insulin resistance, hepatic steatosis, and inflammatory processes [49].HCV-core transcription leads to an increased expression of TNF-alpha and thus to the induction of insulin resistance.This explains why the prevalence of T2DM is higher in patients with HCV liver disease compared to other etiologies of liver disease [50]. The development of T2DM can occur at any stage of liver disease, even with low degrees of fibrosis [51].However, it is more prevalent in patients with advanced fibrosis or even liver cirrhosis [52].As previously described, a genotype-dependent factor must be considered.Patients with genotype 3 have a higher risk of developing insulin resistance and diabetes.On the other hand, patients with genotype 1 would be more likely to improve their carbohydrate metabolism after a viral cure compared to genotypes 2 and 3 [53]. The development of T2DM correlates directly with the severity of liver fibrosis.Although it can occur in patients with mild fibrosis, the highest incidence is observed in those with liver cirrhosis.In addition, patients with HCV-associated T2DM have an increased risk of developing HCC.Regarding the relationship between T2DM and HCV treatment, early interferon treatment showed a worse response in patients with T2DM and HCV [54].A decrease in the risk of de novo T2DM has been observed in several studies with the newer treatments, direct-acting analogues (DAAs) [55,56]. DAAs prevent the future onset of T2DM and improve glucose metabolism in patients who achieve sustained viral response (SVR).During follow-up, a decrease in glycated hemoglobin and an improvement in insulin resistance-related parameters have been observed.However, their long-term duration after achieving SVR needs to be better established [57]. Cardiovascular Diseases HCV infection confers increased cardiovascular morbidity and mortality [58].Early studies showed a relationship between HCV seropositivity and reduced carotid artery intima/media ratio.Subsequently, HCV was also found to cause an increased expression of pro-atherogenic cytokines [59,60]. Cardiovascular involvement appears to predominate in HCV patients compared to patients with other similar conditions, such as hepatitis B virus (HBV) [61].This indicates that the cardiovascular risk is not solely due to liver damage but is an inherent effect of HCV itself. In studies with large cohorts of patients with very long follow-up periods, it became evident that those patients who received antiviral treatment and achieved HCV eradication had lower mortality rates than those patients who did not receive treatment, not only due to hepatic but also extrahepatic causes, especially cardiovascular.Other studies showed improved myocardial perfusion in those patients who had SVR [62]. 
Another aspect to consider in the relationship between cardiovascular disease and hepatitis C is the interaction between their respective treatments. Antihypertensive drugs and statins are among the most frequently used simultaneously in patients receiving direct-acting antivirals. About 10% of patients took a statin before starting antiviral treatment [63]. Therefore, it is particularly important to consider interactions between these drugs [64]. Not all statins interact in the same way with all antivirals, although the most common complication is the development of myopathies and the need to lower the dose. Specific combinations, such as glecaprevir/pibrentasvir with atorvastatin, lovastatin, or simvastatin, as well as ledipasvir/sofosbuvir with rosuvastatin, are formally contraindicated [65].

Metabolic Changes Related to Treatment with Direct-Acting Antivirals

Treatment with DAAs leads to changes in lipid metabolism. Serum total cholesterol and LDL-cholesterol levels rise, possibly increasing the risk of atherosclerotic lesions [66,67].

However, the results of studies assessing this point are sometimes contradictory. Some show increased HDL-cholesterol levels that are not seen in other studies. One study even described a decrease in HDL levels [68]. Regarding changes in TG, inconsistent results have also been reported: decreases, minimal or absent changes, or even increases in serum levels. The main clinical studies are summarized in Table 1.

These contradictory results may be explained by the heterogeneity of the studies, which were conducted in disparate genotypic populations, using different treatment regimens and different proportions of patients with liver cirrhosis [69,70]. Some studies included HIV-co-infected patients, with an HIV-positive population of up to 60%. Follow-up times also varied widely between studies, ranging from 4 to 48 weeks. In addition, only two studies presented long-term prospective follow-up, Shimizu et al. and Gonzalez-Colominas et al. However, these two studies presented populations with liver involvement that goes beyond simple HCV infection: 50% of patients in the Gonzalez-Colominas study had liver cirrhosis and all patients in the Shimizu study had hepatic steatosis [71,72].

Critical questions about lipid changes post-DAA treatment include whether the initial changes persist over time and the effects on patients with early-stage liver disease, a group significantly understudied.

More recently, and especially after the generalization of DAA therapy, lipid profile alterations have been described after HCV eradication [70]. In a study tracking HCV patients treated with DAAs for two years, total cholesterol and LDL cholesterol levels rose progressively, by an average of 15% and 22%, respectively. This led to a higher risk of cardiovascular events. An increase in LDL-C of more than 40% emerged as the sole predictive factor, suggesting it could be a warning sign for potential cardiovascular events in the HCV-eradicated population [73-75].
Total cholesterol and LDL-C increased earlier after DAA initiation, while TG and HDL-C increased slowly after the end of therapy [76]. This is consistent with the finding that rapidly elevated total cholesterol and LDL-C levels may correlate with rapid viral clearance due to potent DAA therapy. In addition, elevated lipid levels were not transient, but persisted years after the end of treatment. Age and smoking were factors associated with pronounced lipid changes after viral eradication. Patients with a history of untreated dyslipidemia had elevated lipid levels in the post-SVR state. All the above factors were also risk factors for cardiovascular/cerebrovascular events [77].

However, increases in HDL and TG levels remain controversial, with no apparent relationship established. Studies comparing the change in cholesterol levels before and shortly after DAA treatment, and studies with a long follow-up period, are scarce.

In recent studies, it has been discovered that the analysis of lipoparticle metabolism is more complex than initially thought and that not only the quantity but also the quality of lipoparticles determines the cardiovascular risk of patients [77]. Our results describe how HCV-dependent lipid abnormalities are associated with insulin resistance and how DAA therapy can reverse this association. These findings suggest that monitoring the HDL-TG profile could predict changes in glucose tolerance and insulin resistance post-HCV clearance [78]. Further, it has been found that, post-treatment, the TG content in HDL particles decreases, signifying improved lipoparticle quality and enhanced cholesterol clearance from tissues. This reduction in hepatic and pancreatic fat could partly account for the observed improvement in insulin resistance [93].

Various LDL-C reduction thresholds (70, 100, and 155 to 190 mg/dL) are recommended to lower the risk of atherosclerotic cardiovascular disease (ACVD) [94]. These recommendations consider factors like initial LDL-C levels, age, ethnicity, and the estimated future risk of cardiovascular disease. Current evidence regarding the management of patients with dyslipidemia seems to favor the "lower is better" concept [75]. Because lipid profiles are lower prior to anti-HCV therapy, deteriorating lipid profiles are often overlooked in the post-HCV-eradication era; 7.3% of patients without concurrent lipid-lowering therapy prior to antivirals were started on lipid-lowering drugs during the follow-up period. In this study, after excluding patients who took lipid-lowering drugs both before and after anti-HCV therapy, the proportion of patients with LDL > 100 mg/dL increased from 37.5% before treatment to 56.9% after anti-HCV therapy, while the proportion of patients with LDL levels > 155 mg/dL increased from 2% before treatment to 7.2% after antiviral therapy [95].

A significantly higher proportion of patients justified the use of lipid-lowering drugs to reduce the risk of vascular events in the post-antiviral treatment era. However, it is believed that lipid-lowering therapy may be significantly underutilized in this population [95].
It has been suggested that HCV eradication reduces the risk of cerebrovascular events. In a recent study, 731 of 17,103 treated patients who achieved SVR experienced cardiovascular events during the follow-up period (19.1 per 1000 person-years) [76,96]. A 13% risk reduction was observed in patients with coronary heart disease receiving interferon- or DAA-based regimens compared to the untreated cohort. However, another large cohort study of 160,875 subjects revealed that the benefit of HCV eradication was only found to reduce the risk of stroke, but not coronary heart disease, compared to the untreated cohort [78,97].

Notably, most of the patients who developed cardiovascular disease after HCV treatment had no obvious risk factors prior to antiviral therapy [23,98].

Other studies have shown that DAAs improve carotid thickening, but carotid plaques did not change in the same cohort [94,99]. Meanwhile, a recent study has shown dyslipidemia and a short-term increase in aortic stiffness in patients with advanced fibrosis after DAA treatment. Overall, the improvement in vascular events in the post-SVR state must be judged on an individual basis, considering lipid dynamics.

However, most existing studies after the advent of DAAs have a relatively short follow-up period, and there are no HCV-uninfected controls. In addition, the number of patients with vascular events is also limited, making it difficult to draw conclusions. This situation underscores the necessity for future studies to investigate whether the potential increase in LDL cholesterol levels following DAA treatment could be counterbalanced by a decrease in systemic inflammation among patients who achieve an SVR. Such research is paramount for clarifying the risk of developing cardio-cerebrovascular diseases in this patient population.

In conclusion, after achieving SVR through DAA treatment, monitoring for both hepatic and extrahepatic outcomes is crucial, given the known lipid changes and their potential impact on cardiovascular health. While current guidelines suggest discharging patients without advanced liver disease post-SVR [95,100], the observed increases in total and LDL cholesterol post-treatment highlight the need for ongoing vigilance against vascular events and cardio-cerebrovascular diseases. Recent findings of improved lipoprotein quality and decreased TGs post-HCV cure, potentially reducing insulin resistance and cardiovascular risk, underline the importance of extended follow-up and larger studies to understand these long-term effects fully.

Figure 2. Summary of the main effects of HCV and its treatment on a patient's lipid metabolism. When a patient has HCV infection, the main organ affected is the liver (1). Infection causes some changes in lipid metabolism, especially a decrease in the number of VLDL and LDL particles (2). After treatment with DAA (3), the infection is cured. Due to the combination of liver healing and the direct effect of antivirals, there are changes in lipoparticles. There is an increase in serum LDL, HDL, and triglyceride particles (4). In addition, improved liver function reduces the triglyceride content of HDL particles (5). HDL particles can now better mobilize lipids from tissues, which can reduce pancreatic steatosis and thus improve insulin resistance (6).
Table 1. Principal clinical studies evaluating modifications in lipid profile after HCV treatment.
2024-03-27T15:12:29.036Z
2024-03-25T00:00:00.000
{ "year": 2024, "sha1": "292eb631ee79256af15b3cc1fdc8fab2674ca8db", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-0817/13/4/278/pdf?version=1711348798", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1b8c094b0262e0e904eb5ddecb9d01da2ff54136", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
240571088
pes2o/s2orc
v3-fos-license
Addressing Mathematics Anxiety through Developing Resilience: Building on Self-Determination Theory Mathematics-specific anxiety is anxiety that impedes mathematical thinking and progress, and creates distress for many learners, or at the least a tendency to avoid mathematical thinking. Such anxiety is prevalent. The importance of mathematics to economic recovery is well-established; in order to meet the need for mathematics, the high levels of mathematics anxiety that stand in the way of individual mathematical progress should be addressed. Using a case study involving an adult learner, we use Self-Determination Theory to explain why mathematical resilience is a concept which can work against anxiety and for a positive stance towards mathematics. Work on mathematical resilience demonstrates that well-informed, subject-specific interventions can help people manage emotions, including anxiety, and improve progress and uptake in mathematics. We illustrate ways in which the focus of Self-Determination Theory on meeting basic psychological needs (autonomy, competence and relatedness), to enhance wellbeing and prevent harm, provides grounding for much good practice in mathematics education and specifically for work in mathematical resilience. The tools of mathematical resilience go beyond what is currently proposed in SDT research. We illustrate ways in which these tools can specifically facilitate learners’ emotion regulation, which we propose is integral to mathematical learning competence, leading to greater mathematical wellbeing, learning, and release from mathematics anxiety. Introduction The two months-long lockdowns in the UK due to the COVID-19 pandemic in 2020/21 resulted in all schools being closed to most pupils. Widnall et al. (2020) found that, during this time of COVID-related school closures, anxiety amongst some school-aged children was significantly reduced. The inevitable conclusion is that some aspects of schooling may actually be harmful for children's well-being. As we, along with many others, consider the well-being of our young people of the highest priority, this report gives cause seriously to identify and tackle those aspects of school that are known to provoke anxiety, one of which is the environment in which mathematics is currently learned (Finlayson, 2014;Nardi & Stewart, 2003). Ashcraft and Krause (2007) have shown clearly that many aspects of commonly accepted mathematical pedagogy, such as an over-regard for speedy recall and an under-regard for understanding concepts, give rise to mathematical anxiety and avoidance which have debilitating consequences for learners in terms of their potential careers and economic activity (Johnston-Wilder et al., 2014). Mathematics-specific anxiety also has wider consequences: the world faces huge challenges that involve mathematical thinking, including climate change and economic injustice, but there is widespread underachievement in mathematics and mathematics-related subjects which may be attributable to mathematics anxiety and avoidance (OECD, 2013). This underachievement affects the supply of Science Technology Engineering and Mathematical (STEM) graduates adversely at a time when those with STEM expertise are much needed (Correia et al., 2010;EMSI, 2018). Recent UK government initiatives to improve mathematics education and increase the supply of people with mathematics qualifications (Gov.uk, n.d.) 
have not acknowledged how anxiety might interfere with that process (Ellis et al., 2016) or how mathematics anxiety and lack of confidence might be addressed. We believe that governments will continue to waste precious resources and continue to make the problem worse without substantial attention to how the prevalent development of mathematics anxiety in mathematics classrooms can be mitigated. In this paper, we propose addressing mathematics anxiety through developing mathematical resilience within the framework of Self-Determination Theory (SDT) (Ryan & Deci, 2018). We define mathematical resilience as "maintaining self-efficacy in the face of personal or social threat to mathematical well-being" (Johnston-Wilder & Lee, 2019). SDT is premised on the importance of meeting basic psychological needs to promote wellbeing, ensure psychological safety, and avoid psychological harm (Deci & Ryan, 2000;Ryan & Deci, 2018). SDT provides important justification for most premises of the work on mathematical resilience; in this paper, we illustrate with a case study how SDT explains mathematics anxiety as resulting from frustration of basic psychological needs. However, SDT does not in itself provide tools for action; here we show how the tools of mathematical resilience can be used to meet the basic psychological needs currently being thwarted within many mathematics classrooms and thus improve learners' mathematics-specific well-being and thereby their willingness to engage in mathematics and develop competence in learning mathematics. Introducing Jackie Research in mathematics classrooms supports the conclusion that basic needs as set out in Self-Determination Theory (SDT) (Ryan & Deci, 2018) are routinely thwarted and that ill-being results, particularly mathematics-specific anxiety (Durmaz & Akkus, 2016). To illustrate the harm that results from thwarting basic needs in mathematics and how that may happen, this paper draws upon data from one author, Sue, working with case study participant, "Jackie". Jackie's case also shows how historical harm can be addressed and the benefits that can accrue from doing so. Jackie is a middle-aged tutor of UK engineering apprentices. She contacted Sue as she wished to address her own "anxiety and avoidance, even hatred, towards mathematics" and had become aware of the work that Sue had been doing with her colleagues. Sue offered her as many one-to-one sessions as she needed at a mutually agreed venue. In these sessions, they would talk through her feelings towards mathematics and work on some straightforward mathematics together. When asked if her data could be used in a publication, she readily gave informed consent. Key Ideas from SDT Related to Learning Mathematics SDT sees people as having both agency and a natural tendency towards growth. The satisfaction of basic psychological needs gives rise to observable and meaningful positive consequences for learners' wellbeing and an opportunity to thrive. The frustration or deprivation of these needs gives rise to significant harm (Ryan & Deci, 2018), including the development of anxiety. There are three key basic psychological needs, according to Ryan and Deci (2018): Autonomy is the need to regulate actions in accordance with authentic interests and values. A person acting with autonomy (or with autonomous motivation) feels their actions are volitional, congruent, and integrated; a person acting out of "controlled regulation" experiences being controlled by external or internal pressure. 
An important aspect of autonomy is the ability to choose actions to meet the other two basic needs. Competence is the need to feel effective in interactions within important life contexts, and having the means to exercise, expand, and express capacities. When individuals are prevented from developing skills or understanding, their need for competence remains unmet; consequently, they are likely to experience feelings of failure and inadequacy. Relatedness is the need to feel valued, connected, and have a sense of belonging, experiencing others, and being experienced, as responsive, sensitive and caring. If the need for relatedness is not met, individuals can experience loneliness and abandonment, feel they do not fit in and, in a learning context, feel "stupid" in relation to others. In Jackie's case, the environment in which she was required to learn mathematics thwarted these basic needs in several ways and she saw the result as having a negative impact on her life choices. Here we explain how her needs were thwarted:

Autonomy thwarting. An example of an autonomy-thwarting learning environment (Niemec & Ryan, 2009) is the UK system of setting children commonly used in many schools. Setting is placing pupils in groups according to some perceived notion of ability. Learners are often not given any input into setting discussions or choice as to which set they join. This happened to Jackie, who reports being "demoted" to a group that was studying commercial mathematics for O-level: "there wasn't a discussion about it, I was just moved across". Further aspects of mathematics learning environments that led to Jackie feeling her autonomy was thwarted are the use of examination results as gate keepers and the lack of meaning or purpose attributed by teachers to the processes learned in mathematics.

Gate keeping. Students are required to achieve a prescribed grade in mathematics for entry into many courses and careers, including teaching and engineering. If circumstances prevent the achievement of that grade, then the route to that career appears closed. Working with engineering apprentices, Jackie recognised she might have found engineering an interesting choice of career; her poor examination results and disaffection with mathematics meant that she did not consider such a career possible when she left school.

Lack of meaning. Lack of meaning for Jackie was about not being able to make sense of something unless she understood its purpose; in common with many adults, she did not understand the purpose of "x" until after the sessions with Sue. Jackie said: I remember seeing x's on the board and it meant nothing. It meant nothing. I couldn't relate to an x for the life of me and I thought, well, whatever you do with that x it doesn't matter because I still don't know what x is. … And that's what was completely missing for me, really, because I think, yeah, if someone had sat me down and said, so maths is the language of engineering, and then went on to say what is engineering and why does that matter, or how do we build buildings, how do we create factories, all those things, I'd have seen that bigger picture and thought, ok. I see now why that's important.

Jackie's account is wholly from her own perspective; she reports with the addition of hindsight. Nevertheless, her story of feeling that algebra is presented as particularly meaningless is shared by many (see for example, Nardi & Steward, 2003).

Competence thwarting.
There is a prevalent message in society that some people are not good at mathematics, and never will be; that is, they are unable to develop feelings of competence in mathematics, therefore it would be a waste of time to try. This idea seems to be why setting is widely accepted and even advocated. Setting is an integral part of Jackie's description of the experiences that led to her mathematical anxiety and avoidance: I was identified as being particularly bad at [maths], so I was moved to a group that did commercial maths for O-level instead of straight maths … that was for people who struggled with the subject … I got a really bad grade … one of my lowest grades … a C or even a D.

The absolute "correctness" or otherwise of answers in mathematics can also leave learners open to the immediate perception of failure in a way that many other subjects do not. Failure may be experienced when making a mistake, not knowing what to do, or seeing others as quicker, more confident, providers of answers. Jackie says: I remember very little … I just remember consistently getting it wrong and not knowing why I was getting it wrong.

Even if strategies such as students being encouraged to ask questions, or for help, feature in a more supportive classroom setting, the learner may experience explanations that do not make sense. Jackie remembered experiencing: …the teacher using words to tell me why I was getting it wrong, but those words not resonating with me and how I learn or how I process information. So, I was feeling double bad about not understanding what they were saying as well as getting the maths wrong.

Experiences of learning mathematics may also involve incomprehensible artefacts: I remember having a slide rule … I was terrified of that slide rule. I never ever understood how that worked.

These reported episodes undermined Jackie's sense of competence in mathematics, and her sense of competence in learning mathematics.

Relatedness thwarting. Mathematics may be experienced as isolating, irrelevant, or a source of social shaming, all of which thwart any feelings of relatedness within the learning community. Being asked to display their work publicly seems to be a common feature of adult learners' recalled experiences and is often one which evokes intense fear. Many also talk of believing that they are isolated, alone in struggling or being left behind. Jackie remembers feeling comfortable and valued in English lessons, but not in mathematics lessons, perceiving that no-one ever took time with her individually. She was publicly told that she was wrong and felt too embarrassed to ask for what she wanted: an idea of why mathematics matters.

Jackie's account mentions many powerful feelings: shame, embarrassment, anxiety, feeling stupid, hating mathematics, and panic. The consequence of her difficult experiences with mathematics was that she perceived it as a threat and attempted to shut it out of her life. However, her negative feelings still emerge in the present: … just walking over here today, as lovely as you are and very approachable, I was quite anxious because it was to do with maths.

For many learners, like Jackie, experience of being in mathematics class in school, and having their basic needs for autonomy, competence and relatedness thwarted, has profound consequences in later life.
Need-Supportive Environments for Teaching and Learning Social environments, including learning environments, can support the satisfaction of basic needs and lead to positive functioning and development, or thwart the satisfaction of these needs thereby harming functioning and stifling development. Or they may fall between the two, supporting some basic needs, whilst thwarting, or ignoring, others. The SDT literature tends to focus on autonomy support, rather than dealing with the three basic needs separately. Autonomy is seen as fundamental in enabling individuals to be aware of their needs and their right to choose ways to meet them. Whilst needs can be identified separately, actions that support any one need, and in particular autonomy, are likely to impact on the satisfaction of others. An autonomy-supportive teacher would, for instance, display a sincere interest in the way students dealt with an exercise and ask them whether they need any additional help. In such a situation, students probably feel they have a say in how to proceed (autonomy satisfaction), are perhaps more likely to feel more confident to improve their skills (competence satisfaction) and feel understood by their teacher (relatedness satisfaction) (Haerens et al., 2015: p. 27). Ryan and Deci (2018) consider autonomy support to involve: enabling, actively encouraging, and valuing meaningful choices; identifying, developing, and supporting a person's interests; valuing their thoughts and feelings; encouraging self-regulation; and taking on their frame of reference. Choice of tasks is seen as autonomy supportive. Stroet et al. (2013) consider making available tasks that are interesting or important to students, and which also foster the idea of the value of mathematics, supports the development of autonomy in learners. They also see teachers showing respect to students, allowing criticism, and using informational language as further elements of autonomysupportive teaching. Teachers who identify, develop, and support learners' interests are also autonomy-supportive according to Haerens et al. (2015), who further suggest autonomy-supportive teaching practices include using inviting language, offering meaningful choices, and creating opportunities for initiative. Autonomy-supportive teachers show a sincere interest in how individual students deal with tasks and offer additional help when needed. According to Ryan and Deci (2018), they also give students opportunities to talk, listen to students, are responsive to students' comments and questions, acknowledge students' experiences and perspectives, make time for students' independent work, acknowledge signs of improvement and mastery, encourage effort, and offer progressenabling suggestions when asked by students who experience being stuck. However, autonomy is known to be thwarted in many learning situations, particularly when instructors teach by control, attempt to transmit information to passive and uninterested students, ignore student viewpoints or aim to make students think or behave in a prescribed way. Controlling teaching leads to need-frustration (Bonem et al., 2020). Stroet et al. (2013) clarify that teachers who disrespect, control, or intrude on students, give meaningless or uninteresting tasks, and suppress criticism act against their students' need for autonomy. Haerens et al. 
(2015) suggest that where students experience pressuring tactics such as punishment or shouting, or hear phrases such as "you have to", they are experiencing an autonomy-thwarting environment, which is supported by Cousins et al. (2019a). Such tactics also act against the need for relatedness and against the need for competence. In a controlling environment, students rarely have time to work independently on solving problems, or the opportunity to formulate their own answers, which Ryan and Deci (2018) see as necessary to develop students' need for competence.

Stroet et al. (2013) and Leon et al. (2017) characterize competence-supporting environments as involving optimal challenge, provision of structure through clear instructions and goals, guidance in ongoing activities, and the giving of feedback which focuses on the task, not the outcome or the student. Guidance on ongoing activities requires teachers to monitor work and offer help, support and encouragement when needed. Throughout, the competence-supporting teachers' attention will be on the process and on the giving of constructive informational feedback. Environments that thwart competence are overly challenging, inconsistent or discouraging. In competence-thwarting environments, feedback is in the form of praise or blame and is often focused on the person rather than their actions.

Research into "optimal challenge" shows that, given choice, children select, and rate as most interesting, those tasks which are one step ahead of their current ability level (Danner & Lonky, 1981; Lee & Johnston-Wilder, 2013). Choice seems to be significant here in meeting the need for autonomy, but it also hinges on competence. Students may choose less challenging tasks where their need for competence in mathematics has previously been routinely thwarted (Lee, 2016). In offering choice, teachers must steer a careful path between the energizing nature of sufficient perceived challenge and too much challenge, which can be fear- or anxiety-inducing. Meeting the need for relatedness will also be important, as working collaboratively seems to enable any perceived challenge to be met more readily (Johnston-Wilder & Lee, 2019).

A relatedness-supporting environment is personal and inclusive, and also features the caring support of others (Ryan & Deci, 2018). Stroet et al. (2013) recognise that many teachers are not in a situation to fully meet students' needs for relatedness, due to the size of classes and the organization of the learning environment. However, where teachers express interest in students' lives, and require everyone within the learning environment to treat one another with respect and care, they can generate feelings of belongingness even in large classes. Leon et al. (2017) consider that teachers can foster student relatedness by demonstrating trust, being available, and paying attention to feelings.

Building Mathematical Resilience

The concepts and attitudes of mathematical resilience are closely aligned with SDT and seem to offer a way to support both young people as they study mathematics and adults such as Jackie. Mathematical resilience is defined as "maintaining self-efficacy in the face of personal or social threat to mathematical well-being" (Johnston-Wilder & Lee, 2019). The fourth attribute of mathematical resilience, struggle (recognizing that engaging in struggle is part of learning mathematics and being able to persist and persevere in learning situations that might provoke anxiety (Williams, 2014)), is a necessary part of mathematical resilience due to the nature of mathematics and its learning.
This attribute is not as clearly resonant with current SDT thinking but can relate to both autonomy and competence. Many learners have inadvertently learned that to struggle reveals incompetence. Where teachers use quick-fix and path-smoothing routes to rapid, superficial "success", glossing over the understanding needed for autonomy and competence (Stigler & Hiebert, 2009), students learn that their experience of struggle is because they "can't do it", rather than that everyone has to struggle to learn mathematics. A key aspect of learning about the "struggle" needed to learn mathematics is learning to manage anxiety if it arises.

In developing mathematical resilience, we recognize the importance of facilitating emotion regulation, providing a framework within which learners are supported to notice and work with their feelings. In one study of emotion regulation, participants were told, prior to seeing a frightening movie clip, either to "take an active interest in their feelings" (integrated emotion regulation, IER), or to "do their best not to show their feelings", or to "try to adopt a detached and unemotional attitude". The participants given the IER instruction showed less fear and had better cognitive recall on a second viewing of the clip. Significantly, a very brief intervention was sufficient to prime IER, so participants experienced less negative emotion and greater cognitive ability.

Tools That Can Help Develop Mathematical Resilience

A key aspect of mathematical resilience is emotional regulation; a learner with mathematical resilience may still experience anxiety when working with mathematics but will have ways to regulate their response so that they can continue to deal calmly with issues as they arise. We have found that learners who have already developed mathematical avoidance and anxiety need expedients that can help them develop their ability to regulate their emotional responses. Here, we explore the effects of three tools which help learners make sense of their emotional experience when engaging with mathematics (Johnston-Wilder et al., 2020). These tools were offered to Jackie to give her specific actions and strategies for responding to difficult aspects of the emotional experiences she faced when thinking about mathematics. Below, each tool is illustrated using examples of how Jackie was introduced to the tool.

Hand Model of the Brain. In the first meeting, Sue introduced the hand model of the brain, based on Siegel (2010), to help Jackie reframe her reported sense of being "stupid" at mathematics and enable her to call a pause whenever needed. The pictures in Figure 1 represent this model. As she showed Jackie how to put her own hand into the two different positions, Sue described how the wrist in the model represents the brain stem, and the thumb the "primitive part of the brain", shared with all animals, represents the "alarm system". The back of the hand represents the cortex, which is shared with most mammals, and the fingernails represent the prefrontal cortex, where complex human thinking, involved in reading, writing, and arithmetic, takes place. Sue then connected the folded hand (as shown in Figure 1) with Jackie's experience. Jackie had described that on the way to the session she had experienced nervousness but that by telling herself "Oh it's okay, it'll be fine, Sue seems kind", she had managed to arrive at the session in a state of challenge rather than threat.
Sue suggested that Jackie's cognition had been regulating her alarm response and contrasted this experience with a threat situation in which the alarm system is not regulated, and the brain is effectively "trying to save your life". Sue explained that the system does not distinguish between physical threat and social threat. Anything that results in isolation from the immediate social group can be experienced as a fundamental threat to existence, and experiences that result in feelings of shame and humiliation are examples of social threat. Sue described a common experience in mathematical learning, in which a learner may experience their mind going blank or "feeling stupid" as the cognitive brain function being inhibited, a temporary state caused by feelings of social threat. In a state of threat, thinking goes offline whilst the brain alters body chemistry to prepare for life-saving fight or flight (Siegel, 2010). The cognitive brain function comes back online only when the perceived threat has been addressed. Jackie was asked to contrast this temporary state with the perceived permanent state of being "stupid". Such a threat is often hidden, arising from social and psychological conditions as well as physical ones. The message that the learner is not "stupid" in mathematics, but rather is panicking, thus inhibiting cognitive brain function, enables learners to reframe those times when they felt stupid about mathematics as temporary, rather than permanent, experiences (Williams, 2014).

Relaxation Response. The second tool Sue introduced was the relaxation response (Benson, 2000), which is a technique used in mindfulness meditation. The aim is to bring focus back to the present, taking a "step back" from overwhelming feelings of panic. Sue said: … If you're a zebra, you run away, and then you go back to chewing again, in the rest-and-digest state, where everything's calm and quiet (what I call the green zone). If you're a diver, then you need to know how to trigger that [response] because if you panic underwater, you use up too much oxygen and then you die, so divers learn explicitly to reduce their heart rate, to reduce the oxygen [use]. If you breathe in for five and out for seven, you are overriding the alarm because you're telling the brain [to activate] the parasympathetic nervous system, the rest-and-digest system.

Growth Zone Model. The third tool Sue used with Jackie gives a pictorial representation of ways in which a learning situation may be experienced. A diagram like the one in Figure 2 was created by Jackie and Sue working together as Sue explained the idea. Sue said: The third tool pulls it together ... [Sue draws a circle, Jackie colours it.] Everybody talks about this as the comfort zone, and sometimes you could say: "I'm going to step outside my comfort zone." What we don't talk about very often, and yet it's in the literature, is that there are two spaces outside the comfort zone. [Drawing a ring around the green circle.] There's the growth zone, where you find everything challenging and exciting and slightly unnerving, but you know that you are learning and it's an experience, although it's scary and [you might feel] nervous. … [Indicating the outside space and drawing an outer ring.] There's the red zone, panic … threat. When you go into a maths environment, your brain may perceive a threat, so you go straight into red … And then, with the relaxation response, you learn to get out either back to green or into orange, and then you can tackle any maths problem [in time].
After introducing the three tools, Sue talked to Jackie about a multiplication table and noticed, from Jackie's non-verbal response, that Jackie was feeling more anxious. Sue deliberately stopped at this point and encouraged Jackie to use the hand model of the brain to indicate she needed a pause, stressing that she (Sue) would stop until Jackie gave her permission to carry on. Jackie expressed how frightening she found numbers. When Sue asked if it was okay to draw the tiles in a bathroom instead, Jackie indicated that this was alright, because she could visualize the tiles, and Sue used this model to help Jackie create a multiplication table by counting tiles. The Outcome for Jackie Jackie's response to the intervention was very positive. In the first session, after the introduction of the first two tools, Sue asked Jackie how she was feeling. Jackie responded: I'm feeling much more relaxed and I feel as though I'm almost addressing something that was very unfair many years ago (laughs), because if I'd been open to this, had support with this at that time, my life could have been very different because I could have been embracing maths. At the beginning of the second session, Jackie commented: I don't feel embarrassed any more about my struggles with maths, because you made it sound completely ok, and that actually it's not really necessarily my fault. It's the fault of the system … it's made me feel more positive about it. And other things I struggle with, like the IT and things like that, it's made me feel similarly about that … I wouldn't say the anxiety has gone, because I still, when I see the little x's … But the embarrassment has gone. And this is a place of learning, and I want to learn. She also reported wanting to complete the multiplication table that she had started in the first session: I felt I wanted to complete that when I got home. And I couldn't actually believe that I was wanting to complete something [in maths] … who is this person? But I think, because I understood it and I knew I could do it, I wanted to do it. … those things haven't been in maths for me before. Jackie also reflected specifically on the three tools. To the hand model, Jackie responded by saying "That's transformational." During the second session, Jackie said: [The hand model of the brain] was very powerful, because I felt that it was absolutely ok for me to put my hand up at any point. I wasn't playing a game, where I was thinking I can't possibly put my hand up now and pretending to understand it. I was able to put my hand up, and that made a big difference, because that gave me the confidence to complete this. And I thought, I can connect with something, and that's completely ok, and that's only going to get better. So, I did like that. Because that was about me and how I felt, not how I thought I should feel. Jackie recognized the relaxation response from her generic use of mindfulness, but, importantly, would not have thought to apply it to her own issues with mathematics. About the growth zone model Jackie said: it made sense to me as well, because I'd thought quite a lot about the sense of wanting to be in the second zone. Because that's how you develop as a person, not just with maths, but with everything in life. And I think I'm probably quite an anxious person anyway, more than I originally thought. And so, to be in that safe but stretchy zone is good. But then, I can see how, with maths and some things, I tend to go straight from the safe zone to the red zone. There's no in-between. 
And I think, when I do that, that's when I back off and remove myself. The intervention clearly changed Jackie's relationship to mathematics. She reported ceasing to feel embarrassed, and, although still feeling anxious, was beginning to view mathematics as a challenge that she could deal with, rather than a threat. Discussion We consider that the impact of the above intervention was due to meeting Jackie's basic psychological needs, through the way Sue engaged in need-supportive teaching, including the use of the three tools, and by the way she worked with Jackie mathematically. Considering these need by need: Autonomy: Sue prioritized Jackie's choice as to how the intervention proceeded, respecting Jackie's feelings, and valuing her perspective and her needs as a learner. Competence: The tools described in the previous section supported Jackie's competence as a learner by giving her strategies for emotion management. Awareness that panic can lead to a temporary state of "stupidity" enabled Jackie to reframe beliefs about her ability. Sue consistently asked Jackie to do as much as she could, and when she made mistakes, Sue backtracked to strategies with which Jackie was more confident. Relatedness: Sue's responsiveness to Jackie was evidenced throughout the interaction, resulting in feelings of affinity. For example, Jackie described Sue as "lovely" and "very approachable", and described the first session as "interesting", "exciting" and "engaging". There is also evidence for the relevance of the SDT notions of optimal challenge, mindfulness and integrated emotion regulation. One implication of the SDT notion of optimal challenge, framed in terms of the growth zone model, is that, once learners are confident that they can manage to stay out of the red zone, and are given choice regarding the level of challenge that they choose, they are intrinsically motivated to choose optimal challenge. Jackie gave evidence of being motivated to take on further challenge, wanting to work towards the goal of learning mathematics. She said, "I have started by completing the maths times table … I also plan to buy an easy Sudoku magazine to help me start enjoying numbers rather than fearing them!" The relaxation response is connected to mindfulness, in that nurturing awareness of the present moment, either by focusing on breathing or by attending to the environment, enables a shift away from panic. Interestingly, Jackie had not perceived her previous understanding of mindfulness as a strategy to lessen her mathematics anxiety until Sue helped her make that connection. Jackie was already aware of the value of integrated emotion regulation when the study began. The mathematical resilience tools specifically enabled Jackie to move toward increasingly integrated emotion regulation in the mathematics learning situation. Instead of blocking out her experience of mathematics, Jackie was able to view and reinterpret it. Her reinterpretation of "feeling stupid" as "panicking" is particularly striking. The particular understanding of emotional responses to threat, and the concept of the three zones of safety, growth/challenge, and threat, integral to mathematical resilience but not to SDT, appeared to be crucial in providing Jackie with the specific means to reinterpret her experience. We propose the construct "mathematical learning competence" to describe the competence that enables learners to remain in the growth zone, the zone of optimal challenge, in which learning is greatest. 
Mathematical learning competence specifically involves an awareness of emotions and the tools, support, and strategies to manage those emotions when struggling to engage with mathematics (Mackrell & Johnston-Wilder, 2020). Conclusion We have shown how SDT is useful as a framework for exploring mathematics anxiety, and how it resonates with and supports the outplaying of many of the ideas integral to mathematical resilience. Both SDT and mathematical resilience recognise the importance of a need-supportive environment in promoting learning and preventing anxiety. Mathematical resilience offers tools that help learners to focus on and become aware of their emotions and physiological responses, interpreting these as information which empowers choices other than avoidance, as illustrated in this case study. We recommend that such tools are used to promote a coaching environment where mathematical anxiety can be recognised and overcome. If mathematics anxiety and its destructive power is recognised as a possible outcome of a traditional schooling in mathematics, rather than as a personal characteristic, and ways to deal with it are developed, then that will have benefits for the individual and will prevent them passing that anxiety on to others. We have much to learn from the lived experience of adults who have been affected by mathematics anxiety; we are particularly struck by the strong aspect of personal shame and embarrassment in accounts such as Jackie's. The change in attitude to mathematics that Jackie reported between the first session and the second is explained not only by the new awareness she gained through experiencing the tools, but also through making sense of a piece of mathematics in a more connected way, so that she experienced competence as a mathematical learner. When she was introduced to the relaxation response, Jackie recognised this as "mindfulness"-but she had not previously thought to apply mindfulness to her own mathematics anxiety. We suggest that mathematical learning environments are needed that build mathematical resilience and are therefore needs meeting rather than needs thwarting. If mathematics learning environments offer choice and a teacher who listens and is interested in the pupils and promotes a respectful environment, that will begin to meet learners need for autonomy. If the environments also offer optimal challenge and formative feedback which allow the learner to build their skills, then they will also begin to meet the learners' need to feel competence. The need for relatedness may be met through an environment that encourages collaboration and working together to overcome the obstacles that learning mathematics inevitably puts in the way of progress. If these are the features of all environments in which mathematics is learned, then mathematical anxiety will begin to diminish over time. However, until this ideal is reached, mathematics anxiety will still be present in society. Therefore, we further suggest that interventions to target mathematics anxiety based on the importance of meeting learners' psychological needs (Self-Determination Theory) but including specific responses to facilitate dealing with challenge and threat (mathematical resilience) are particularly important in a world that is increasingly demanding and uncertain and where mathematical understanding and thinking is vital. Interventions that give learners the experience of being liberated from mathematics anxiety may empower these learners to deal with other anxieties. 
Liberation from mathematics anxiety is also key to enabling learners to contribute to the understanding and expertise in STEM subject areas needed to meet current world-wide challenges.
2021-10-19T16:00:47.661Z
2021-09-06T00:00:00.000
{ "year": 2021, "sha1": "9ed4577d6647b5e5e9b5847703b7856118a46bb5", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=112110", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "8c84475535c85c2e1881e07e66c4dfd1f981c54a", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [] }
3706497
pes2o/s2orc
v3-fos-license
Distribution Policies for Datalog Modern data management systems extensively use parallelism to speed up query processing over massive volumes of data. This trend has inspired a rich line of research on how to formally reason about the parallel complexity of join computation. In this paper, we go beyond joins and study the parallel evaluation of recursive queries. We introduce a novel framework to reason about multi-round evaluation of Datalog programs, which combines implicit predicate restriction with distribution policies to allow expressing a combination of data-parallel and query-parallel evaluation strategies. Using our framework, we reason about key properties of distributed Datalog evaluation, including parallel-correctness of the evaluation strategy, disjointness of the computation effort, and bounds on the number of communication rounds. Introduction Modern data management systems-such as Spark [32,38], Hadoop [13,18], and others [19]-have extensively used parallelism to speed up query processing over massive volumes of data. Parallelism enables the distribution of computation into multiple servers, and thus significantly reduces the completion time for several critical data processing tasks. This trend has inspired a rich line of research on how to formally reason about the parallel complexity of join computation, one of the core tasks in massively parallel systems. Several papers [9,10,[22][23][24] have studied the trade-off between synchronization (number of rounds) and communication cost, and have proposed and analyzed known and new parallel algorithms [4,11]. Among these, the Hypercube algorithm [4,16] can compute any multiway join query in one round by properly distributing the input data. To reason about Hypercube-like algorithms, Ameloot et al. [6,7] recently introduced a framework that captures one-round evaluation of joins under different data distributions. Their framework implicitly describes a single-round parallel algorithm through a distribution policy, which specifies how the facts in the input relations are distributed among the servers. While for non-recursive queries a distribution policy defines a scalable parallel evaluation strategy, for Datalog programs this is typically not the case. For instance, a simple transitive closure query already requires that for each component of the input database there exists a server containing all facts of the component. To reason about Datalog evaluation in a distributed setting, we introduce a general theoretical framework that allows a combination of data and query parallelization strategies. The central concept in this framework is the notion of an economic policy. Our key observation is that, in order to deal with intensional predicates, we need to specify not only where a fact must be located to be consumed by a rule, but also where a fact must be produced by evaluating a rule of the program. An economic policy in our framework is defined as a pair of distribution policies: a consumption policy, which specifies the location of the facts that are used in the body of rules, and a production policy, which specifies the location of facts that appear in the head of a rule. The evaluation strategy that is implicitly defined by the data distribution must communicate any produced facts to the servers where they will be consumed, and thus can run over multiple rounds. Our framework is inspired by a rich line of research on parallel evaluation strategies for Datalog programs from the early 90's [16,35,36,39]. 
There, Datalog evaluation strategies are based on the idea of partitioning the instantiations of the program rules among servers by adding conditions to the bodies of the rules, called program restrictions. Some of the strategies proposed require no communication of intermediate (intensional) facts and thus can be completed in one round; other strategies require communication over multiple rounds. We show that an economic policy can capture several algorithms used for parallel evaluation of recursive and nonrecursive queries, including the Hypercube algorithm [4,16], and the decomposable strategies based on program restrictions [35]. In this framework we study several properties of economic policies. We first explore the property of parallel-correctness: when does an economic policy lead to a correct evaluation strategy? As can be expected, it is undecidable to show parallel-correctness for a general Datalog program, even for the simplest of economic policies. We therefore identify a sufficient condition: every minimal valuation of a rule must be supported by the policy. A rule valuation is supported if some server consumes all the facts in the body, and produces the fact in the head. For unions of conjunctive queries, this condition is also necessary, recovering the result of Ameloot et al. [7]; however, we show that even for non-recursive programs with intermediate relations, the condition is no longer required. To overcome the undecidability of parallel-correctness, we identify a general family of economic policies, called Generalized Hypercube Policies (GHPs), which are always parallel-correct, and further capture several commonly used parallel evaluation strategies. Second, we study the property of boundedness: can we decide whether a given economic policy terminates in k rounds, independent of the input size? We show that there exists a sharp increase in complexity as we move from k = 1 to k ≥ 2. For k = 1, we can succinctly characterize the structure of a policy that always terminates in one step. Additionally, given a GHP, we can do this in polynomial time in the description of the GHP. On the other hand, for k ≥ 2 it is undecidable to determine whether it terminates in at most k steps, even for a GHP. We then ask which Datalog programs admit economic policies that are bounded by one round: we show that such programs are characterized by a syntactic property called pivoting, which was also identified by Wolfson and Silberschatz [37] in the context of decomposable programs. The present paper is the full version of the extended abstract [21] and provides the missing proofs. Parallel Complexity The parallel complexity of Datalog was first investigated by Cosmadakis and Kanellakis [12,20]. Later work used the complexity class NC to theoretically capture which Datalog programs are efficiently parallelizable. Since Datalog evaluation is P -complete and the question whether P equals NC is a longstanding open problem, it is not known if every Datalog program belongs in NC, which implies that certain Datalog programs may not be significantly sped up through parallelism. Ullman and Van Gelder [33] showed that if a Datalog program has the polynomial fringe property, which says that every fact in the output has a proof tree of polynomial size, evaluation is in NC. Every linear Datalog program has the polynomial fringe property and is thus in NC. Afrati and Papadimitriou [3] showed that for simple chain queries (including non-linear queries) evaluation is either in NC or P -complete. 
Recently, Afrati and Ullman [5] studied the trade-off between communication and number of rounds. They describe a very restricted class of Datalog programs where it is possible to reduce the number of recursion steps (to a number that is logarithmic in the size of the input) without significantly increasing the communication cost.

Decomposability

The concept of predicate decomposability was first introduced by Wolfson and Silberschatz [37]. A predicate T is decomposable if there are r > 1 restricted copies P_1, P_2, ..., P_r of the Datalog program P (using arithmetic predicates) such that (i) the copies compute a partition of T for every input, and (ii) there exists an input instance where each copy will produce tuples over T. The main result is that decomposability is equivalent to pivoting for sirups where there are no constants, no repeating variables, and the sirup is linear or a simple chain rule. Here, a sirup is a Datalog program with one intensional predicate S and two rules: (i) a base rule S(x) ← B(x), and (ii) a recursive rule with head predicate S. A sirup is linear if S appears exactly once in the body of the recursive rule. Later works [35,36] redefine the concept of decomposability semantically. A Datalog program is decomposable if it is possible to partition the output domain (into at least two blocks) such that for every instance I, every output fact has a proof tree where all the intensional database facts belong in the same partition block. Wolfson and Ozen [36] show that deciding whether a given Datalog program is decomposable is undecidable. Cohen and Wolfson [35] provide necessary and sufficient syntactic conditions for decomposability for sirups where the arity of the intensional predicate is ≤ 2. They also define the notion of strongly decomposable sirups, where the partition must guarantee that, for some input, at least two blocks will produce a fact using the recursive rule of the sirup. Following the same line of work, Zhang et al. [39] present a more general framework that constructs partitionings of the rule instantiations. A related notion has also been studied by Ameloot et al. [8] in the context of connected Datalog programs.

Other Parallel Schemes

In addition to decomposability, several frameworks for parallel recursive processing were introduced in the early 90s [16,35,36]. Wolfson [35] generalizes decomposability to load sharing schemes, by allowing the output of a predicate to have overlap in the copies of the program P. Under a load sharing scheme, every linear program can be parallelized, even if it is not pivoting. In [15,16,36], general schemes are introduced that parallelize the evaluation by partitioning the set of rule instantiations, and allowing for communication among the servers (decomposable and load sharing schemes need no communication). Dewan et al. [14] propose similar techniques with dynamic adjustments, to balance the load of a computation. Our framework differs in that the set of rule instantiations is distributed implicitly among the servers, according to the production and consumption policies, and that the communication between servers is made explicit.

Systems

Recent work studies the implementation of Datalog (or fragments of Datalog) on modern shared-nothing distributed systems. Seo et al. [29] present a distributed version of a Datalog variant for social network analysis called Socialite; however, their framework requires that the user provides annotations to guide the distribution of data. Wang et al.
[34] implement a variant of Datalog on the Myria system [19], focusing mostly on asynchronous evaluation and fault-tolerance. The BigDatalog system [31] describes an implementation of Datalog on Apache Spark, but focuses mostly on linear Datalog programs that use aggregation. The task of parallelizing Datalog has also been studied in the context of the popular MapReduce framework [2,5,30]. Motik et al. [26] provide an implementation of parallel Datalog in main-memory multicore systems. Preliminaries We assume an infinite domain dom. A database schema σ is a finite set of relation names {R i } n i=1 with associated arities ar(R i ). We shall write R (k) to denote a relation R with arity k. A fact R(a 1 , . . . , a k ) is a tuple consisting of a relation name and a sequence of values from dom. We say that R(a 1 , . . . , a k ) is over schema σ , if R (k) ∈ σ . For a schema σ , we denote by facts(σ ) the complete set of facts over σ . An instance I over σ is defined as a finite subset of facts(σ ). We write I |σ to denote the subset of I containing all facts in I that are over schema σ . For i ∈ N, we abbreviate the set {1, . . . , i} by [i], and for a set S we denote by P(S) its powerset. Datalog We assume an infinite domain of variables var, disjoint from dom. An atom is a formula R(t 1 , . . . , t k ) consisting of a relation name and a tuple of terms; a term t i is either a variable from var or a constant from dom. A Datalog rule τ is of the form R(x) ← S 1 (y 1 ), . . . , S n (y n ), where R(x) is a single atom called the head of τ , denoted head τ , and all S i (y i ) are atoms called body atoms of τ , denoted body τ . We say that S i (y i ) is over schema σ , when S i ∈ σ and y i is a tuple of ar(S i ) terms. We say that τ is over schema σ if all its atoms are. We assume that Datalog rules are always safe, i.e., that all variables in the head occur in at least one body atom. By vars(τ ) we denote the set of variables in rule τ . A Datalog program P is a finite set of Datalog rules. A program P is said to be over schema σ if all its rules are. Particularly, by EDB(P ) ⊆ σ we denote the relation names occurring only in the body of rules, and by IDB(P ) ⊆ σ all other relation names occurring in P . We further distinguish the names in IDB(P ) by calling some of them output relations, denoted out(P ) ⊆ IDB(P ); all other intensional relations are auxiliary. We write σ (P ) to denote EDB(P ) ∪ IDB(P ). Consider the directed graph whose nodes are the intensional relation names, and there is an edge from S to S if S occurs in the head of some rule τ of P , and S in the body of τ . We say that P is recursive if the graph is cyclic; otherwise, we say it is non-recursive. A non-recursive Datalog program with only one rule is called a conjunctive query (CQ). Evaluation Semantics We define the evaluation semantics of Datalog programs as usual, through the immediate consequence operator. Let P be a Datalog program and I an instance over EDB(P ). A valuation v for rule τ ∈ P is a constant-preserving mapping of the terms in τ to values in dom. For a rule τ ∈ P and valuation v, we say that τ derives fact v(head τ ) over instance I if v(body τ ) ⊆ I . We refer to v(τ ) as the instantiation of rule τ with valuation v. We use T P to denote the immediate consequence operator for P , which applies all rules in P exactly once over a given instance and adds all derived facts to that instance. 
Formally, T_P(I) = I ∪ {v(head_τ) | τ ∈ P and v is a valuation for τ with v(body_τ) ⊆ I}. Then, P(I) is defined as the fixpoint reached after iteratively applying the immediate consequence operator over I. It is not difficult to see that T_P is monotone, and thus always reaches a fixpoint after a finite number of iterations. Moreover, the output of the query that P computes is defined as P(I)|_out(P). We refer to Abiteboul et al. [1] for a detailed description. We call a fact f P-derivable if f ∈ P(I) for some instance I, and P-consumable if during the evaluation of P on some instance I a rule instantiation v(τ) fires that requires f. Both notions naturally generalize to atoms and relation names, e.g., relation name R is said to be P-consumable if some P-consumable fact f exists with symbol R. Atom A is P-consumable if a rule instantiation as above exists, with A ∈ body_τ. Proof Theoretic Concepts Let T = (V, E) be a tree. By fringe_T we denote its leaves and by root_T its root vertex. All other vertices are called internal vertices. For a vertex n ∈ V we denote by children_T(n) the set of child vertices of n in T. We now recall the classical notion of proof tree [1]. A proof tree T for a fact f on instance I and Datalog program P is a tree T with vertices over facts(σ(P)), where fringe_T ⊆ I, root_T = f, and for every internal vertex g, there is a rule τ ∈ P and valuation v such that g = v(head_τ) and children_T(g) = v(body_τ). In this case, we shall say that T uses the instantiation of τ with valuation v. It is easy to see that P(I) consists of exactly those facts f for which a proof tree for f on I and P exists. We say that a rule instantiation v(τ) is useless if v(head_τ) ∈ v(body_τ); otherwise, we say that it is useful. W.l.o.g. we will consider only proof trees where all rule instantiations are useful. We say that a proof tree T is subsumed by proof tree T′ for P, denoted T ⪯ T′, if fringe_T ⊆ fringe_T′ and root_T = root_T′. The Framework Our framework considers a setting with p servers that share no memory and can communicate only via messages; this is commonly referred to as a shared-nothing parallel architecture. The set of servers forms a network [p] that we assume is fully connected. In order to define how computation is performed, we will use policies that specify how the data (input and output facts) are distributed over the network. We borrow the definition of a distribution policy from [7]: Definition 1 (Distribution Policy) A distribution policy P = (facts_P) over schema σ and network [p] consists of a function facts_P : [p] → P(facts(σ)) mapping servers to sets of facts over σ. Distribution policies are instance independent, i.e., they are oblivious of the specific database instance. Intuitively, a policy expresses on which servers a fact should reside if the fact is in the network, but not whether the fact is in the network. Henceforth, we slightly abuse notation and write P(f) to denote the set of servers responsible for f, i.e., P(f) = {i ∈ [p] | f ∈ facts_P(i)}. In contrast to [7], where the focus is on single-round query evaluation and policies that define only the initial data distribution over extensional database facts, we consider a multi-round setting that allows the communication of intermediate facts. Definition 2 (Economic Policy) An economic policy E over schema σ and network [p] is a pair (P, C) of distribution policies over the same universe U, where: - P is defined over IDB(P) and is called the production policy; and - C is defined over EDB(P) ∪ IDB(P) and is called the consumption policy.
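As an illustration of Definitions 1 and 2 (a minimal sketch, not part of the formal development), a distribution policy can be modelled as a function from facts to sets of servers, and an economic policy as a pair of such functions. The fact encoding and the concrete hash-based policies below are assumptions made only for the example.

```python
from typing import Callable, FrozenSet, Tuple

# A fact is encoded as a pair (relation name, tuple of constants),
# e.g. ("R", (1, 2)) stands for the fact R(1, 2).
Fact = Tuple[str, tuple]
# A distribution policy maps every fact to the set of servers responsible for it.
Policy = Callable[[Fact], FrozenSet[int]]

def hash_on_first_attribute(p: int) -> Policy:
    """Send every fact to the single server obtained by hashing its first
    attribute (servers are numbered 1..p)."""
    def policy(fact: Fact) -> FrozenSet[int]:
        _rel, args = fact
        return frozenset({hash(args[0]) % p + 1})
    return policy

def broadcast(p: int) -> Policy:
    """Replicate every fact on all p servers."""
    return lambda fact: frozenset(range(1, p + 1))

class EconomicPolicy:
    """A pair (P, C) of distribution policies, as in Definition 2."""
    def __init__(self, production: Policy, consumption: Policy):
        self.production = production    # where intensional facts may be produced
        self.consumption = consumption  # where facts are needed to fire rule bodies

if __name__ == "__main__":
    E = EconomicPolicy(production=hash_on_first_attribute(4),
                       consumption=hash_on_first_attribute(4))
    print(E.consumption(("T", (7, 3))))   # the single server that consumes T(7, 3)
```

Note that such policies are instance independent: they inspect only the fact at hand, never the rest of the database.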
A production policy describes which servers have the responsibility of producing a certain intensional database fact. A consumption policy describes which servers need an extensional or intensional database fact to satisfy the body of a rule instantiation. We say that a fact f is C-consumable if C(f) ≠ ∅ and that relation R is C-consumable if some fact over dom and R is C-consumable. A family of economic policies F is a set of economic policies over a common universe and schema. We say that a family F satisfies property P if all the policies in F satisfy P. Datalog Evaluation Modulo Policies Instead of letting a server compute the full program over its local instance, we restrict the evaluation process based on a server's economic policy. That is, for economic policy E = (P, C) and Datalog program P, the following sequential evaluation algorithm takes place on server i: - First, every rule τ ∈ P is annotated with policy-predicates as follows. For the head R(x), we add a predicate Policy^P_R(x) to the body of τ. Here, relation name Policy^P_R refers to the relation facts_P(i)|_{R}. - Second, for every atom S(y) in the body of τ, we add the predicate Policy^C_S(y), where now Policy^C_S refers to the relation facts_C(i)|_{S}. The added predicates may be infinitely large, but can be accessed through queries of the form "t ∈ facts_P(i)|_{R}?" or "t ∈ facts_C(i)|_{S}?". Throughout the paper, we assume the semi-naive evaluation strategy for Datalog programs. Semi-naive evaluation proceeds as usual over the annotated program: after each application of the fixpoint operator, the newly derived facts are added to a delta relation, and a rule instantiation is triggered only if at least one of its facts is in the delta relation from the previous iteration. We denote by P_E(I, J) the fixpoint instance when we execute P restricted to E on input I, with delta relations initialized with J. Distributed Evaluation Strategy We now present how an economic policy induces a parallel evaluation strategy. Our parallel model is the BSP-based Massively Parallel Communication Model (MPC) [25]. In this model, computation is performed over servers in a multi-round fashion. Each round has two distinct phases: a local computation phase, and a synchronized communication phase. Consider a Datalog program P, a network [p], and an economic policy E = (P, C). Moreover, let I be the input instance, which we initially assume to be partitioned arbitrarily over the p servers. Denote by local^0_i the initial local instance of server i. Let local^k_i be the instance on server i right after the k-th computation phase. At the k-th round (for k ≥ 1), we perform the following procedure: 1. Communication: Every server sends its facts as defined by the consumption policy C. That is, server i sends local fact f ∈ local^{k−1}_i to server j if (and only if) f ∈ facts_C(j). Let rec^k_i be the facts received by server i during the k-th communication phase. 2. Computation: Every server computes the local fixpoint: if k = 1, then local^1_i = P_E(local^0_i ∪ rec^1_i, local^0_i ∪ rec^1_i), that is, the delta relations are initialized with all locally available facts; for k > 1, local^k_i = P_E(local^{k−1}_i ∪ rec^k_i, rec^k_i), that is, only the newly received facts initialize the delta relations. Intuitively, the algorithm terminates when, after a round is finished, for every server all locally derived facts that need to be sent to some other server according to the consumption policy were already sent to these servers in an earlier round. Formally, for server i, we define the set F_i = {f | C(f) \ {i} ≠ ∅}. Intuitively, F_i represents all facts consumed by servers other than i itself. We say that server i has reached a local fixpoint state for E and P after round ℓ if local^ℓ_i ∩ F_i ⊆ local^{ℓ−1}_i, that is, if every locally available fact that some other server consumes was already present, and hence already sent, in an earlier round.
We say that the network [p] has reached a global fixpoint state for E and P after round k, if all servers i ∈ [p] have reached a local fixpoint state after round k. Notice that this condition is as desired, because every round goes into the communication phase first, then into the local computation phase. Hence, all earlier sent messages have been taken into account. One could imagine a smarter communication procedure that incorporates Datalog semantics as well. For example, a server does not need to send a local fact f ∈ facts_C(j) to server j if for every input I server j is guaranteed to already have f in its local instance. However, it is in general undecidable to make such a decision (see Lemma 2). For an instance I, let [P, E](I) denote the union of all facts over out(P) found at any server after reaching the global fixpoint. Notice that the above evaluation strategy always reaches a fixpoint, due to monotonicity of Datalog. Example 1 Consider the left-linear transitive closure program, given by the rules T(x, y) ← R(x, y) and T(x, y) ← T(x, z), R(z, y). For any function h : dom → [p], we define the economic policy (P_1, C_1) with C_1(R(a, b)) = [p] and P_1(T(a, b)) = C_1(T(a, b)) = {h(a)}. This policy works as follows: it replicates the extensional database facts everywhere, and then produces/consumes each fact T(a, b) at server h(a). It is easy to see that the economic policy correctly computes the transitive closure. In fact, the evaluation always terminates in a single round. Consider a different policy (P_2, C_2), which again takes any function h : dom → [p] and which has the following definition: C_2(R(a, b)) = {h(a)}, C_2(T(a, b)) = {h(b)}, and P_2(T(a, b)) = [p]. This policy does not replicate the extensional database facts, but it hash-partitions them according to the first attribute. Whenever a server discovers a new fact, the new fact has to be consumed at the location determined by the hash of the second attribute. Observe that the production policy is [p] because we do not know where each fact will be produced (in other words, each server will produce as many intensional database facts as possible from its local input without any restrictions). We will see later in Section 6 that all the above economic policies belong in a specific family of policies that we call Generalized Hypercube Policies (GHPs). We notice that our framework supports evaluation strategies that are oblivious of the instance: each fact is communicated, consumed, and produced independent of whether other facts are in the same local instance or not. Lastly, we note that monotonicity of Datalog ensures monotonic behaviour of economic policies for Datalog programs, as made formal by Proposition 1. Proposition 1 For every Datalog program P, every economic policy E, and all instances I ⊆ I′, we have [P, E](I) ⊆ [P, E](I′). For the proof, we first extend the concept of proof tree for Datalog programs to annotated proof trees for Datalog evaluation with economic policies. For program P, economic policy E, instance I, and fact f, an annotated proof tree T is a proof tree for P, I, and f, where, additionally, every node g in T has a label server_T(g). For non-leaf nodes we assume the following constraint: g ∈ facts_P(server_T(g)), and children_T(g) ⊆ facts_C(server_T(g)). We also assign to all nodes in T a number round_T(g), which is obtained through the following iterative argument: For leaf nodes g in T, round_T(g) = 1. For all nodes g for which all nodes in children_T(g) have already a number assigned, let max_g = max_{g′ ∈ children_T(g)} round_T(g′) and let L = {g_1, . . . , g_k} ⊆ children_T(g) be exactly those child nodes with round_T(g_i) = max_g. Now, we define round_T(g) = max_g if server_T(g_i) = server_T(g) for all g_i ∈ L, and round_T(g) = max_g + 1 otherwise.
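Before relating annotated proof trees to evaluations, the round-based strategy itself can be made concrete. The following self-contained sketch simulates it for the left-linear transitive closure program of Example 1 under the hash-partitioning policy (P_2, C_2): R-facts are consumed at the server determined by their first attribute, derived T-facts at the server determined by their second attribute, and production is unrestricted. The hash function, the stopping test, and the Python encoding are simplifications assumed for the illustration.

```python
from itertools import product

P = 3                              # number of servers, numbered 0..P-1 for simplicity
h = lambda value: value % P        # illustrative hash function h : dom -> servers

def consumption(fact):
    """C_2 of Example 1: R(a,b) is needed at server h(a), T(a,b) at server h(b)."""
    rel, (a, b) = fact
    return {h(a)} if rel == "R" else {h(b)}

def local_step(i, facts):
    """One application of the transitive-closure rules at server i, restricted by
    the consumption policy: only facts that server i may consume appear in bodies."""
    usable = {f for f in facts if i in consumption(f)}
    derived = set()
    for rel, (a, b) in usable:
        if rel == "R":
            derived.add(("T", (a, b)))                    # T(x,y) <- R(x,y)
    for (r1, (a, c)), (r2, (c2, b)) in product(usable, usable):
        if r1 == "T" and r2 == "R" and c == c2:
            derived.add(("T", (a, b)))                    # T(x,y) <- T(x,z), R(z,y)
    return derived

edb = {("R", (0, 1)), ("R", (1, 2)), ("R", (2, 3))}
local = {i: set() for i in range(P)}
local[0] |= edb                    # the input may start anywhere; here, all on server 0

rounds = 0
while True:
    rounds += 1
    # communication phase: every fact is shipped to the servers that consume it
    inbox = {i: set() for i in range(P)}
    for i in range(P):
        for f in local[i]:
            for j in consumption(f):
                inbox[j].add(f)
    # computation phase: every server computes its local fixpoint
    changed = False
    for i in range(P):
        before = len(local[i])
        local[i] |= inbox[i]
        while True:
            new = local_step(i, local[i]) - local[i]
            if not new:
                break
            local[i] |= new
        changed |= len(local[i]) > before
    if not changed:                # simplified stopping test: a round that adds nothing
        break

answer = {f for i in range(P) for f in local[i] if f[0] == "T"}
print(f"{rounds} rounds; T = {sorted(t for _, t in answer)}")
```

On this small chain the simulation needs a number of rounds that grows with the length of the chain, whereas the replicating policy (P_1, C_1) of Example 1 finishes after a single round.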
Intuitively, an annotated proof tree encodes possible runs in the evaluation of P over I using E. More specifically, T encodes an upperbound on the moment where a fact is derived during the evaluation. More formally: Lemma 1 For a Datalog program P , economic policy E, instance I , and fact f , the following implications hold: f ∈ [P , E](I ) implies existence of an annotated proof tree T for P , E, I and f . If f is derived by E in round i on server s, then T exists with round T (root T ) = i and server T (root T ) = s. 2. Existence of annotated proof tree T for E, P , I and f implies f ∈ [P , E](I ). More specifically, f is derived on server server Proof (1) The proof is by induction on the round in which f is derived. Clearly, after round 1, all facts residing in the network have a desired annotated proof tree. The proof then proceeds by induction, assuming that condition (1) of the lemma holds up to ≤ k rounds, for some k. Now suppose that f is derived at round k + 1 on server s. The latter means that some proof-tree T for f , P and rec k s ∪ local k−1 s exists. We and set for all facts g in T , server T (g) = s. Since for all leaves g in T there exists a desired annotated proof-tree T with round T [T ](g) ≤ k (by the hypothesis), we can simply attach these to T . It is now easy to see that round T (g) ≤ k + 1, for all nodes g in T . Hence, the proof-tree is as desired. (2) By definition of annotated proof-tree, particularly due to the constraints on server T , fact f becomes derivable on server s during the computation of E over I . We only need to show that this happens in round at most round T (f ). The proof is by induction on round T (f ). Clearly, if round T (f ) = 1, then all facts g in T are marked round T (g) = 1, and therefore, server T (g) = s. The latter means that all leafs of T where present on node s after the first communication phase, and thus either f ∈ I , or (because T is also a valid proof tree for f ), f has been derived on node s in the first computation phase. Assume now that condition (2) holds for i ≤ k (induction hypothesis). Suppose round T (f ) = k+1. By the induction hypothesis, all facts g in T with round T (f ) ≤ k have been derived at some node in round ≤ k. Now it is easy to see that the top-fragment T of T (i.e., all the subtree of all facts marked with round T (g) = k + 1 and their immediate children), describes a proof-tree for f and P on node s. Let j be the earliest communication round after which all leaf nodes have reached server s. Since all leaf-nodes have been derived in round ≤ k (by the hypothesis), and the semantics of E and T guarantees arrival of these facts on server s after the next communication phase, we have that j ≤ k + 1. It is now easy to see (due to T ), that f will be derived on server s in computation round j ≤ k + 1. This concludes the proof. Since for instances I ⊆ I , every annotated proof tree T for E, P , and I is trivially also an annotated proof tree for E, P , and I , Proposition 1 is now a corollary of Lemma 1. Parallel-Correctness An economic policy for a Datalog program does not necessarily lead to the desired output. For example, if the production policy maps every fact onto the empty set of servers, then the execution will generate only empty intensional database relations. Henceforth, we are only interested in economic policies that generate the expected output. Definition 3 (Parallel-correctness) An economic policy Parallel-correctness is in general undecidable, even for simple classes of policies. 
For instance, consider the class of policies, where P (f 1 ) = P (f 2 ) and C(f 1 ) = C(f 2 ), whenever f 1 , f 2 are facts with the same relation symbol. We call this class of policies value-independent, denoted E indep , since the facts are mapped to servers only according to the relations they belong to. Value-independent policies allow a succinct representation by simply enumerating the intensional relation names of P and the subsets of [p] where each relation is assigned. We consider the following decision problem. Proof The proof is by a reduction from the Datalog containment problem, which is well-known to be undecidable [1]. Let P 1 and P 2 be two arbitrary Datalog programs given as input for the containment problem. We assume that both are over the same nullary output relation name, say O. We first denote by P * i an indexed version of program P i ; particularly we define P * i as P i in which all intensional relation names are annotated with index i. We now construct a program P by taking all rules from P * 1 and P * 2 , and adding the rules O() ← O i (), for i ∈ {1, 2}. We note that edb(P ) = edb(P * 1 ) ∪ edb(P * 2 ) and out(P ) = {O}. As economic policy we take E = (P , C) over the 2-node network {1, 2}. The consumption policy maps all facts with index i to server i. The production policy maps all facts with index i to server i, and the fact O() to server 2. The extensional database facts are consumed on all servers. Intuitively, programs P * 1 and P * 2 are computed locally on server 1 and server 2. It thus follows from the construction that ( †) P 1 (I ) ∪ P 2 (I ) ⊆ [P , E](I ), for every instance I . Notice that rule O() ← O 1 () is never used, since server 2 cannot consume facts over relation names with index 1. It remains to show that E is parallel-correct for P if and only if P 1 ⊆ P 2 . Indeed, if P 1 ⊆ P 2 , then P (I ) = P 2 (I ) for every instance I , which implies that the policy will compute the correct result for O. The other direction follows from monotonicity of P . From ( †) it follows that this condition is satisfied if and only if all facts over the O relation produced by P (I ) are also produced by In fact, the above proof yields an even stronger result: Lemma 2 Let P be an arbitrary Datalog program and E = (P , C) an economic policy over σ that is parallel-correct for P . Let f ∈ facts(σ ) and C be a consump- Proof We simply observe that the economic policy E = (P , C) in the proof of Proposition 1 has this property. Indeed, updating C(O 1 ()) = {1} to C(O 1 ()) = {1, 2} makes the policy trivially parallel-correct for P . Despite the above results, we can present some syntactic conditions that are necessary for parallel-correctness, and some that are sufficient. We say that an economic policy E supports a proof tree T if all the rule instantiations in T are supported. Lemma 3 Let P be a Datalog program and E an economic policy. If a proof tree T for P is supported by E, then for every instance I , with fringe T ⊆ I , we have Proof The proof is by induction on the depth d of T . Particularly we show using a simple inductive argument that root T ∈ local k i , for some server i and k ≤ d, which implies root T ∈ [P , E]. Recall that local k i denotes the facts residing locally on server i after the k-th computation round. As base case let d = 1, meaning that T describes a single rule instantiation. After the first communication round, all servers j have local 0 j ∪ rec 1 j ⊆ I ∩ facts C (j ). 
By the assumption that E supports T , it follows that for some server i, thus after the first computation round, root T ∈ local 1 i . For d > 1 we observe that root T and its children in T define a rule instantiation (τ, v), and, by the assumptions of the lemma, this rule instantiation is supported by E. More specifically, some server i exists where root T ∈ facts P (i) and children T (root T ) ⊆ facts C (i). Further, for all facts f ∈ children T (root T ), the respective subtree T f of T with root f is supported by E and with depth d − 1. By the induction hypothesis it follows that for all these facts f there is a server j and We now have a characterization for parallel-correctness of a program P w.r.t. an economic policy. For this, let f ∈ P (I ), which means that a proof tree T exists with fringe T ⊆ I and root T = f . Particularly, by the assumption of the lemma we can choose T so that it is also supported by E. It now follows from Lemma 3 that f ∈ [P , E]. (Only if) We assume (P , C) is parallel-correct for P . Let T be an arbitrary proof tree. The proof is by construction following the derivation of root T using E. First, from parallel-correctness it follows that P (I ) = [P , E](I ), for any instance I . Here we take I = fringe T , implying root T ∈ [P , E](I ). The proof now continues by induction on the number of rounds needed for E to derive root T . The induction hypothesis is that if k rounds are needed to derive root T , then a supported proof-tree of depth k subsumed by T exists. As a base case suppose k = 1. That is, root T ∈ local 1 i , meaning that root T ∈ P E (local 0 j ∪ j rec 1 j ) for some server j . Particularly, a valuation v and rule τ ∈ P existed with v(body τ ) ⊆ facts C (j ) ∩ I and v(head τ ) = root T , which means that the corresponding rule instantiation is supported by E. Here, the proof tree admitted by (τ, v) is as desired. For k > 1 the proof is analogous, but now we take as proof tree the tree obtained by concatenating the rule instantiation with the proof trees for each child. Existence of the latter follows from the induction hypothesis. As the number of rounds decreases by one in each inductive step, and the fringes of the obtained trees cannot have other facts than does in I , the constructed proof tree is as again as desired. We consider various categories of economic policies based on which rule instantiations are supported for a given Datalog program P : ∈ v(body τ ). N ess P : the set of all essential rule instantiations of P . An instantiation of rule τ with valuation v is essential if for some P -derivable fact f and instance I , every proof tree T for f on I and P has a vertex g with g = v(head τ ) and v(body τ ) ⊆ children T (g). If the program is non-recursive, then N use P = N all P , since there will be no rule that contains the same relation in the head and the body. We also have: Before giving a proof, we first show the following Lemma. Lemma 4 For every proof tree T of depth d, there exists a proof tree T T of depth at most d that uses only minimal and useful rule instantiations. Proof The proof is by induction on the depth of T , which we denote d. For the base case, let d = 1. Then, T corresponds to a single rule instantiation (τ, v) for P where all the facts in v(body τ ) are extensional database facts. By definition, there is also a minimal rule instantiation (τ , v ), with v (head τ ) = v(head τ ) and v (head τ ) ⊆ v(body τ ), which admits the desired proof tree. As induction hypothesis we take the statement of the lemma. 
Now, for the induction step, suppose T has depth d > 1. Then, the root of T , together with its children, defines a rule instantiation (τ, v) for P . Now take an subsumed minimal instantiation (τ , v), such that v (head τ ) = v(head τ ) and v (body τ ) ⊆ v(body τ ). For every fact f ∈ v (head τ ), let T f be the subtree of T with root f (child of root T ). By the induction hypothesis, there is a proof tree T f T f with depth ≤ d − 1 that uses only minimal rule instantiations. The proof tree that combines instantiation (τ , v ) with T f for all f ∈ v (τ ) is as desired. Proof of Proposition 3 The containment N ess P ⊆ N use P is straightforward, since a proof tree does not use any useless rule instantiations. We next show that N ess P ⊆ N min P . Suppose that we have an instantiation of rule τ with valuation v that is essential. Then, there exists some fact f and instance I for which every proof tree T has a vertex g with g = v(head τ ) and v(body τ ) ⊆ children T (g). By Lemma 4, we can pick this tree such that it uses only minimal rule instantiations. This implies that the rule instantiation with head g and body children T (g) is minimal. Hence, the instantiation with head v(head τ ) and body v(body τ ) is also minimal. The following example demonstrates the different types of rule instantiations. Example 2 Let P be the left-linear transitive closure program from Example 1; consider a rule instantiation of the recursive rule : T (a, b) ← T (a, c), R(c, b), for some (not necessarily different) constants a, b, c. We distinguish the following cases: c = a: in this case, the instantiation is not minimal, since we can derive the same head fact from the instantiation T (a, b) ← R(a, b) of the first rule. c = b: in this case, the instantiation is useless, since T (a, b) also belongs in the body. Depending on which types of rule instantiations are supported by an economic policy, we can define different types of policies. An economic policy that supports all possible rule instantiations, that is, N all P , is said to be strongly supporting for Datalog program P . Proposition 4 Let P be a Datalog program and E an economic policy. If E supports all minimal and useful rule instantiations in P , then it is parallel-correct. If E is parallel-correct for P , then it supports all essential rule instantiations. Proof The first item follows from Proposition 2 and Lemma 4. For the second item, consider a parallel-correct policy E and an essential instantiation of rule τ with valuation v. By the definition of essential, for some fact f and instance I , every proof tree T for f on I and P has a vertex g with g = v(head τ ) and v(body τ ) ⊆ children T (g). By Proposition 2, there must exist such a tree T that is supported. This implies that there exists server s with v(head τ ) = g ∈ facts P (s) and v(body τ ) ⊆ children T (g) ⊆ facts C (s). Hence, the essential rule instantiation is indeed supported. Proposition 5 Let P be a Datalog program where each intensional relation name occurs only in the head of rules (i.e., P is a union of CQs). Then, N ess Proof Because P is not recursive, N use P = N all P ; hence, because of Proposition 3 it suffices to show that N min P ⊆ N ess P . Indeed, consider a minimal instantiation for rule τ with valuation v, and consider the instance I = v(body τ ) and fact f = v(head τ ). Take any proof tree T for f on I and P ; T must have depth one. Because of the minimality of the rule instantiation, it must be that children T (f ) = v(body τ ), which proves the essentiality. 
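In practice, the support condition behind Proposition 4 is easy to test for a concrete rule instantiation once the policy is given explicitly: some single server must produce the head fact and consume every body fact. The sketch below (reusing the illustrative fact and policy encoding of the earlier sketches, an assumption for this example) implements this test together with the usefulness test; minimality is omitted, since deciding it is computationally harder, as discussed below.

```python
from typing import Callable, Set, Tuple

Fact = Tuple[str, tuple]
Policy = Callable[[Fact], Set[int]]      # a distribution policy: fact -> set of servers

def is_supported(head: Fact, body: Set[Fact],
                 production: Policy, consumption: Policy) -> bool:
    """A rule instantiation is supported if a single server both produces the head
    fact and consumes every fact of the body."""
    candidates = set(production(head))
    for fact in body:
        candidates &= set(consumption(fact))
    return bool(candidates)

def is_useful(head: Fact, body: Set[Fact]) -> bool:
    """An instantiation is useless when it re-derives a fact of its own body."""
    return head not in body

if __name__ == "__main__":
    produce = lambda fact: {1, 2}                          # toy production policy
    consume = lambda fact: {hash(fact[1][0]) % 2 + 1}      # toy consumption policy
    head: Fact = ("T", (0, 2))
    body: Set[Fact] = {("T", (0, 1)), ("R", (1, 2))}
    print(is_supported(head, body, produce, consume), is_useful(head, body))
```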
Together with Proposition 4, the above proposition implies that a Datalog program where the body of each rule contains only extensional database relations is parallelcorrect if and only if it supports every minimal rule instantiation, or equivalently if and only if it supports every essential rule instantiation. Notice that this class of Datalog programs corresponds to a program that computes a set of UCQs, and thus the above result captures the characterization of parallel-correctness for CQs and UCQs in [7,17]. We should emphasize here that [7,17] consider only economic policies where P assigns every fact to every server, while a general economic policy can assign facts to any subset of servers. For general Datalog programs, N ess is not true anymore, and thus supporting essential instantiations is not a sufficient condition for parallelcorrectness, even if P is non-recursive. (Recall that non-recursiveness is a syntactic condition, and that all such programs are straightforwardly rewritable to UCQs.) Example 3 Consider the following non-recursive Datalog program P : and take the rule instantiation with head U() and body {V (), R(a, b), S(c, d)}. Assume that c = d. This rule instantiation is minimal, but we will show that it is not essential. For the sake of contradiction, assume that it is essential. Then, for some instance I there exists a proof tree T for U() on I and P such that there exists a vertex U() R(a, b), S(c, d)} ⊆ children T (U ()). Since the proof tree contains the fact V (), it also contains a rule instantiation that derives the fact V () with body {R(a , b ), S(c , d ), S(d , c )} for some constants a , b , c , d . We can now construct two proof trees for U() on the same instance, as seen in Fig. 1. Because c = d, one of the facts S(c , d ), S(d , c ) must be different from S(c, d) (In Fig. 1 we assume this fact is S (d , c )). Thus, for one of the two trees, the children of U() will not be a subset of {V (), R(a, b), S(c, d)}. This implies that the rule instantiation we considered is indeed not essential. T (x, y) ← R(x, y), is trivially minimal, useful and essential. As for the recursive rule, we showed in Example 2 that an instantiation that is minimal and useful is also essential. Observe that if this instantiation is only minimal but not useful, or only useful and not minimal, it is not essential. Thus, both properties are necessary to guarantee essentiality. We conclude this section by commenting on whether it is computationally feasible to test the different properties of rule instantiations. It is easy to see that given an instantiation, it is possible to check whether it is useful in polynomial time. The complexity for checking the minimality of a rule instantiation is CONP-complete [7]. Unfortunately, testing essentiality of a rule instantiation is undecidable. Proposition 6 Testing essentiality of rule instantiations, as well as whether for a given rule an essential instantiation exists, is undecidable. Proof We first show the latter. The proof is again by a reduction from the Datalog containment problem. For this let P 1 and P 2 be programs serving as input, with output predicate O (k) . As before, let P * 1 and P * 2 be the indexed versions of these programs. Now the question whether P 1 ⊆ P 2 reduces to the question whether some essential rule instantiation for O(x) ← O 1 (x) exists. Indeed, if P 1 ⊆ P 2 , this cannot be the case, since a proof tree over {O} ∪ σ P 2 will always exist. 
If P 1 ⊆ P 2 , then some I and t exist, with O 1 (t) ∈ P 1 (I ), O 2 (t) ∈ P 2 (I ). Then, it is easy to see that all proof trees T with root T = O(t) contain the instantiation O(t) ← O 1 (t), which is thus essential. Next, we show that essentiality of rule instantiations is undecidable. More precisely the proof is by contradiction. We show that, if essentiality of rule instantiations is decidable, then testing whether an essential instantiation for a given rule exists is decidable as well, which contradicts the earlier obtained result. The algorithm relies on the observation that positive Datalog programs (without function symbols) are C-generic, with C being the constants occurring in P . Thus, if a rule instantiation is essential, all isomorphic instantiations (where values from C are preserved) are essential. Clearly there are only finitely many distinct instantiations (up to isomorphisms). For given rule τ ∈ P , one can thus iterate over the above defined equivalence class, choose from each a specific instantiation, and test whether the chosen instantiation is essential. An essential instantiation is found if and only if the rule has an essential instantiation. Generalized Hypercube Policies In this section, we present a general class of economic policies, called Generalized Hypercube Policies (GHP), which encompass a broad variety of evaluation strategies. We first give an intuitive explanation. The formalism of GHPs relies on the Hypercube partitioning for CQs [4], which has been shown to provide guarantees on the communication-cost for CQ evaluation [9]. Let P = {τ } be a CQ with k distinct variables. Hypercube conceptually orders the p servers as a hypercube H = [p 1 ] × [p 2 ]×· · ·×[p k ], with i p i = p, where every dimension p i ≥ 0 corresponds to a variable x i from the query; every server is assigned a unique point in the space H ; and every variable x i is associated to a hash function h x i : dom → [p i ]. Then, a fact R(a 1 , . . . , a r ), matching with atom R(y 1 , . . . , y r ) ∈ body τ , is sent to all servers whose coordinate in the dimension of H associated to variable y i is equal to h y i (a j ), for all j ∈ [r]. Then, program P is computed on each server over the data at hand. For GHPs, we associate to every rule a hypercube over the full p-server network, and intuitively define the consumption policy so that "a fact is consumed at server i if and only if one of the considered Hypercube specifications would send it to server i", and "a fact is in the production policy of server i if and only if one of the Hypercube specifications would derive it on server i". GHP Parameters Let P be a Datalog program, and assume we have a network [p]. A GHP for P defines a finite set of k-dimensional hypercubes H 1 , . . . , H , for some parameter k. We note that the assumption that every hypercube has the same number of dimensions is without loss of generality, since we allow the range [1] for dimensions. The range of the dimensions of the hypercubes are parametrized by a matrix of dimensions × k with entries p i,j , such that k i=1 p j,i = p, for each j ∈ [ ]. Each hypercube is then defined as H j = [p j,1 ] × [p j,2 ] × . . . [p j,k ]. For each hypercube H j , we also define a bijective mapping map j that assigns to every point in H j a server s ∈ [p]. The latter thus provides the mapping between conceptual servers in the cube and real servers in the considered network. 
A GHP policy next assigns each rule τ to exactly one of the hypercubes: let χ : P → [ℓ] be the function that encodes this assignment. Given this assignment, a GHP defines a mapping ρ_τ : [k] → P(vars(τ)) that maps each dimension of the hypercube H_{χ(τ)} to a subset of the variables that appear in τ. Finally, the GHP defines for each dimension i ∈ [k] and each hypercube H_j a hash function h^j_i that maps subsets of dom of size at most the largest size of ρ_τ(i) (for any τ with χ(τ) = j) to a value in the i-th dimension. For hash functions that accept non-empty sets, we require that they are surjective. Notice that our concept of hash function is a generalization of the hash functions used in, e.g., the Hypercube algorithm, where every dimension is associated with a single variable. This generalization makes it possible to scatter tuples over a row in a more fine-grained way than is possible via a single variable. Further, we notice that, by definition, rules that use the same hypercube also use the same hash function for each dimension of that hypercube. GHP Semantics Let f be a fact and suppose that f = v(A), for some valuation v and atom A = R(y) that appears in rule τ. We define the following set of servers: S^τ_{f,A} = {map_{χ(τ)}(q) | q ∈ H_{χ(τ)} and q_i = h^{χ(τ)}_i(v(ρ_τ(i))) for every dimension i with ∅ ≠ ρ_τ(i) ⊆ vars(A)}. Intuitively, S^τ_{f,A} denotes the set of servers whose coordinate q is consistent with the hash mappings specified for τ. Notice that if the atom R(y) has only a part of the variables that correspond to some dimension i, then facts are broadcast over dimension i, as also happens if none of these variables are in y. The consumption policy C(f) is defined as the union over all sets S^τ_{f,A} for rules τ and atoms A ∈ body_τ with instantiation f. The production policy P(f) is similarly defined as the union over all sets S^τ_{f,A} for rules τ and atom head_τ with instantiation f. Example 5 Consider the Datalog program depicted in Fig. 2. We choose two hypercubes H_1, H_2 (ℓ = 2) with dimension k = 2. The first two rules τ_1, τ_2 are mapped to the hypercube H_1, and the third rule τ_3 is mapped to H_2. We choose the dimensions of the hypercubes such that p_{1,1} · p_{1,2} = p, p_{2,1} = p, and p_{2,2} = 1. The two functions map_1, map_2 map the points of H_1, H_2 respectively to {1, . . . , p} in a one-to-one fashion. Finally, ρ maps the variables of each rule to the dimensions of its hypercube. Consider the first two rules (which form the left-linear TC example), and assume that p_{1,1} = 1 and p_{1,2} = p. Then, the resulting GHP is equivalent to the hash partitioning policy that we described in Example 1. Notice that since we use the same hypercube for both rules, the extensional database relation R will be hash partitioned only once. If we now change the dimensions to p_{1,1} = p, p_{1,2} = 1, we obtain the decomposable policy of Example 1 that broadcasts the extensional R to every server and can terminate in a single round. Apart from the above two GHPs, we can also define other GHPs by configuring different dimensions of the hypercube H_1. For example, we can choose p_{1,1} = p_{1,2} = √p. We next show that GHPs are strongly supporting policies. Proposition 7 Let P be a Datalog program. Every GHP E for P is strongly supporting for P and, as a consequence, parallel-correct for P. Proof To show that E is strongly supporting, consider some rule τ ∈ P, and its instantiation w.r.t. some valuation v. Consider some atom A = R(y) in the body of τ; then the consumption policy says that its instantiation f = v(A) will be consumed in the set S^τ_{f,A}. Similarly, if A is the head, the fact f will be produced in S^τ_{f,A}.
Now observe that the intersection ⋂_{A∈τ} S^τ_{v(A),A} contains at least the server map_{χ(τ)}(q), where q is the point of H_{χ(τ)} whose i-th coordinate equals h^{χ(τ)}_i(v(ρ_τ(i))). In other words, there will be at least one server in ⋂_{A∈τ} S^τ_{v(A),A}, which means that every instantiation of the rule τ will be strongly supported. GHP Families Since we do not want to consider an encoding mechanism for hash functions, which is necessary to formally reason about properties of GHPs, we introduce the concept of GHP families. Given a Datalog program P and network [p], a GHP family F is defined as the set of GHPs over P and [p] that all have the same parametrization for P, map_j, χ, and ρ_τ. In other words, policies in F can differ only with respect to the choice of hash functions, and for every choice of hash functions, the associated GHP is in the family. By F_GHP we denote the class of all GHP families. Bounded & Disjoint Evaluation In this section, we ask two main questions: First, can we reason about the number of rounds that an economic policy needs to compute a Datalog program? Second, can we constrain the number of servers that derive a copy of the same fact? We start with a formal definition of boundedness. Definition 5 (Boundedness) An economic policy E for Datalog program P is bounded if some constant k exists such that, for every instance I, the network reaches a global fixpoint for E and P when round k is finished. We say E is ℓ-bounded if k ≤ ℓ. First, we remark that setting ρ_τ to map to the empty set for all rules τ does not eliminate communication. Indeed, economic policies always send facts to all servers that may need the fact according to the policy, independently of whether the fact is already known by the target server. In other words, the responsibility to decide whether a fact is sent lies entirely with the sender. There is no trivial way to provide a 1-bounded economic policy. Second, one should not confuse the number of rounds in the parallel computation with the number of iterations of semi-naive evaluation. Nevertheless, as the following proposition shows, boundedness of the Datalog program implies boundedness of the evaluation. Proposition 8 If P is a bounded Datalog program, then every parallel-correct economic policy E for P is k-bounded, for some constant k that depends on P. Proof We use the following claim: (†) if we run E on an instance of bounded size, then E will finish its evaluation in a bounded number of rounds. The result now follows from boundedness of P and Proposition 1. Boundedness of P implies that some constant exists such that for every instance I and fact f, f ∈ P(I) implies the existence of a proof tree with depth no more than the bound. We observe that a bound on depth implies also a bound on fringe size. Now, for arbitrary f and I with f ∈ [P, E](I), we observe that f ∈ P(I), due to parallel-correctness, and thus, due to boundedness of P, that some proof tree T with bounded fringe exists. Then, it follows from (†) that the number of rounds of E on the instance consisting only of the fringe is bounded, and due to parallel-correctness of E, that f ∈ [P, E](fringe_T). Since this observation holds for all f ∈ [P, E](I), it follows from Proposition 1 that the number of rounds of E over the whole instance is also bounded. It remains to show (†). The crucial observation is that, in all but the last computation round, at least some fact is communicated in the network that has not been communicated in any earlier round.
Indeed, only new derivations can trigger a next communication round, and when a fact is received it will trigger new derivations only if it is not already known by the receiving server. Since the instance is bounded, the active domain (of this instance) is bounded, and thus the number of facts that can be introduced during the evaluation is bounded as well. The result follows. Surprisingly, there exist economic policies for bounded Datalog programs that are not bounded. However, due to Proposition 8, such policies cannot be parallel-correct. Example 6 Consider the following bounded program. T (x) ← A(x). T (x) ← B(x), T (y). We construct a network with p > 1 servers. Consider a policy that consumes T (i) and B(i) at server (i mod p) + 1, and produces T (i) at server (i mod p). Every tuple in A is consumed at server 1. Now, consider the following input instance: B(1), B(2), . . . , B(p − 1)}. It is easy to see that T (0) is produced at server 1 at round 1, T (1) is produced at server 2 at round 2, and so on, until T (p − 1) is produced at round p at server p. In the remainder of this section, we focus on pure Datalog (PureDatalog). A Datalog program is pure if it is free of constants and variables occur at most once in every atom [28]. We emphasize that this definition prohibits a variable from occurring on multiple positions in an atom, but that a variable can still occur in multiple (distinct) atoms of a rule. We note that, for a program P in pure Datalog, every fact over a P -consumable (P -derivable) relation itself is P -consumable (P -derivable). We consider the following decision problems. Since the proof is by reduction to an economic policy that is either 2-bounded or not bounded et all, it follows that BOUNDEDNESS Proof The proof is by a reduction from the undecidable containment problem for Datalog programs. Let P 1 , P 2 be two Datalog programs with the same distinguished nullary output predicate O that serves as input. As before, we annotate the relation names of both programs P 1 and P 2 with index 1 and 2, respectively, and denote the obtained programs by P * 1 and P * 2 . We now construct program P over the schema: (1) , O (2) , E (2) }, by combining the rules from P * 1 , P * 2 , and those mentioned below. First we add rules Adom(x j ) ← X(x 1 , . . . , x α ) for every relation X (α) ∈ σ (P * 1 ) ∪ σ (P * 2 ) ∪ {E (2) } and j ∈ [α]. Further, we add: Notice that new relation E is an extensional database relation, while O and Adom are intensional database relations. Next, we define a GHP H. For this take a single 1dimensional cube of p = 2 servers, say cube 1, and define χ(τ ) = 1 for all rules in P . For rules in P * 1 and P * 2 , as well as the Adom producing rules, we define ρ τ (1) = ∅. For rule τ 1 we again define ρ τ 1 (1) = ∅, for τ 2 and τ 3 we define ρ τ 2 We claim that P is 2-bounded if, and only if, P 1 ⊆ P 2 . Otherwise, a GHP exists in H for which the number of rounds depends on the size of the input, particularly on the size of relation E. (If) Let E = (P , C) be an arbitrary economic policy from H. We observe that after a single round, programs P * 1 and P * 2 , as well as relation Adom are fully computed on both servers. (The latter is due to our choice ρ τ (1) = ∅ for the involved rules.) During the first round, O-facts may be produced by rule τ 2 . After this first round, several facts will be communicated, particularly the C-consumable facts derived with rules from P * 1 and P * 2 , as well as the facts with relation name Adom and O. 
Since these relations are computed on both servers, no server receives a new fact (particularly due to P 1 ⊆ P 2 ). Hence, the fixpoint is reached and no further communication steps are needed. (Only if) Since P 1 ⊆ P 2 , some instance I exists, with O() ∈ P 1 (I ) and O() ∈ P 2 (I ). We convert instance I to an instance for P , by annotating the relations with respective index, and add a relation E, with the chain E(0, 1), E (1, 2), . . . , E(m − 1, m) for some integer m. We define a specific GHP from H. For this, let T I be the transitive closure relation of E I . As hash function, we choose h 1 1 ({i}) = imod2. We note that, by the choice of h 1 1 , E(0, 1) is consumed at server 1, E(1, 2) is consumed at server 2, E(2, 3) is consumed again at server 1, etc. We observe that server 1 derives the fact O(0, 1) in the first round and sends it to server 2. Then, server 2 can derive (in the second round) the fact O(1, 2), based on O(0, 1) and the fact E(1, 2) which it had already received in an earlier round. Now, a straightforward inductive argument shows that server imod2 receives fact O(i − 2, i − 1) for the first time in round i, and thus that we need (m) rounds to reach a fixpoint. So for m large enough we need more than k rounds. Proof The proof is again by a reduction from the undecidable Datalog containment problem. Given two Datalog programs P 1 , P 2 over single output relation, which serve as input for Datalog containment. We construct program P by taking all rules in P 1 , where all IDB relations are annotated by index 1, and all rules in P 2 , where IDB relations marked with index 2. Here we assume that O 1 is the output predicate for P 1 , and O 2 for P 2 . We add the following rules, with fresh relation names {D i | i ∈ [k]}: And for each i ∈ {1, . . . , k − 1} we add the rule: We take an economic policy E = (P , C) over a two-node network. The consumption and production policies are defined as follows: -All relations with index 1 are consumed and produced by server 1; -All relations with index 2 are consumed and produced by server 2; -All relations D i with even i are produced at server 2 and consumed at server 1; -All relations D i with odd i are produced at server 1 and consumed at server 2; and -Relation D k is produced at server 2 (even if k is odd). Next, we show that E is k-bounded if and only if P 1 ⊆ P 2 . (Only if) Suppose P 1 ⊆ P 2 . Then let I be an instance with O 1 () ∈ P 1 (I ) and O 2 () ∈ P 2 (I ). In the first round, server 1 derives fact D 1 (), which needs to be communicated in the next round (round 2) to server 2. In round 2, server 2 receives D 1 and produces D 2 , which needs to be communicated in the next round (round 3) to server 1. Since server 1 and server 2 cannot produce facts for relations D i in another way than via rule D i+1 () ← D i (), they are deemed to continue this exchange of facts till D k is produced (at round k) and received by its consuming server (at round k + 1). Policy E is thus clearly not k-bounded. (If) Suppose P 1 ⊆ P 2 . On every instance I , server 1 computes P 1 (I ), and server 2 computes P 2 (I ). We distinguish between three cases: If P 2 (I ) is empty, then the network fixpoint is reached immediately after the first round. If P 1 (I ) is empty and P 2 is not, then the fact D k () is derived at server 2 and may have to be send to server 1 in the round (if k is even). Since D 1 is not derived, and will not be derived after receiving D k (), the network fixpoint is reached after at most two rounds. 
The more interesting case is when P_1(I) is not empty. Then server 1 derives fact D_1(), which triggers the consecutive exchange of D_i() facts between the two servers as described in the only-if case of the proof, except that, when receiving fact D_{k−1}() (in round k), the fact D_k() is already known by its consuming server (i.e., server 1 if k is even, server 2 otherwise). Therefore, the network fixpoint is reached already in round k, which concludes the proof. Result (4) follows from the syntactical characterization shown in the next subsection. Towards this characterization, we first give a general characterization of 1-boundedness for strongly supporting policies. Let P be a Datalog program and E = (P, C) an economic policy. We denote by P* the policy obtained by removing from every P(f) any server s for which no rule instantiation v(τ) exists with v(head_τ) = f, v(body_τ) ⊆ facts_C(s), and v(body_τ) being all P-derivable. Intuitively, P*(f) removes those servers that are allowed to produce f, but cannot do so due to limitations of the consumption policy C. Notice that if E = (P, C) is strongly supporting for P, then so is (P*, C), since we have not removed the support of any rule instantiation. Proposition 11 Let P be a Datalog program and E = (P, C) a strongly supporting economic policy for P. E is 1-bounded if and only if for every P-derivable intensional database fact f: (1) |C(f)| ≤ 1; and (2) if C(f) ≠ ∅, then C(f) = P*(f). Proof (If) All intensional database facts derived during the distributed evaluation are P-derivable. Consider a rule instantiation (τ, v) that is fired on some server s and produces fact f = v(head_τ). Then, condition (1) tells us that |C(f)| ≤ 1. If |C(f)| = 0, then f is not consumed anywhere and thus will not be communicated. If |C(f)| = 1, condition (2) tells us that C(f) = P*(f). But since s ∈ P*(f), this implies that C(f) = {s}. Hence, s is the only server that consumes f, and f does not have to be sent to another server. Thus indeed E is 1-bounded. Notice that extensional database facts are never communicated after round 1. (Only if) Suppose that E is 1-bounded. Let f be a P-derivable fact. Since E is strongly supporting, it is parallel-correct; thus f is derived at some server s over some instance I in round 1. In particular, s ∈ P*(f). If C(f) ⊈ {s}, then f needs to be communicated by s, which enforces another round and contradicts 1-boundedness. Hence, C(f) ⊆ {s} and |C(f)| ≤ 1. Assume C(f) = {s} and suppose that there exists some s′ ∈ P*(f) \ {s}. Then, over some instance J, f is derived at s′ in round 1, and then needs to be communicated to s, which again contradicts 1-boundedness. Weakly Pivoting GHPs We present a necessary and sufficient syntactic condition for 1-boundedness of GHP families. Here, for atom A and set of variables X ⊆ vars(A), we denote by pos_A(X) the set of positions in A containing variables from X. Definition 6 (Pivoting Relation) A relation R is pivoting for GHP family H if for every two atoms A_1, A_2 (in rules τ_1, τ_2 respectively) over R, and for all dimensions i of cube χ(τ_1) with p_{χ(τ_1),i} > 1: (1) ∅ ≠ ρ_{τ_1}(i) ⊆ vars(A_1); (2) χ(τ_1) = χ(τ_2); and (3) pos_{A_1}(ρ_{τ_1}(i)) = pos_{A_2}(ρ_{τ_2}(i)). Intuitively, if R is pivoting, then every rule that sends R tuples will send each R tuple to exactly one server, and the rules agree on this server. Definition 7 (Pivoting/Weakly pivoting) We say that a GHP family is pivoting (weakly pivoting, resp.) for P if all (all P-consumable, resp.) intensional database relations are pivoting. The program from Example 7 is weakly pivoting.
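The conditions of Definition 6 are purely syntactic, so they can be verified by iterating over pairs of atoms of each relation. The following sketch performs this pairwise check on the left-linear transitive closure program; the encoding of rules and of the GHP parameters (χ, the cube dimensions, and ρ) is an assumption made for the illustration, and the check follows the three conditions as stated above.

```python
from typing import Dict, List, Set, Tuple

Atom = Tuple[str, Tuple[str, ...]]       # relation name and variable names
Rule = Tuple[Atom, List[Atom]]           # (head, body)

def positions(atom: Atom, xs: Set[str]) -> Tuple[int, ...]:
    """pos_A(X): the positions of atom A holding a variable from X."""
    return tuple(i for i, v in enumerate(atom[1]) if v in xs)

def atoms_over(rules: List[Rule], rel: str):
    """All occurrences of relation `rel` in the program, with their rule index."""
    for t, (head, body) in enumerate(rules):
        for atom in [head] + body:
            if atom[0] == rel:
                yield t, atom

def is_pivoting(rel: str, rules: List[Rule], chi: List[int],
                dims: Dict[int, List[int]], rho: List[Dict[int, Set[str]]]) -> bool:
    """Pairwise check of the three conditions: every atom over `rel` hashes on a
    non-empty subset of its own variables, all rules involved use the same cube,
    and the hashed positions coincide, for every dimension of size > 1."""
    occurrences = list(atoms_over(rules, rel))
    for t1, a1 in occurrences:
        for t2, a2 in occurrences:
            if chi[t1] != chi[t2]:
                return False
            for i, size in enumerate(dims[chi[t1]]):
                if size <= 1:
                    continue
                x1 = rho[t1].get(i, set())
                if not x1 or not x1 <= set(a1[1]):
                    return False
                if positions(a1, x1) != positions(a2, rho[t2].get(i, set())):
                    return False
    return True

if __name__ == "__main__":
    # left-linear transitive closure over one 1-dimensional cube of p servers,
    # with every rule hashed on the variable x (the first attribute of T)
    tc: List[Rule] = [(("T", ("x", "y")), [("R", ("x", "y"))]),
                      (("T", ("x", "y")), [("T", ("x", "z")), ("R", ("z", "y"))])]
    print(is_pivoting("T", tc, chi=[0, 0], dims={0: [4]},
                      rho=[{0: {"x"}}, {0: {"x"}}]))       # True: T is pivoting
```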
For pure programs we can test whether a GHP family is weakly pivoting in polynomial time, since we need to go over all P -consumable intensional database relations (for pure programs, these are all relations occurring in the body of a rule), and then for each such relation R test all pairs of atoms over R. This observation, along with the proposition below-that shows that weakly pivoting is a necessary and sufficient condition for 1-boundedness-implies that deciding 1-boundedness for GHP families is indeed in PTIME. Proposition 12 Let P be a pure Datalog program, and H a GHP family. Then, H is 1-bounded for P if and only if it is weakly pivoting for P . Proof (If) Let E = (P , C) be an arbitrary economic policy in H that is weakly pivoting for P . We show that H is 1-bounded for P by making use of Proposition 11. For this, let f be an arbitrary P -derivable fact. We first deal with the case when f is not P -consumable. Due to pureness of P , the latter implies that no rule in P exists with a body atom that can match with f . It follows from the definition of GHP that f is not C-consumable (i.e., |C(f )| = 0) and thus that the conditions in Proposition 11 are true for f . For the remainder of this direction of the proof we have to deal only with the case when f is P -derivable and P -consumable, which implies |P (f )| ≥ 1, and |C(f )| ≥ 1. Recall that a server s is in C(f ) iff there is a rule τ ∈ P , atom A ∈ body τ , and valuation v, with v(A) = f . Analogously, server s is in P (f ) iff there is a rule τ ∈ P and valuation v, with v(A) = f for A = head τ . Moreover, per definition 6 for weakly pivoting programs, in both cases server s is uniquely determined (for given τ and A) by the parameters χ(τ ) and pos A (ρ τ (i)), for all dimensions i of χ(τ ). (We ignore map χ(τ ) , which is fixed for χ(τ ).) Next, we show that |C(f )| = 1. For this, arbitrary elements s 1 , s 2 ∈ C(f ). Then, due to Definition 6 it follows directly that s 1 = s 2 . Hence, indeed |C(f )| = 1, which corresponds to condition (1) of Proposition 11. The argument for condition (2) of Proposition 11, that s 1 = s 2 for every pair of servers s 1 ∈ P (f ) and s 2 ∈ C(f ) (which implies P (f ) = C(f )) is analogous. We thus conclude from Proposition 11 that E is 1-bounded. Then, from the generality of the argument it follows that H is 1-bounded. (Only If) Let R be an arbitrary consumable intensional predicate name in P . We argue that R is pivoting by showing conditions (1), (2), and (3) from Definition 6 by contraposition. Notice that all facts over R are both P -consumable and P -derivable, due to pureness of P and because R is intensional. First suppose that (1) fails for some τ 1 , i, and atom A 1 over R. If A 1 is a body atom it follows immediately that all facts f over R are replicated in the construction for C over dimension i, which implies |C(f )| ≥ 2 and thus contradicts with 1boundedness due to Proposition 11. Now assume A 1 is the head of τ 1 . If ρ τ 1 (i) = ∅ it follows that all rule instantiations for τ 1 are replicated over dimension i, and thus that P * (f ) > 1 for all facts f matching the head of τ 1 . Since such a fact f is C-consumable and p i > 1 (which implies |C(f )| ≥ 2, this again contradicts 1boundedness. For the case where ρ τ 1 (i) = ∅, a similar argument holds: Take x ∈ ρ τ 1 (i) \ vars(A 1 ) and consider two valuations mapping all variables on the same value, except for x. 
We can now chose the hash functions for ρ τ 1 so that both rule instantiations are fired on different servers (due to p 1 > 1), and thus again |P * (f )| > 1, for some C-consumable fact f , which contradicts 1-boundedness. For condition (2), χ(τ 1 ) = χ(τ 2 ) allows choosing valuations for τ 1 and τ 2 that agree on the A 1 and A 2 (due to pureness), and then hash functions can be chosen so that both are fired on different servers. Since all matching facts are C-consumable, this would contradict 1-boundedness (since |P (f )| > 1 implies P (f ) = C(f )). We remark that Proposition 12 cannot be easily generalized. For example, one cannot replace GHP families by strongly supporting policies, since then facts f that are not P -consumable may still be C-consumable (i.e., C(f ) = ∅). Reasoning about the latter requires a concrete representation mechanism for policies. (See also [7] for a discussion on this matter.) Further, it is unclear what the complexity becomes for testing 1-boundedness under general (not necessarily pure) Datalog, since then it is required to reason about P -derivability of facts. Example 8 For an example showing that not every 1-bounded GHP is weakly pivoting, consider the following non-pure Datalog program P : T (x, y) ← R(x, y). T (x, y) ← T (z, x), R(z, y). and GHP family H over a single one-dimensional cube 1. Let map 1 be the identity mapping, χ(τ ) = 1 and ρ τ (1) = {x} for all rules τ . Clearly, H is not weakly pivoting. Nevertheless, it can be shown that H is 1-bounded, which follows from the observation that only single-valued rule instantiations can satisfy under P . Weakly Pivoting Datalog We have so far looked at whether a given GHP family is 1-bounded. In this section, we ask: which Datalog programs admit a 1-bounded policy? , x 2 , x 3 ) and b = (1, 3) Definition 8 (Pivot Base) Let P be a Datalog program, and let σ ⊆ IDB(P ). Let β be a function that takes as input some relation name R ∈ σ and outputs a non-empty tuple with values in [ar(R)]. We say that β is a pivot base for σ if: -For every rule τ ∈ P and for every pair of atoms A Datalog program P is pivoting (weakly pivoting, resp.) if it has a pivot base for all relations in IDB(P ) (for all relations in IDB(P ) that occur in the body of some rule in P ). Here, there are two intensional database relations, but only T occurs in the body of a rule. The pivot base β from before is still a pivot base for {T }; hence the program is weakly pivoting. However, there is no pivot base for to {T , U}, which means that the program is not pivoting. The concept of pivoting Datalog was first introduced for single rule programs [35] and then generalized to full Datalog [28] where it is called generalized pivoting. The definition in [28] is based on a rather complex argument over fractional weightmappings, but relates to pivoting in that every generalized pivoting Datalog program is pivoting for all intensional database relations. For pure Datalog these notions are equivalent. The proposition below shows that for pure Datalog, a weakly pivoting program admits a weakly pivoting (and thus 1-bounded) GHP family. Proposition 13 Let P be a pure Datalog program and p ≥ 2. There is a 1-bounded GHP family for P if and only if P is weakly pivoting. In the below propositions, we show slightly stronger results. We start with the if-direction of Proposition 13, which follows from Proposition 12 and the below Proposition 14. 
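As an illustration of pivot bases, the sketch below checks a candidate base against a program, using the reading that, within every rule, all atoms over relations covered by the base must agree on the variables sitting at the designated positions; both this reading and the encoding (with positions 0-indexed, while the text counts from 1) are assumptions made for the illustration.

```python
from typing import Dict, List, Tuple

Atom = Tuple[str, Tuple[str, ...]]
Rule = Tuple[Atom, List[Atom]]                  # (head, body)
PivotBase = Dict[str, Tuple[int, ...]]          # relation name -> designated positions

def vars_at(atom: Atom, pos: Tuple[int, ...]) -> Tuple[str, ...]:
    """The variables of an atom at the given (0-indexed) positions."""
    return tuple(atom[1][i] for i in pos)

def is_pivot_base(beta: PivotBase, rules: List[Rule]) -> bool:
    """Within every rule, all atoms over relations covered by beta must agree on
    the variables at their designated positions (the reading assumed here)."""
    for head, body in rules:
        covered = [a for a in [head] + body if a[0] in beta]
        for a1 in covered:
            for a2 in covered:
                if vars_at(a1, beta[a1[0]]) != vars_at(a2, beta[a2[0]]):
                    return False
    return True

if __name__ == "__main__":
    tc: List[Rule] = [(("T", ("x", "y")), [("R", ("x", "y"))]),
                      (("T", ("x", "y")), [("T", ("x", "z")), ("R", ("z", "y"))])]
    print(is_pivot_base({"T": (0,)}, tc))   # True: the first attribute of T is preserved
    print(is_pivot_base({"T": (1,)}, tc))   # False: the second attribute changes
```

For the left-linear transitive closure program, the first position of T is such a base, matching the 1-bounded hash-partitioning policies discussed earlier, while the second position is not, since it changes in the recursive rule.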
Henceforth, for an atom A with relation symbol R and a tuple b of integers from [ar(R)], we use the notation vars A [b] to denote the set of variables in A on the positions defined by b.

Proposition 14 Let P be a pure and weakly pivoting Datalog program. For every p there is a weakly pivoting GHP family for P over [p].

Proof Take a weak pivot base β for P. We construct a GHP family H over network [p] by considering a single cube, cube 1, with only one dimension. We choose χ(τ) = 1 for every τ ∈ P, and take map 1 to be the identity mapping from the coordinates of cube 1 to the servers in [p]. Now, for rules τ ∈ P having no atom with an associated pivot base, we define ρ τ (1) = ∅; for all other rules we define ρ τ (1) = vars A [β(R)], with A an atom of τ whose relation symbol R has an associated pivot base. It is easy to see that H is indeed weakly pivoting.

For the only-if direction, we introduce the concept of straggler. Let P be a Datalog program and E a strongly supporting economic policy for P over [p]. We call s ∈ [p] a straggler for an intensional relation R if either s ∈ C(f) for all facts f of R or s ∈ P*(f) for all facts f of R. A straggler is thus a server that consumes or produces an entire relation. The only-if direction of Proposition 13 then follows from Proposition 15 and Proposition 16, which are given below.

Proposition 15 Let P be a pure Datalog program. If H is a weakly pivoting GHP family for P over a network where p ≥ 2, then every E ∈ H is without stragglers for P-consumable intensional database relations.

Proof To show that E has no stragglers, we recall that for a weakly pivoting policy (a) |ρ τ (i)| > 0 for every rule τ and every dimension i of the cube χ(τ), and (b) for every atom A over a P-consumable intensional relation, ρ τ (i) ⊆ vars(A). Condition (b) implies that the facts from P-consumable intensional relations are consumed and produced at just one server. Condition (a) implies that the hash functions used by E are surjective and thus that, for every intensional P-consumable relation R, for at least some pair of facts f and g over R we have C(f) ≠ C(g), and analogously, for some pair we have P*(f) ≠ P*(g). In other words, not all facts over R are consumed or produced on the same server, which proves the desired result.

Proposition 16 Let P be a pure and not weakly pivoting Datalog program. Then, every strongly supporting economic policy that is 1-bounded has a straggler for some consumable intensional database relation in P.

In the remainder of this section, we prove Proposition 16. For this, we introduce the notion of policy key for an economic policy E = (C, P) and Datalog program P. Let R be some intensional database relation and γ a tuple of integers in [ar(R)]. Then γ is called a policy key for R in E if, for all facts f, g over R, f[γ] = g[γ] implies P*(f) = P*(g) = {s}, for some server s, and C(f) = C(g) = ∅ or C(f) = C(g) = {s} (with s denoting the same server as before). When E is clear from the context we omit mentioning E and say that γ is a policy key for R. We call γ empty if γ = (). For 1-bounded and strongly supporting economic policies, all C-consumable intensional database relations have a policy key (namely the tuple containing all positions), which follows immediately from Proposition 11.

Lemma 5 For a pure Datalog program P and a 1-bounded strongly supporting economic policy E = (P, C) for P, the following are equivalent for each intensional predicate R:
1. R has an empty policy key in E;
2. E has a straggler for R.
Proof For (1) ⇒ (2): By definition of policy key, there is a server s that belongs to P*(f) for every fact f with predicate R. Hence, s is a straggler for R.

We can also show the following technical results regarding policy keys.

Lemma 6 Let E be an economic policy and R a relation in the schema of E. If γ 1 and γ 2 are policy keys for R, then every tuple γ having all integers that are in both γ 1 and γ 2 is a policy key for R.

vars A 1 [γ 1 ] = vars A 2 [γ 2 ] follows from Lemma 6 and the assumption that γ 1 and γ 2 are minimal. To show that γ is indeed a key for R 1 , let f and g be two arbitrarily chosen facts over R 1 , with f[γ] = g[γ]. Let v 1 be a valuation for τ with v 1 (A 1 ) = f, and let v 2 be a valuation for τ with v 2 (A 1 ) = g. We consider also a valuation v for rule τ, with v(x) = v 1 (x) for all x ∈ vars(A 1 ) \ vars A 2 [γ 2 ] and v(x) = v 2 (x) for all other variables. (Recall that all these valuations exist because P is pure.) Since R 1 and R 2 have a key, every fact h with relation name R 1 or R 2 is associated to a unique server s, with P*(h) = {s}, and C(h) = ∅ (if it is not consumed) or C(h) = {s} (if it is consumed). Therefore, and since E is strongly supporting, the server associated to v 2 (A 1 ) is the same server as the one associated to v 2 (A 2 ). Let's call this server s 2 . For the same reason, the server associated to v(A 1 ) is the same server as the one associated to v(A 2 ). Let's call this server s 3 . Now, since v(A 2 ) and v 2 (A 2 ) agree on their key (that is, v(vars A 2 [γ 2 ]) = v 2 (vars A 2 [γ 2 ])), it follows that s 2 = s 3 . To conclude the proof, we observe that v 1 (A 1 ) agrees with v(A 1 ) on its key (particularly because v 1 and v 2 agree on the values for variables in vars A 1 [γ 1 ] ∩ vars A 2 [γ 2 ]), and thus that the server s 2 is associated also to v 1 (A 1 ). In other words, P*(f) = P*(g) and C(f) = C(g), which proves that γ is indeed a key for R 1 .

Proof of Proposition 16 Suppose E is a strongly supporting economic policy that is 1-bounded. For the sake of contradiction, assume that E has no stragglers for any C-consumable intensional database relations. Then, by Lemma 5, all P-consumable relations have only non-empty policy keys, which implies the existence of minimal non-empty policy keys for the C-consumable intensional relations. In turn, due to Lemma 7 we can take these minimal keys as a pivot base. However, this contradicts the fact that P is not weakly pivoting.

Bounded and Disjoint Evaluation

Sometimes we want to guarantee that, at the end of a computation, no two copies of the same fact have been derived at different servers. We call this property disjointness.

Definition 9 (Disjointness) Let P be a Datalog program, and R an intensional relation name of P. We call an economic policy E for P R-disjoint if, for every instance, every fact of R is produced in at most one server.

We study economic policies that are both 1-bounded and disjoint.

Proposition 17 Let P ∈ PureDatalog and H a GHP family for P. Then, H is 1-bounded, disjoint for P, and without stragglers for intensional database relations, if and only if, H is pivoting.

For the proof, we use the next auxiliary result.

Lemma 8 An economic policy E = (C, P) that is 1-bounded, disjoint, strongly supporting and without stragglers for intensional database relations of the associated Datalog program has non-empty minimal keys for these relations.
Proof Due to 1-boundedness and the absence of stragglers for intensional database relations, it follows from Lemma 5 that only non-empty keys (and thus non-empty minimal keys) for C-consumable intensional database relations exist.

We can now show the proof for Proposition 17.

Proof for Proposition 17 (If.) Let E be an arbitrary policy in H. Since a pivoting GHP is also weakly pivoting, it follows from Proposition 12 and Proposition 15 that E is 1-bounded and without stragglers for P-consumable intensional relations. For the remainder of the proof we observe that a server s is in P*(f) iff there is a rule τ ∈ P and a valuation v such that v(head τ) = f. Due to Definition 6 for pivoting GHPs, s is identified uniquely per rule τ by the combination χ(τ) and pos head τ (ρ τ (i)) for all i with p i ≥ 1. For s 1 , s 2 ∈ P*(f), it follows again from Definition 6 for pivoting GHPs that s 1 = s 2 , and thus |P*(f)| = 1. Hence, E is indeed disjoint. Due to surjectivity of the considered hash functions, it follows that E has no stragglers for intensional database relations.

(Only if.) 1-boundedness implies that H is weakly pivoting due to Lemma 8. It remains to show that conditions (1) and (2) also hold for relations that are not consumable. The proof is again by contraposition and completely analogous to the proof of Proposition 12. Only now A 1 and A 2 must be head atoms, and we use disjointness to argue |P*(f)| = 1 for all facts f over R 1 .

Next, we show which programs admit a 1-bounded, disjoint policy.

Proposition 18 Let P ∈ PureDatalog. Then P is pivoting if, and only if, P admits a 1-bounded, strongly supporting, disjoint economic policy without stragglers for intensional database relations.

Proof (If) The result follows immediately by Lemma 8 and Lemma 7. (Only If) The proof is analogous to the proof of Proposition 14 and takes the pivot base β for P to obtain a pivoting GHP for P. The result then follows from Proposition 17, which shows that this pivoting GHP is 1-bounded, strongly supporting, disjoint, and without stragglers for intensional database relations.

Remark 1 The reader may wonder how the above concepts relate to the class of decomposable programs [36, 37]. A decomposable program is a (single-rule) Datalog program that admits an evaluation strategy (via predicate restrictions) that is parallel-correct, 1-bounded, disjoint, and non-trivial. (Here, non-triviality means that all servers do part of the work.) We did not consider the non-triviality property, but instead require the absence of stragglers. Nevertheless, for GHPs, non-triviality is implied, at least for pure Datalog, by the use of surjective hash functions.

Conclusion

We have introduced a theoretical framework to reason about multi-round Datalog evaluation in a distributed setting. In this framework we study three properties: parallel-correctness, boundedness, and disjointness. There are many interesting questions left open. For example, it would be interesting to come up with restrictions on Datalog programs and economic policies for which the mentioned properties are decidable. In fact, recent work by Neven et al. [27] extends our work in that direction. Among other results, they show that parallel-correctness is already undecidable even for heavily restricted fragments of Datalog, including monadic Datalog (for which the containment problem is decidable).
Another interesting direction for future work would be to study the problem of finding economic distribution policies with desired properties, which should not necessarily be harder than deciding properties over given policies. A related question then is which properties, besides the ones studied in this paper, are relevant in a practical context. One interesting option would be to define a fairness condition for economic policies, e.g., an instance-independent notion of load balancing; another option is to study bounds on the amount of communication needed to evaluate Datalog programs. Yet another direction is to consider smarter algorithms for local Datalog evaluation than semi-naive evaluation, for example by allowing the economic policy to express unique-decomposition conditions (cf. [5]).
Adoptive Transfer of Dendritic Cells Expressing Fas Ligand Modulates Intestinal Inflammation in a Model of Inflammatory Bowel Disease Background Inflammatory bowel diseases (IBD) are chronic relapsing inflammatory conditions of unknown cause and likely result from the loss of immunological tolerance, which leads to over-activation of the gut immune system. Gut macrophages and dendritic cells (DCs) are essential for maintaining tolerance, but can also contribute to the inflammatory response in conditions such as IBD. Current therapies for IBD are limited by high costs and unwanted toxicities and side effects. The possibility of reducing intestinal inflammation with DCs genetically engineered to over-express the apoptosis-inducing FasL (FasL-DCs) has not yet been explored. Objective Investigate the immunomodulatory effect of administering FasL-DCs in the rat trinitrobenzene sulfonic acid (TNBS) model of acute colitis. Methods Expression of FasL on DCs isolated from the mesenteric lymph nodes (MLNs) of normal and TNBS-colitis rats was determined by flow cytometry. Primary rat bone marrow DCs were transfected with rat FasL plasmid (FasL-DCs) or empty vector (EV-DCs). The effect of these DCs on T cell IFNγ secretion and apoptosis was determined by ELISPOT and flow cytometry for Annexin V, respectively. Rats received FasL-DCs or EV-DCs intraperitoneally 96 and 48 hours prior to colitis induction with TNBS. Colonic T cell and neutrophil infiltration was determined by immunohistochemistry for CD3 and myeloperoxidase activity assay, respectively. Macrophage number and phenotype was measured by double immunofluorescence for CD68 and inducible Nitric Oxide Synthase. Results MLN dendritic cells from normal rats expressed more FasL than those from colitic rats. Compared to EV-DCs, FasL-DCs reduced T cell IFNγ secretion and increased T cell apoptosis in vitro. Adoptive transfer of FasL-DCs decreased macroscopic and microscopic damage scores and reduced colonic T cells, neutrophils, and proinflammatory macrophages when compared to EV-DC adoptive transfer. Conclusion FasL-DCs are effective at treating colonic inflammation in this model of IBD and represent a possible new treatment for patients with IBD. Introduction Inflammatory bowel disease (IBD) is a chronic relapsing and remitting inflammatory condition of the gastrointestinal tract [ 1 ] that generally affects the colon, or large intestine, and can be classified as Crohn's disease or ulcerative colitis. Although these two forms of IBD share common clinical and pathological features, the disease is heterogeneous, with marked differences in clinical presentation, underlying genetic factors, and response to treatment. Like many other chronic inflammatory or autoimmune disorders, the immunopathology of these diseases seems to result from complex interactions between susceptibility genes, the environment (most notably bacteria) and the immune system [ 2 ]. Under homeostasis, the intestinal immune system must balance the capacity for mounting protective immune responses to infectious agents (i.e. pathogenic bacteria) with the ability to tolerate the enormous load of antigens and immunostimulatory molecules that constitute the commensal intestinal bacteria (oral and mucosal tolerance) [ 3 ]. Therefore, gut inflammation may logically result from a loss of local tolerance. The intestinal mononuclear phagocyte system, composed of dendritic cells (DCs) and macrophages, is essential for maintaining tolerance. 
Both macrophages and DCs can serve as antigen presenting cells, having the capacity to initiate and/or regulate intestinal immune responses. Under homeostasis, intestinal macrophages sample luminal contents [4-7] and phagocytose bacteria and other luminal antigens that breach the mucosal barrier, but do not stimulate an overt immune response against these antigens [8-10]. Instead, macrophages transfer antigens to intestinal DCs via gap junctions [11], and DCs migrate to mesenteric lymph nodes.

Current therapeutic goals mainly focus on decreasing inflammatory cytokine activity by infusing either proinflammatory cytokine-targeting antibodies or anti-inflammatory cytokines, or by using non-specific inhibitors of inflammation, such as corticosteroids or immunosuppressants [18, 19]. However, many of these therapies have significant undesirable side effects. Therefore, the identification of a specific molecular and cellular target in the pathogenesis of IBD and new therapeutic agents remains vitally important. Manipulation of DCs or macrophages may open the way towards new therapeutic approaches for IBD.

Fas ligand (FasL/CD95L), a type II transmembrane protein that belongs to the tumor necrosis factor family, can induce apoptosis in target cells by binding to its death domain-containing receptor Fas (CD95). In the present study, we show that adoptive transfer of DCs genetically engineered to express FasL, an inducer of apoptosis, can reduce inflammation in a rat model of acute colitis.

Ethics statement

All experiments involving animals were performed in accordance with institutional, local, and national guidelines and approved by the Ponce Health Sciences University Institutional Animal Care and Use Committee.

Animal model of colitis

Acute colitis was induced in male Sprague-Dawley rats (250-450 g; Southern Veterinary Service, PR) as previously described [20, 21]. The rats were maintained under standard laboratory conditions. Trinitrobenzene sulfonic acid (TNBS; 60 mg/mL) was administered intracolonically after lightly anesthetizing with ether. Control animals were untreated. The rats were weighed daily to monitor weight change as a disease marker, and sacrificed 72 hours after the initial administration of the TNBS by an overdose of pentobarbital. The colon was removed and scored for macroscopic damage using four criteria, as previously described [22]: the presence of adhesions (0, 1, or 2 for none, minor, or major, respectively), diarrhea (0 or 1 for absent or present, respectively), thickness (mm), and ulceration (0 for no damage, with increasing scores depending on extent of ulceration). These were added to give a total macroscopic damage score. After sacrifice, the whole mesenteric lymph node (MLN) chain/layer was identified and removed as previously described [23], trimmed of any fat, cut into pieces, and incubated for 60 min under agitation at 37°C in the presence of 100 U/ml of collagenases type II and VII, and 300 U/ml of hyaluronidase (Sigma). Cells were separated from debris by filtration through a 100 μm cell strainer (BD Bioscience, San Diego, CA) after enzymatic digestion. The flow-through was centrifuged at 1500 rpm for 5 min at 4°C, and the resulting pellet resuspended in RPMI. Dendritic cells and T cells were obtained by FACS.

Generation, transfection, and adoptive transfer of bone marrow-derived dendritic cells

Bone marrow (BM) cells from male Sprague Dawley rats 8-10 weeks old were isolated as previously described [24].
The BM cells were cultured at a cell density of 2-5 × 10 5 cells/mL in culture dishes (Falcon, Becton Dickinson Biosciences) or 75 cm 2 tissue culture flasks (T75) (Corning Inc., Corning, NY, USA). The RPMI 1640 culture medium was supplemented with 20 ng/ml recombinant rat GM-CSF (Sigma Aldrich, St. Louis, MO) and 20 ng/ml recombinant rat IL-4 (Sigma Aldrich, St. Louis, MO), or 20 ng/ml rat GM-CSF (Sigma Aldrich, St. Louis, MO). On days 3 and 6, more growth factors were added. All cells were collected on day 7. Purity was determined by measuring OX62 expression through flow cytometry. Primary rat BMDCs were transfected with 8 μg of either expression vectors containing rat FasL cDNA (FasL-DCs) or control vectors (EV-DCs) using the Lipofectamine 2000 Transfection Reagent (Invitrogen, Carlsbad, CA) according to the manufacturer's protocol. Both vectors were generous gifts from Drs. Li Xiao-Kang and Masayuki Fujino. Each transfection was done in triplicate. Transfection efficiency was determined by measuring FasL expression through flow cytometry. For adoptive transfer studies, animals received FasL-DCs or EV-DCs administered intraperitoneally at ~3 × 10 7 cells per rat (based on similar types of studies in arthritis [25]) 96 and 48 hours before the induction of colitis. All animals also received formyl-methionyl-leucyl-phenylalanine (fMLP; 2.5 mM in 6% DMSO at pH 8 intracolonically) 2 hours after the TNBS administration.

Flow cytometry & fluorescence-activated cell sorting

All flow cytometry analyses and cell sorting experiments were performed using a BD FACSCalibur platform (BD Biosciences) and Cell Quest Pro software (BD Biosciences). DCs and T cells from rat MLN were isolated using mouse anti-rat OX62:PE conjugate (MCA1029PE, AbD Serotec, UK) and mouse anti-rat CD3:FITC conjugate/CD4:RPE conjugate (DC041, AbD Serotec, UK), respectively. Mouse IgG1:RPE (MCA1209PE, AbD Serotec, UK) and mouse IgM:FITC (MCA692F) were used as isotype controls (AbD Serotec, UK). Briefly, the MLN population was suspended in staining buffer (PBS 1x with 0.1% sodium azide and 0.1% FBS) and incubated with 10 μl of antibody per 10 6 cells for 20 min at 2-8°C in the dark. The cells were washed twice, resuspended in staining buffer and then sorted. The CD3 + CD4 + T, CD3 + CD4 − lymphocytes and OX62 + cells (DCs) were gated by size on forward and side scatter. The purity of sorted cells was confirmed by flow cytometry. FasL expression on MLN OX62 + cells and on BMDCs transfected with FasL was determined using FITC- and PE-conjugated anti-FasL antibodies (sc-19987; Santa Cruz Biotechnology, Inc., Santa Cruz, CA). Mouse IgM FITC or IgM PE served as isotype controls.

Annexin V PE Assay

FasL-DCs were co-cultured with CD4 + T cells at a ratio of 1:5 in the presence of the bacterial peptide fMLP (10 −7 M). EV-DCs were used as controls. After 24 hrs of co-culture, an apoptosis assay was performed using an Annexin V PE staining kit (BD Bioscience, San Diego, CA) as per the manufacturer's instructions. To assess the extent of apoptosis, we accounted for both the total number of annexin-positive cells and the intensity of annexin staining by multiplying the percentage of positive cells by the average staining intensity.

ELISPOT

Solid-phase IFNγ Elispot kits from BD Biosciences (Pharmingen, San Diego, CA) were used to enumerate T cells secreting IFNγ upon co-culture with DCs following fMLP stimulation.
In one set of experiments, FasL-DCs or EV-DCs were co-cultured with T cells isolated from MLN of normal rats to examine the immunomodulatory potential of FasL-DCs. The plate was incubated for 24-48 hrs at 37°C in 5% CO 2 . IFNγ production was measured for cell suspensions containing 5 × 10 4 of either FasL-DCs or EV-DCs alone without treatment, FasL-DCs or EV-DCs treated with fMLP only, or FasL-DCs or EV-DCs treated with fMLP and cocultured with T cells. DCs and T cells were cocultured at a ratio of 1:5, respectively. As a negative control, wells with medium alone were used. Cells were added to individual wells in triplicate, followed by medium with or without antigen (10 −6 M fMLP). The number of spots, which represent IFNγ-producing T cells, were quantified with a dissection microscope or an ELISPOT plate reader (AID). The results were compared with unstimulated cell suspension (no fMLP) as a negative control, or under optimal stimulation with 5 ng/ml of Phorbol Myristate Acetate (Sigma, St. Louis, MO) and 500 ng/ml of Ionomycin (Sigma, St. Louis, MO) as a positive control. Positive controls are not shown in the graph because they produced spots that were deemed too numerous to count. In another set of experiments, the proinflammatory role of MLN DCs was tested in vitro by comparing IFN-γ production by either DCs alone, CD4 + T cells alone, DCs with CD4 + T cells, DCs with fMLP, T cells with fMLP or DCs/CD4 + T cells with fMLP. The same cell numbers and ratios and reactant concentrations were used in both sets of experiments. Western blotting Forty-eight hours after the gene transfection, the BMDCs were harvested and lysed with Radio Immuno Precipitation Assay buffer consisting of 150 mM sodium chloride 1.0% NP-40, 0.5% sodium deoxycholate, 50 mM Tris, pH 8.0. Proteins were separated by 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis and transferred to nitrocellulose filters. The filters were blocked with blocking buffer containing 5% skimmed milk and incubated overnight with rabbit anti-FasL antibody (sc-956; Santa Cruz Biotechnology, Inc., Santa Cruz, CA) at 1:500. Horseradish peroxidase-conjugated anti-rabbit IgG (sc-2004; Santa Cruz Biotechnology, Inc.) was added as a secondary antibody at 1:1000 and further incubated for 1 hr. The filters were washed in the blocking buffer and the immune complexes were detected via chemiluminescence. Histology & immunostaining Sections of colon were fixed by immersion in 10% buffered formalin, placed in labeled tissue cassettes and processed overnight at RT. Samples were sequentially dehydrated with ascending percentages of ethanol, cleared in xylene, and embedded in paraffin. Four-micronthick sections were stained with hematoxylin and eosin to determine the extent of inflammatory infiltrate and the appearance of the underlying muscle layers. The resulting slides were analyzed by two blinded observers for disruption of the architecture (0→3; absent→severe), cellular infiltration (0→3; absent→severe), muscle thickening (0→3; absent→severe), presence of crypt abscesses (0 or 1; absent or present), and goblet cell depletion (0 or 1; absent or present) as previously described [ 22 ]. Colonic T cell infiltration was determined by staining tissue sections with CD3. Formalinfixed and paraffin embedded sections obtained from the same histology blocks as outlined above were deparaffinized in xylene and rehydrated through a graded series from ethanol to water. 
Quenching of endogenous peroxidase was performed by incubating tissue sections with 3% H 2 O 2 at RT for 15 minutes in a humidified chamber. After washing with PBS (pH 7.4), tissue sections were incubated with 0.25% pepsin at 37°C for 30 minutes to reveal fixed Ag epitopes. Tissue sections were treated with the blocking solution at RT for minutes, followed by incubation with an HRP-conjugated Anti-Rat CD3 (Santa Cruz, CA, USA) at RT for 1 hour. Slides were incubated with a DAB staining kit (SK4100; Vector Laboratories, Burlingame, California, USA) for color visualization. Slides were counterstained by incubation with methyl green at 65°C for 3 minutes. Five fields were randomly selected for each section of colon, and the average number of infiltrating T-cells was determined. For negative controls, 10% normal rat serum was used in place of the primary antibody.

Statistical analysis

Data are presented as mean ± standard error of the mean. Statistical significance was set at p values less than 0.05. All statistical analyses were performed using Prism v6.0a (GraphPad Software, Inc., La Jolla, CA, USA). One-way ANOVA with Holm-Sidak's multiple comparisons post-hoc test was used for comparing percent weight change across the different time points within the same group. All other comparisons were performed using an unpaired, one-sided t test with Welch's correction.

Results and Discussion

We first asked whether DCs in colitic rats were functionally different from those in normal control rats. As expected, rats with TNBS-induced colitis lost more weight (Figure 1A) and had higher macroscopic damage scores (Figure 1B) than normal controls. Analyzing FasL expression by flow cytometry on MLN-DCs from normal and colitic rats revealed a marked reduction in the expression of FasL on MLN-DCs from colitic rats when compared to MLN-DCs from normal controls (p<0.001; Figure 1C, and Supplemental Figure 1). These results suggest that MLN-DCs promote inflammation in this model of acute colitis. To confirm the proinflammatory role of DCs, we examined IFNγ production by MLN-DCs, MLN-CD4 + T cells, and cocultures of MLN DCs and CD4 + T cells in the presence and absence of the bacterial peptide fMLP. Few DCs or T cells produced IFNγ when cultured alone and exposed to fMLP. However, the number of IFNγ-producing cells was substantially increased when DCs and T cells were cocultured in the presence of fMLP (data not shown).

persistent and severe colitis in FasL knockout mice [27]. Additionally, T cells upregulate Fas expression upon interaction with dendritic cells and, thus, become sensitive to Fas/FasL-induced apoptosis [28, 29]. We therefore hypothesized that overexpressing FasL on DCs via transfection could impart immunomodulatory characteristics on the transfected DCs. Transfection efficiency was determined to be 5.53%, 11.77%, and 19.97% by flow cytometry for cells transfected with 5 μg, 10 μg, and 15 μg of FasL-coding vector, respectively. FasL expression was confirmed by western blotting (Figure 2A). Next, we analyzed IFNγ production in FasL-DCs or EV-DCs that were cultured alone without treatment, cultured alone with fMLP treatment, or cocultured with CD4 + T cells and treated with fMLP (Figure 2B). FasL-DC and T cell cocultures contained significantly fewer IFNγ-producing cells than cocultures of EV-DCs and T cells (p<0.001). Few DCs cultured alone produced IFNγ regardless of transfection and treatment with fMLP.
Furthermore, Annexin V-positivity, an indicator of apoptosis, was much greater in FasL-DCs/CD4 + T cell cocultures than in cocultures containing EV-DCs ( Figure 2C, and Supplemental Figure 2). Even though mature DCs have been shown to express Fas, studies have demonstrated that they have an increased expression of anti-apoptotic molecules such as c-FLIP and Bcl-xL, probably leading to their protection from apoptosis [ 30 -32 ]. Therefore, FasL-DCs attenuate T cell inflammatory cytokine production in vitro likely by inducing T cell apoptosis. To investigate the in vivo immunomodulatory potential of FasL-DCs, we employed an acute colitis animal model of IBD in which we adoptively transferred FasL-DCs or EV-DCs to rats prior to colitis induction ( Figure 3A). Rats that received the EV-DCs instead of the FasL-DCs weighed significantly less than their original weight by the end of the study (p<0.05; Figure 3B). In contrast, weights after colitis induction were not significantly different from original for rats receiving FasL-DCs. Furthermore, adoptive transfer of FasL-DCs significantly decreased colonic damage both macroscopically and microscopically (p<0.05; Figures 3C and 3D). Although previous studies have shown the immunomodulatory potential of regulatory dendritic cells in animal models of colitis [ 33 -37 ], ours is the first study that shows that expression of FasL on BMDCs resulting from genetic manipulation is sufficient to attenuate colonic inflammation in a model of acute colitis. To investigate the mechanism by which FasL-DCs ameliorate inflammation in this acute colitis model, we examined the effect that FasL-DC adoptive transfer had on T cell, neutrophil, and macrophage infiltration. CD3 + T cells were significantly reduced in colonic mucosa from FasL-DC rats when compared to EV-DC rats (p<0.01; Figure 4A). Colonic tissue from FasL-DCs rats demonstrated less MPO activity, an indicator for neutrophils and other myeloid cell infiltration, than colonic tissue from EV-DC rats (p=0.0537; Figure 4B). Upon examining macrophage infiltration, we found that areas of histologically intact mucosa contained more macrophages than areas of damaged mucosa ( Figure 5). Damaged mucosa was characterized by regions of marked eosinophilia, indicative of cell death and likely the result of necrosis, with underlying massive inflammatory infiltrates ( Figure 5B). Notably, these inflammatory infiltrates were strongly positive for iNOS immunofluorescent staining ( Figure 5D). Although total macrophage numbers between EV-DC and FasL-DC rats did not differ significantly in areas of intact mucosa, FasL-DC rats contained less proinflammatory macrophages (p<0.01) and total macrophages (p<0.05) in areas of damaged mucosa than EV-DC rats (Figures 5E-5H). T cells, neutrophils, and macrophages, among other cells, express Fas. T cells have long been known to undergo apoptosis upon Fas ligation, and neutrophils have recently been shown to be regulated by Fas ligation in vivo [ 38 ]. FLIP expression by macrophages is thought to account for the resistance to Fas ligation exhibited by these cells [ 39 ], yet pathogen-associated molecular patterns (PAMPs) can sensitize macrophages to Fas-mediated apoptosis and lead to a proinflammatory state in these cells upon Fas ligation [ 40 ]. In contrast to proinflammatory macrophages, resident intestinal macrophages are hyporesponsive to PAMPs despite potent antibacterial activity. 
Therefore, resident and proinflammatory intestinal macrophages might respond differently to Fas ligation. Interestingly, we observed that, when compared to EV-DC treatment, treating rats with FasL-DCs decreased total colonic macrophage numbers in damaged, but not intact, areas of colon and reduced proinflammatory colonic macrophages. This suggests that FasL-DCs modulate proinflammatory macrophages by inducing apoptosis via Fas ligation. Future studies should aim to better understand the mechanism by which FasL-DCs reduce colonic inflammation. Furthermore, the possibility of transfecting induced regulatory DCs to express FasL should be investigated as a way of generating more potent disease-modifying DCs. In conclusion, we have shown that FasL-DCs are effective at treating colonic inflammation in this animal model of IBD. The use of FasL-DCs should be investigated further as a potential therapy for patients with IBD. Supplementary Material Refer to Web version on PubMed Central for supplementary material. FasL expression on OX62 + dendritic cells is decreased in mesenteric lymph nodes (MLNs) from rats with acute colitis. A. weight change in normal and acute colitis rats expressed as percent of initial weight on day of colitis-induction (day 0). B. macroscopic damage scores for normal rats and rats with acute colitis. C, percentage of OX62 + MLN cells that are OX62 + FasL + in normal and colitic rats. Representative flow cytometry blots for panel C are shown in Supplemental Figure 1. n=4 rats per group. *** p<0.001 vs normal group; # p<0.05, ## p<0.01, ### p<0.001 vs day 0 of same group.
Risk factors for short- and long-term complications after groin surgery in vulvar cancer

Background: The cornerstone of treatment in early-stage squamous cell carcinoma (SCC) of the vulva is surgery, predominantly consisting of wide local excision with elective uni- or bi-lateral inguinofemoral lymphadenectomy. This strategy is associated with a good prognosis, but also with impressive treatment-related morbidity. The aim of this study was to determine risk factors for the short-term (wound breakdown, infection and lymphocele) and long-term (lymphoedema and cellulitis/erysipelas) complications after groin surgery as part of the treatment of vulvar SCC.

Methods: Between January 1988 and June 2009, 164 consecutive patients underwent an inguinofemoral lymphadenectomy as part of their surgical treatment for vulvar SCC at the Department of Gynaecologic Oncology at the Radboud University Nijmegen Medical Centre. The clinical and histopathological data were retrospectively analysed.

Results: Multivariate analysis showed that older age, diabetes, 'en bloc' surgery and higher drain production on the last day of drain in situ gave a higher risk of developing short-term complications. Younger age and lymphocele gave a higher risk of developing long-term complications. A higher number of lymph nodes dissected seems to protect against developing any long-term complications.

Conclusion: Our analysis shows that patient characteristics, extension of surgery and postoperative management influence short- and/or long-term complications after inguinofemoral lymphadenectomy in vulvar SCC patients. Further research of postoperative management is necessary to analyse possibilities to decrease the complication rate of inguinofemoral lymphadenectomy; although the sentinel lymph node procedure appears to be a promising technique, in ~50% of the patients an inguinofemoral lymphadenectomy is still indicated.

Vulvar squamous cell carcinoma (SCC) is a rare disease and accounts for ~3-5% of all female genital malignancies (Hacker, 2005). The incidence is ~1-2 per 100 000 (van de Nieuwenhof et al, 2009). The majority of the patients with vulvar SCC have early-stage disease: a cT1 (<2 cm) or cT2 (>2 cm) tumour without suspicious inguinal lymph nodes. The standard treatment of early-stage SCC of the vulva consists of wide local excision (WLE) of the tumour combined with an inguinofemoral lymphadenectomy (removal of all superficial lymph nodes and the medial femoral lymph nodes) (Levenback et al, 1996; de Hullu et al, 2004). The inguinofemoral lymphadenectomy has significant short- and long-term complications, which are a major concern for both patients and clinicians. Wound breakdown, wound infection, formation of lymphoceles, development of lymphoedema and cellulitis/erysipelas are the most documented complications, occurring in up to 85% of the patients (Podratz et al, 1983; Gaarenstroom et al, 2003). Only 25-35% of patients with early-stage disease will have lymph node metastases (Hacker et al, 1981; Burger et al, 1995; Bell et al, 2000; Katz et al, 2003). There are no noninvasive techniques such as palpation, ultrasound, CT, PET and MRI available with a high enough negative predictive value to safely omit inguinofemoral lymphadenectomy in a selection of patients (Oonk et al, 2006). This urged the introduction of the sentinel lymph node (SLN) procedure in vulvar SCC.
After excellent results in different accuracy studies (Ansink et al, 1999; De Cicco et al, 2000; de Hullu et al, 2000; Levenback et al, 2001; Sliutz et al, 2002; Moore et al, 2003), van der Zee et al (2008) showed in the 'Groningen International Study on Sentinel nodes in Vulvar cancer I' (GROINSS-V I) with the combined technique that in early-stage vulvar SCC patients with a negative SLN, the groin recurrence rate is low, survival is excellent and the treatment-related morbidity is minimal. Despite the excellent outcomes of the SLN procedure, only patients with small (<4 cm) unifocal tumours are eligible for this technique. Therefore, in ~50% of the patients, there is still an indication for inguinofemoral lymphadenectomy. Over the past decades, modifications have been introduced to decrease morbidity without compromising prognosis. 'En bloc' surgery has been replaced by the triple incision technique (de Hullu et al, 2002). Performing a superficial lymphadenectomy alone gives a decrease in survival (Stehman et al, 1992a; Burke et al, 1995), and hence at least the lymph nodes medial of the femoral vessels should be removed. In the literature, sparing of the saphenous vein does not reduce lymphoedema in all studies (Podratz et al, 1983; Zhang et al, 2000; Rouzier et al, 2003). Sartorius transposition did not decrease the morbidity (Rouzier et al, 2003; Judson et al, 2004).

The direct postoperative management for patients with vulvar SCC has not been described extensively. Gould et al (2001) showed that prophylactic antibiotics and duration of drains in situ were not predictors of the development of wound infection and late complications (lymphoedema and cellulitis). The drains were removed when the output was <30 ml per day. Gaarenstroom et al (2003) described that the drains were removed when the fluid production was <50 ml per day after at least 5 days. However, the reason for this specific duration was not based on study results. In breast cancer, the postoperative management after axillary lymphadenectomy has been studied in more detail. There is no clear evidence that the use of a drain after axillary surgery reduces the incidence of lymphocele formation (Zavotsky et al, 1998; Talbot and Magarey, 2002; Soon et al, 2005). The studies in breast cancer that compared early with late drain removal (Inwang et al, 1991; Gupta et al, 2001; Dalberg et al, 2004) concluded that early drain removal was safe, but that the incidence of lymphoceles was higher in this group. The aim of this study is to investigate the influence of patients' characteristics, extension of surgery and postoperative management on the short- and long-term complication rate after inguinofemoral lymphadenectomy in patients with SCC of the vulva.

Patients

Data of 283 consecutive patients with vulvar SCC who were treated at the Department of Gynaecologic Oncology at the Radboud University Nijmegen Medical Centre (RUNMC) between 1 January 1988 and 30 June 2009 were retrieved from medical files. A total of 78 patients were excluded from the current analysis because their groins were not treated surgically (n = 8), the primary treatment took place in another medical centre (n = 21), no groin surgery was performed at primary treatment (n = 36), only superficial inguinal lymphadenectomy was performed (n = 4), only debulking of lymph node metastases was performed (n = 2) or posterior exenteration was performed (n = 3). Four patients were excluded because their medical files could not be retrieved.
In 205 patients, groin surgery was performed; 41 patients only underwent an SLN procedure and were excluded. In 2001, the SLN procedure (unilateral or bilateral) was introduced in the RUNMC, initially in an accuracy study (followed by lymphadenectomy) that preceded the GROINSS V-studies by van der Zee et al (2008). Data of 164 patients were available for further analysis in this study. Local surgery consisted of a WLE or radical vulvectomy. From 1988 to 1993, standard local treatment consisted of a radical vulvectomy. After 1993, the WLE was introduced; it was carried out when the tumour was clinically resectable with a macroscopically measured normal tissue margin of 1-2 cm, regardless of the tumour diameter. After the introduction of the WLE, radical vulvectomy was only considered in patients with multifocal tumours and in case of an abnormal remainder of the vulva with complaints. Groin surgery consisted of 'en bloc' inguinofemoral lymphadenectomy from 1988 to 1993. In 1993, the triple incision technique was introduced (de Hullu et al, 2002): when the medial margin of the tumour was >1 cm from the midline, a unilateral, and otherwise a bilateral, inguinofemoral lymphadenectomy was performed. A total of 62% of our patients underwent inguinofemoral lymphadenectomy through the triple incision technique after 1993 vs 17% before 1993. It took some time until the triple incision technique was fully integrated in our Gynaecologic Oncology centre. The inguinofemoral lymphadenectomy comprised resection of superficial lymph nodes as well as deep femoral nodes. For the resection of inguinal lymph nodes, the fatty tissue beneath the subcutaneous tissue down to the fascia lata was removed. The saphenous vein was spared when possible. After splitting the fascia lata, the fatty tissue medial to the femoral vessels within the opening of the fossa ovalis was resected to perform femoral lymphadenectomy. The lateral part of the fascia lata was spared and no sartorius transposition was performed.

Data

All data were retrospectively collected from a database and the patient charts. Parameters extracted were: patients' characteristics (age, diabetes, peripheral vascular disease, body mass index (BMI) and continuation of antibiotics), type of surgery ('en bloc' approach or triple incision technique, unilateral or bilateral inguinofemoral lymphadenectomy, the ligation of the saphenous vein, number of removed lymph nodes, presence or absence of lymph node metastases and adjuvant radiotherapy) and postoperative management (drain management). In the RUNMC, all patients received standard antibiotics during surgery: Cefazoline 1000 mg and Metronidazol 500 mg; in some individual patients, the treatment with antibiotics extended for some additional days. 'Antibiotics' in our study was defined as the continuation of antibiotics after surgery. Patients who underwent an inguinofemoral lymphadenectomy received high-vacuum Redon drains (775 mm Hg (0.9 bar) negative pressure) in the groins postoperatively. In general, the drains were in situ for 5 days and were removed when the production was decreasing and under 50-100 ml per day. 'Duration of the drains in the groins' was defined as the time between operation and the day the drains were removed. The 'fluid production' was measured per day. Prescription of elastic stockings was a standard procedure in patients who underwent inguinofemoral lymphadenectomy. 'Hospitalisation time' was defined as the day of operation (day 0) and the number of postoperative days in the hospital.
The influence of adjuvant radiotherapy was only assessed for the long-term complications. Definitions and the frequencies of the short- and long-term complications after inguinofemoral lymphadenectomy are shown in Table 1. In total, 137 patients (84%) suffered from a complication of any kind after inguinofemoral lymphadenectomy. We also assessed the frequency of any of the short-term complications and any of the long-term complications. Standard follow-up was every 3 months in the first 2 years; from the third to the fifth year, it was twice a year and yearly thereafter.

Statistical methods

All events were described per groin, but analysis of complication rate per groin might overrate the influence of patient characteristics, because these were doubled in case of a bilateral lymphadenectomy. In patients who underwent bilateral lymphadenectomy, we randomly analysed the right or the left groin in order to minimise bias. We started at the top of the database and took the right groin in the first patient and the left groin in the second and so on, without knowing in which groin the complications occurred. Variables eligible for entry were analysed using SPSS software (version 16.0.01 for Windows, SPSS, Chicago, IL, USA). Univariate logistic regression was used to assess the risk of patients' characteristics, type of surgery and postoperative management on the short-term and long-term complications, both as any of the short-term or long-term complications and as the single-type complications. The odds ratios with the 95% confidence interval (CI) are presented. Multivariate logistic regression with a forward selection procedure was used to identify those variables that independently contributed to the risk of short-term complications and long-term complications (statistically significant variables from univariate logistic regression). After entry, the adjusted odds ratios with 95% CI of the final model are presented. A P-value of <0.05 was considered statistically significant. An IRB approval was not necessary for this retrospective study.

RESULTS

Of all patients who underwent inguinofemoral lymphadenectomy for primary SCC of the vulva (n = 164), 140 patients underwent primary inguinofemoral lymphadenectomy, whereas 24 patients underwent inguinofemoral lymphadenectomy subsequent to the SLN procedure during the learning curve (with standard inguinofemoral lymphadenectomy after the SLN procedure) or because of positive SLN(s) in GROINSS-V I (van der Zee et al, 2008). In 301 groins of 164 patients, an inguinofemoral lymphadenectomy was performed (27 patients only unilateral), of which 73 patients underwent surgery through the 'en bloc' approach. Figure 1 shows a flowchart of patients with SCC who underwent groin surgery. Table 2 shows the features of the research population. The SLN procedure was not yet introduced in our department before 2001. In retrospect, ~50% of the patients in our study population were not or would not have been eligible for an SLN procedure because the tumour was >4 cm and/or multifocal. The details of use (duration and results) of the stockings were not well documented in the medical charts, and hence this item was excluded from the analysis. Risk factors for short-term complications and long-term complications were assessed with univariate analysis (Tables 3 and 4).
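As a purely illustrative aside, and not the authors' own analysis code, an odds ratio with its 95% confidence interval of the kind reported below can be obtained from a logistic regression fit. The following Python sketch assumes an invented toy data set with one row per analysed groin and invented column names; it only shows the general form of such a computation.

# Illustrative sketch only: computing an odds ratio with a 95% CI for one
# risk factor (a hypothetical "diabetes" indicator) from a logistic
# regression, mirroring the type of univariate analysis described above.
# The data and column names are invented for the example.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# One row per analysed groin: outcome = any short-term complication (0/1),
# diabetes = presence of diabetes (0/1).
df = pd.DataFrame({
    "outcome":  [1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0],
    "diabetes": [1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0],
})

X = sm.add_constant(df[["diabetes"]])        # intercept + predictor
model = sm.Logit(df["outcome"], X).fit(disp=0)

# Exponentiating the coefficient and its confidence bounds yields the odds
# ratio with its 95% confidence interval.
odds_ratio = np.exp(model.params["diabetes"])
ci_low, ci_high = np.exp(model.conf_int().loc["diabetes"])
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")

A multivariate model would be fitted in the same way, with all retained predictors included in X.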
Using multivariate analysis, 'en bloc' surgery (odds ratio 2.72, 95% CI 1.16-6.37) and older age (odds ratio 1.06, 95% CI 1.02-1.10) were both independent risk factors for developing wound breakdown. 'En bloc' surgery (odds ratio 2.66, 95% CI 1.15-6.15) and higher drain production on the last day the drain was in situ (odds ratio 1.05, 95% CI 1.00-1.09) were the only independent risk factors for wound infection. Higher drain production on the last day the drain was in situ (odds ratio 1.05, 95% CI 1.01-1.10) was a risk factor for developing lymphocele. Diabetes (odds ratio 4.10, 95% CI 1.04-16.05) and higher drain production on the last day the drain was in situ (odds ratio 1.11, 95% CI 1.04-1.19) were risk factors for developing any of the short-term complications. Younger age was the only independent risk factor for developing lymphoedema (odds ratio 0.95, 95% CI 0.93-0.98). The independent risk factors for cellulitis/erysipelas were younger age (odds ratio 0.96, 95% CI 0.93-0.98) and lymphocele (odds ratio 3.28, 95% CI 1.50-7.19). A higher number of lymph nodes dissected seems to protect against developing any long-term complications (odds ratio 0.92, 95% CI 0.84-1.00), and younger age was a risk factor (odds ratio 0.94, 95% CI 0.92-0.97; Table 5).

DISCUSSION

In this study we found different risk factors for the short- and long-term complications after inguinofemoral lymphadenectomy as part of the standard treatment for primary vulvar SCC. Older age, diabetes, 'en bloc' surgery and higher drain production on the last day the drain was in situ were significant risk factors for short-term complications. Younger age and lymphocele gave a higher risk of developing long-term complications. We found that older age was associated with a higher risk for wound breakdown. This can be explained by the deterioration of wound healing with age. On the other hand, younger age was correlated with the long-term complications lymphoedema and cellulitis/erysipelas. One should realise that younger women are more active and might be more limited in their daily activities by possible lymphoedema; older people may experience more restrictions from other diseases, such as cardiac problems resulting in lymphoedema. Our study showed that diabetes was associated with wound breakdown and any short-term complication. It is well known that diabetes mellitus is associated with wound healing problems in many surgical disciplines (Trussell et al, 2008; Chen et al, 2009; McConnell et al, 2009; Ogihara et al, 2009). Therefore, the glucose levels in patients with diabetes should be regulated strictly to diminish the influence of diabetes on the short-term complications. In our study the 'en bloc' approach was the only surgical technique-related risk factor found. In 1993, the triple incision technique was introduced in the RUNMC. In our study we found a decrease in complication rate after 1993, especially in the short-term complication rate (76.2% before 1993 and 55.7% after 1993). We expected this result, as our study showed, in line with the literature (Hacker et al, 1981; Podratz et al, 1983; Lin et al, 1992), that the 'en bloc' approach is a risk factor for both wound breakdown and wound infection. This can also be explained by the higher rate of triple incisions performed after 1993 compared with before 1993 (61.7% vs 17.4%).
Nowadays, 'en bloc' surgery is only performed in patients with large suspicious inguinofemoral lymph nodes to prevent skin bridge and groin recurrences. We hypothesised that a higher total number of dissected lymph nodes during surgery would impose a risk for lymphoedema, which was based on the idea that fewer lymph nodes may drain less lymph fluid. The mean number of nodes dissected in our study was 9.45 nodes per groin, and a higher number of dissected nodes as a risk factor for lymphoedema was not confirmed in this study. On the contrary, a higher number seemed to protect against developing any long-term complications. Moreover, only for cellulitis/erysipelas was a cutoff point recognised, namely 10 lymph nodes (>10 lymph nodes dissected conferred protection). We did not have an explanation for this finding. Courtney-Brooks et al (2010) showed that removal of >10 lymph nodes might be associated with better survival in FIGO stage III patients. The prognostic impact of the number of lymph nodes dissected remains unclear. It is advised to remove between 6 and 8 lymph nodes per groin (Butler et al, 2010; Woelber et al, 2011), but variations in anatomy and other factors make node counting an unreliable measure of surgical quality (Stehman et al, 2009).

The use of drains after inguinofemoral lymphadenectomy is generally accepted worldwide and therefore used in our gynaecologic oncology department. There are no standardised protocols for the duration of drainage, but in most cases the drains are left in situ for at least 5 days; the postoperative management at the RUNMC is to remove the drains when the production has decreased under 50-100 ml per day. Only two retrospective studies on postoperative management in vulvar SCC report a postoperative protocol on drain management; either remove the drain when the output was <30 ml per day or when the fluid production was <50 ml per day after at least 5 days (Gould et al, 2001; Gaarenstroom et al, 2003). Both studies showed, in accordance with our results, that duration of the drain in situ had no influence on the short- and long-term complications after inguinofemoral lymphadenectomy. There is limited literature on drain management in patients after inguinofemoral lymphadenectomy for vulvar SCC, probably because of the low incidence of vulvar SCC and/or the focus on improving quality of life by the SLN procedure. In contrast with the groin, drain management in the axilla after breast cancer treatment has been extensively studied; most surgeons remove the drain when the drainage volume is <20-50 ml in the preceding 24 h, and this may take up to 10 days (Tadych and Donegan, 1987; Yii et al, 1995; Bundred et al, 1998; Kopelman et al, 1999; Woodworth et al, 2000). Barwell et al (1997) showed that patients who developed a lymphocele after breast cancer surgery had a higher mean total drain volume (480 ml) than patients who did not develop a lymphocele (240 ml). We found that the total volume of fluid drained from the groin was ~1.5 times higher, without a significant difference between the patients who did and did not develop a lymphocele. An explanation may be that the lymph nodes of the groin have to drain more lymph fluid from the lower extremities than the lymph nodes in the axilla from the upper extremities. Our study showed that a higher drain production on the last day that the drain was in situ was associated with an increased risk for lymphocele formation.
An explanation for this result is that after removal of the drain, stasis of lymph fluid takes place, which gives rise to lymphoceles. Our study is limited by a small group of patients with known drain production on the last day. These results confirm our hypothesis that the more fluid drained on the last day, the higher the incidence of lymphoceles would be, as shown in the studies on breast cancer. In the literature on breast cancer surgery, the amount of postoperative fluid drainage has been found to be significantly influenced by the degree of negative pressure in the drain. The hypothesis is that a high negative suction pressure in the drain may prevent the leaking lymphatics and blood vessels from sealing off, thus leading to prolonged drainage (van Heurn and Brink, 1995; Kopelman et al, 1999; O'Hea et al, 1999; Chintamani et al, 2005). In vulvar SCC, high-vacuum drains are used, but none of the studies defined the amount of negative pressure applied in the drains. There is one prospective study that compared two types of drains, the Blake and the Jackson-Pratt drain. This study showed an increased overall complication rate associated with the Blake drain (Carlson et al, 2008). These findings show that there is a need for further studies to investigate drain management after inguinofemoral lymphadenectomy.

Compared with a full lymphadenectomy, the SLN procedure has been shown to significantly reduce postoperative morbidity in the GROINSS-V I study (van der Zee et al, 2008). Our data revealed a number of patients who are not eligible for the SLN procedure because of the size of the tumour or multifocality. These patients would still require an inguinofemoral lymphadenectomy with the associated morbidity. Despite the application of the SLN procedure, the complication rate remains high compared with the rates described in the literature (Table 6). This may be explained by the different definitions for complications used in the literature. Apparently, it is difficult to prevent short- and long-term complications other than by omitting lymphadenectomy. In the past years, different methods such as ligation of VSM (Podratz et al, 1983; Zhang et al, 2000; Rouzier et al, 2003), sartorius muscle transposition (Rouzier et al, 2003; Judson et al, 2004) and sealing with VH fibrin sealant (Carlson et al, 2008) have been adopted in an attempt to decrease the complication rate, but none of these methods decreased the complication rate after inguinofemoral lymphadenectomy. Hopefully, GROINSS-V II will show that radiotherapy is an attractive and safe alternative for inguinofemoral lymphadenectomy in a substantial number of patients with a positive SLN. A few other treatment options have been described in the literature on vulvar cancer. Primary radiotherapy could possibly replace inguinofemoral lymphadenectomy in patients without suspicious groins. Three studies showed that primary radiotherapy to the groin results in less morbidity but also in a higher number of groin recurrences compared with surgery (Stehman et al, 1992b; Manavi et al, 1997; Perez et al, 1998). It has been suggested to remove only the bulky lymph nodes before radiotherapy. A study by Hyde et al (2007) showed that the survival is not compromised by only resecting the bulky nodes; however, because of the small study group and the retrospective nature of the study, a randomised prospective study is recommended.
Primary neoadjuvant chemoradiation may be an option in patients with non-resectable tumours to reduce tumour volume, achieve resectability and reduce the extent of surgery. However, although operability was achieved in 63-92% of cases, surgical interventions after chemoradiation are associated with high postoperative morbidity (van Doorn et al, 2006). Chemotherapy as a single treatment modality is not common. The data available for any of the applied chemotherapeutic regimens are not sufficient to recommend routine application (Wagenaar et al, 2001; Cormio et al, 2009; Witteveen et al, 2009). Only primary radiotherapy decreases the postoperative morbidity, but it compromises the prognosis. We should keep in mind that a groin recurrence is nearly always fatal. Hence, we recommend treating all patients with vulvar cancer optimally, with WLE and groin surgery, unless patients are unfit to undergo surgery. Furthermore, research should focus on the development of tailor-made postoperative therapy, such as appropriate postoperative drain management and possibly lymph drainage therapy, for the individual patient who still needs an inguinofemoral lymphadenectomy to survive vulvar SCC. In conclusion, age, diabetes, 'en bloc' surgery and a higher drain production on the last day the drain was in situ are risk factors for the development of short- and long-term complications. Postoperative drain management is the only factor that urges further studies to find the optimal postoperative protocol. Considering the rarity of SCC of the vulva, such a study should preferably be a randomised multicentre study in patients who undergo standardised bilateral inguinofemoral lymphadenectomy. Two different policies with respect to postoperative management may be studied in the two groins of the same patient to exclude bias by patient-related factors.
Table 5: Adjusted odds ratio with 95% confidence interval of patient characteristics, surgery and postoperative management variables for short- and long-term complications using multivariate logistic regression with a selection procedure. Abbreviations: SCC = squamous cell carcinoma; SLN = sentinel lymph node; N = number of patients; short term = wound breakdown, infection and lymphocele; long term = lymphoedema and cellulitis/erysipelas; - = not studied.
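For readers who want to see how adjusted odds ratios such as those summarised in Table 5 can be obtained, the following is a minimal Python sketch using entirely hypothetical data; the variable names, cohort values and the statsmodels-based workflow are illustrative assumptions, not the authors' actual analysis.

```python
# Minimal sketch: adjusted odds ratios from a multivariate logistic regression,
# analogous in spirit to Table 5. All data below are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200  # hypothetical number of groins
df = pd.DataFrame({
    "age": rng.normal(70, 10, n),               # years
    "diabetes": rng.integers(0, 2, n),          # 0/1
    "en_bloc": rng.integers(0, 2, n),           # 0/1, 'en bloc' surgery
    "last_day_drain_ml": rng.normal(60, 25, n)  # drain production on last day
})
# Hypothetical outcome: any short- or long-term complication (0/1)
logit_p = (-6 + 0.05 * df["age"] + 0.8 * df["diabetes"]
           + 0.9 * df["en_bloc"] + 0.02 * df["last_day_drain_ml"])
df["complication"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(df[["age", "diabetes", "en_bloc", "last_day_drain_ml"]])
model = sm.Logit(df["complication"], X).fit(disp=False)

# Adjusted odds ratios with 95% confidence intervals
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table.drop(index="const"))
```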
2016-05-12T22:15:10.714Z
2011-10-04T00:00:00.000
{ "year": 2011, "sha1": "89d8c8f8e92c4a6c981b68a45b6589f0d92a10d9", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/bjc2011407.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "a9cd822ae4e990f342c5461dd3466fd08f8d99f3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
267196913
pes2o/s2orc
v3-fos-license
Identification of a New Compound (4-Fluoro-2-Trifluoromethyl Imidazole) Extracted from a New Halophilic Bacillus aquimaris Strain Persiangulf TA2 Isolated from the Northern Persian Gulf with Broad-Spectrum Antimicrobial Effect
Background: The unique ecosystem of the Persian Gulf has made it a rich source of natural antimicrobial compounds produced by various microorganisms, especially bacteria, which can be used in the treatment of infectious diseases, especially those caused by drug-resistant microbes. Objectives: This study aimed to identify, for the first time, antimicrobial compounds in bacteria isolated from the northern region of the Persian Gulf in Abadan (Chavibdeh port), Iran. Materials and Methods: Sampling was performed in the fall, on November 15, 2019, from 10 different stations (water and sediment samples). The secondary metabolites of all isolates were extracted, and their antimicrobial effects were investigated. 16S ribosomal ribonucleic acid sequencing was used for the identification of the strains that showed the best inhibition against selected pathogens, and growth conditions were optimized for them. A fermentation medium in a volume of 5000 mL was prepared to produce the antimicrobial compound by the superior strain. The extracted antimicrobial compounds were identified using the gas chromatography-mass spectrometry technique. The minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) were determined for the superior strain. The effects of salinity, pH, and temperature on the production of antimicrobial compounds were determined by measuring the inhibition zone (mm) against methicillin-resistant Staphylococcus aureus (MRSA). Results: Four new strains with antimicrobial properties (i.e., Halomonas sp. strain Persiangulf TA1, Bacillus aquimaris strain Persiangulf TA2, Salinicoccus roseus strain Persiangulf TA4, and Exiguobacterium profundum strain Persiangulf TA9) were identified. The optimum growth temperatures were determined to be 30-37, 37, and 40 °C for the TA1 and TA2, TA4, and TA9 strains, respectively. The optimum pH values for the four strains were 7, 6-7, 7.5, and 6.5-7.5, respectively. The optimal salt concentrations for the four strains were 15%, 2.5-5%, 7.5%, and 5%, respectively. The ethyl acetate extract of strain Persiangulf TA2 showed extensive antimicrobial activity against human pathogens (75%) and MRSA. The most abundant compound identified in the TA2 extract was the new compound 4-fluoro-2-trifluoromethyl imidazole. The MBC and MIC for the ethyl acetate extract of strain TA2 were 20 and 5 mg.mL-1 (Staphylococcus aureus), 40 and 20 mg.mL-1 (MRSA, Escherichia coli, and Enterococcus faecalis), 40 and 10 mg.mL-1 (Acinetobacter baumannii), and 80 and 40 mg.mL-1 (Staphylococcus epidermidis, Shigella sp., Bacillus cereus, and Klebsiella pneumoniae), respectively. The optimal conditions for antibiotic production by the TA2 strain were a 5% salt concentration, pH of 7, and temperature of 35 °C. Conclusion: The newly detected natural compounds in the TA2 strain, given their superior antimicrobial activity even against the MRSA strain, can be clinically valuable in pharmacy and treatment.
Background
Currently, much research has been focused on discovering new, long-acting antibiotics that are effective in the prevention and treatment of diseases (1). The widespread use of chemical antibiotics has led to the spread of infectious diseases caused by multidrug-resistant pathogens, and the death rate from these pathogenic microbes is increasing. In recent years, much attention has been paid to bioactive compounds, especially those of marine origin. Among aquatic organisms, antimicrobial compounds derived from bacteria have had a remarkable effect in controlling microbial infections (1-5). A more serious threat than methicillin-resistant Staphylococcus aureus (MRSA) is the spread of Gram-negative infectious agents that have become resistant to all available antimicrobial compounds (6). Aquatic ecosystems have a very different ecological structure from terrestrial ecosystems and, as a result, harbour unique organisms with the potential to produce secondary metabolites. To date, numerous bioactive compounds with different functions have been reported from aquatic organisms, some of which are antimicrobial compounds. They have been widely used in pharmacy and medicine to treat infectious diseases caused by microbial pathogens (7-12). The production of different bioactive compounds, such as antibiotics, by marine organisms, especially bacteria, is due to the unbalanced and variable physical and chemical conditions of their habitat and the variety of food sources available to microorganisms in marine environments. Therefore, in these harsh conditions, only microorganisms that adapt to these conditions and use unique strategies for survival are able to persist, one of which is the production of secondary metabolites (13-15). Due to its geographical location, the Persian Gulf usually has a high temperature and a relatively high salinity. These special ecological conditions have therefore given rise to high biodiversity in this region (16-18). Among marine microorganisms, Bacillus has been recognized as an effective biological control agent and a producer of a variety of secondary metabolites with different biological applications, including antimicrobial applications. Additionally, these strains probably have great potential for controlling human, animal, and plant pathogens (5).
Objectives
The Persian Gulf is a source of new, undiscovered compounds due to its wide biodiversity. This study was performed in the northern region of the Persian Gulf in Abadan (Chavibdeh port), Iran, to identify and purify antimicrobial compounds from isolated bacteria. To date, no such research has been performed in this geographical location. With the identification of new antimicrobial compounds in the Persian Gulf, its position in various medical, biological, and industrial sciences will be improved.
Sampling Site and Collection
Sampling was performed in the fall, on November 15, 2019, from the northern part of the Persian Gulf in Khuzestan province, within the port of Chavibdeh in Abadan, from 10 different stations (water and sediment samples) under completely sterile conditions. Water pH, salinity, and temperature were also measured. The water samples were collected using sterilized Niskin bottles; sterile 500 mL bottles were then used to hold the water samples, and a Van Veen grab was used to collect the sediment samples. The sediment samples were collected from each station in a sterilized plastic bag after each sampling. All the samples were transferred to the laboratory at 4 °C.
Isolation of Bacteria
The water samples (70 μL each) and sediment samples (after dilution) were spread over Marine agar 2216 (HiMedia, India) plates. The plates were then incubated at different temperatures (25, 30, and 37 °C) for 5 days, and the grown colonies were purified (19, 20).
Screening for Antimicrobial Compound-Producing Bacteria
Bacterial strains were inoculated in 250 mL of Marine broth (MB; HiMedia, India) medium. The media were then placed in a shaker incubator for 7 days at 30 °C and a speed of 160 rpm. To obtain the crude bacterial extract after incubation, the medium was centrifuged at 4 °C for 20 minutes (10,000 rpm). An equal volume of ethyl acetate (KBR) was used to extract the bacterial secondary metabolites from the supernatant, and the solvent was removed at 37 °C. Extraction was performed twice for each strain (21-23). The disk diffusion method was used to investigate the antimicrobial activity of the bacterial extracts against the tested pathogen strains. A 0.5 McFarland suspension was prepared from pathogenic bacteria grown in Mueller-Hinton broth medium for 24 hours. The metabolite extracted from each bacterium was first dried and then dissolved in methanol (at a concentration of 100 mg.mL-1). The extract was added to a sterile filter paper disc (diameter: 6 mm) in a volume of 35 μL. The dried discs were then placed on lawn cultures and incubated for 24 hours at 37 °C. The zone of inhibition around the paper discs was measured in millimeters. Various antibiotics (Difco) were used as controls. This experiment was repeated twice for each strain, and its antimicrobial properties were confirmed (23, 24).
Identification of Antibiotic-Producing Bacteria
The morphological and biochemical properties of the marine bacterial strains with antibacterial activity were determined (25). In addition, the production of extracellular hydrolases, such as alpha-amylase, protease, and lipase, by the strains with antimicrobial properties was investigated (26-28). 16S ribosomal ribonucleic acid (16S rRNA) sequencing was used for the molecular identification of the bacterial strains that showed the best inhibition against the selected pathogens. Bacterial deoxyribonucleic acid samples were amplified using 16S rRNA primers, namely forward (5'-TCACGGAGTTTGATCCTG-3') and reverse (5'-GCGGCTGCACGTAGTT-3') (29). The sequences of all type strains used for the analysis were retrieved from the NCBI GenBank database and https://lpsn.dsmz.de/. The sequences were aligned, and phylogenetic trees were constructed using MEGA software (version 7.0) and reconstructed using maximum-likelihood methods. The phylogenetic tree of each bacterial strain indicated its evolutionary relationship with the selected sequences.
Detection and Identification of Bioactive Antimicrobial Metabolites in Marine Bacterial Extract
Gas chromatography-mass spectrometry (GC-MS) (Agilent 7890 Gas Chromatograph-5975 Mass Spectrometer detector) was used to identify compounds in the secondary metabolites. The gas chromatograph was equipped with a capillary column (30 m × 0.25 mm ID × 0.25 μm df) and attached to the mass spectrometer section, with an Elite-5MS (5%-phenyl methylpolysiloxane) stationary phase (30). The quadrupole mass analyzer and MSD ChemStation software were used to examine the chromatograms and the obtained mass spectra. The National Institute of Standards and Technology database and the Wiley Online Library were used to examine the mass spectra of the GC-MS results.
Table 1. Antimicrobial activity of the four selected strains on pathogenic microbes by the disk diffusion method (results repeated three times).
Provision of Fermentation Media for the Selected Strain with Better Antimicrobial Activity
The selected strain was cultured in a volume of 5000 mL of MB medium under the above-mentioned conditions. After the incubation period, centrifugation was performed, and the metabolite was then extracted with ethyl acetate. The extracted metabolite was dissolved at a ratio of 160 mg.mL-1 and used for subsequent experiments.
Determination of MIC and MBC for the Selected Strain
The minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) of the extracted metabolite with an antimicrobial effect were determined against some pathogenic microbes. Tubes containing Mueller-Hinton broth medium, a standard suspension of pathogenic microbes, and different concentrations of the antimicrobial extract (5, 10, 20, 40, 80, and 160 mg.mL-1) were incubated at 37 °C for 24 hours. The first tube without turbidity was considered the MIC. For the determination of the MBC, a loopful from each tube without turbidity was cultured on Mueller-Hinton agar and then incubated at 37 °C for 24 hours. The lowest concentration of antimicrobial metabolite that prevented the growth of pathogenic microbes on the medium was considered the MBC (31). This test was repeated three times, and the mean was reported.
Isolation of Bacteria and Screening for Antimicrobial Compound-Producing Bacteria
In the present study, 23 bacterial species were isolated from the marine samples collected from the northern part of the Persian Gulf in Khuzestan province within the port of Chavibdeh in Abadan. The secondary metabolites of all samples were extracted using ethyl acetate, and their antimicrobial activity was investigated. Only four of them showed antimicrobial activity (17.39%). They differed in morphology and pigmentation: strain no. 1 (yellow), strains no. 2 and 3 (pale orange-yellow), and strain no. 4 (orange) were reported with different pigments. Table 1 and Figure 1 show the antibiotic activity, which differed from strain to strain. Of these four samples, one sample (strain no. 2) had broader and stronger antimicrobial activity. The ethyl acetate extract of strain no. 2 had an antimicrobial effect on 9 of the 12 pathogenic microbes tested (75.00%), with no effect only on Enterococcus faecalis PTCC29272, Pseudomonas aeruginosa ATCC, and one other pathogen. The extract of strain no. 2 showed significant activity against MRSA: with increasing amounts of the extract, the inhibition zone around the extract-containing disc against the MRSA pathogen increased (Fig. 1B).
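As a rough illustration of how the broth-dilution readings described in the Methods above translate into MIC and MBC values, here is a minimal Python sketch; the dilution series matches the one stated in the Methods, but the turbidity and subculture readings are hypothetical placeholders, not the study's data.

```python
# Minimal sketch: deriving MIC and MBC from a two-fold dilution series.
# Concentrations follow the Methods (5-160 mg/mL); the readings are hypothetical.
concentrations = [160, 80, 40, 20, 10, 5]           # mg/mL, highest to lowest
turbid = {160: False, 80: False, 40: False,          # visible growth in broth?
          20: False, 10: True, 5: True}
growth_on_agar = {160: False, 80: False,              # growth after subculturing
                  40: True, 20: True}                  # the clear tubes onto agar

def mic(concs, turbidity):
    """Lowest concentration whose tube stayed clear (no turbidity)."""
    clear = [c for c in concs if not turbidity[c]]
    return min(clear) if clear else None

def mbc(concs, turbidity, agar_growth):
    """Lowest concentration whose clear tube also showed no growth on agar."""
    bactericidal = [c for c in concs
                    if not turbidity[c] and not agar_growth.get(c, True)]
    return min(bactericidal) if bactericidal else None

print("MIC:", mic(concentrations, turbid), "mg/mL")                   # -> 20 mg/mL
print("MBC:", mbc(concentrations, turbid, growth_on_agar), "mg/mL")   # -> 80 mg/mL
```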
Detection and Identification of Bioactive Antimicrobial Metabolites
The ethyl acetate extract of Bacillus aquimaris strain Persiangulf TA2 was subjected to GC-MS analysis. Seven peaks were observed in the chromatogram obtained from the GC-MS of the extract, the sharpest of which are listed in Table 4. The most abundant compound was a new compound (4-fluoro-2-trifluoromethyl imidazole). There were also no reports of the identification of the compounds 2-hydrazino-4-methyl-6-methylthio-pyrimidine and 3,5-dihydroxybenzoic acid in prokaryotes.
Determination of MIC and MBC
A fermentation medium in a volume of 5000 mL was prepared for strain Persiangulf TA2, and the antimicrobial compound produced by this strain was extracted with ethyl acetate and dissolved in methanol at a ratio of 160 mg.mL-1 to determine the MIC and MBC against the sensitive pathogens used. The MBC and MIC for the ethyl acetate extract of strain TA2 were 20 and 5 mg.mL-1 (Staphylococcus aureus), 40 and 20 mg.mL-1 (MRSA, Escherichia coli, and Enterococcus faecalis), 40 and 10 mg.mL-1 (Acinetobacter baumannii 1256), and 80 and 40 mg.mL-1 (Staphylococcus epidermidis [clinical], Shigella sp. [clinical], Bacillus cereus [clinical], and Klebsiella pneumoniae [clinical]), respectively (Fig. 4).
Optimization of Antimicrobial Metabolite Production
The strain Persiangulf TA2 was grown under different conditions, such as different temperatures, pH values, and NaCl concentrations. Then, the effect of the secondary metabolites produced by strain TA2 under different conditions was investigated against MRSA by the disk diffusion method (Fig. 5). The maximum antibiotic activity was achieved at NaCl 2.5-5% (w/v) (Fig. 5A), an initial pH of 6-7 (Fig. 5B), and a temperature of 35-37 °C (Fig. 5C).
Discussion
Because pathogenic microbes have used new strategies to neutralize the effects of old chemical antibiotics, different antibiotic-resistant strains have emerged that can increase the severity and duration of disease and, in some cases, even increase human mortality. Therefore, the emergence of the aforementioned strains is a serious threat. The identification of new bioactive compounds, especially from marine microorganisms, can be a good option to deal with this problem (33, 34). In this study, different bacteria isolated from the water and sediments in the north of the Persian Gulf were screened to identify antimicrobial compounds. The four new marine species with significant antimicrobial effects were Rossellomorea aquimaris strain Persiangulf TA2, Halomonas sp. strain Persiangulf TA1, Salinicoccus roseus strain Persiangulf TA4, and Exiguobacterium profundum strain Persiangulf TA9; however, among these strains, only the Persiangulf TA2 strain had a significant inhibitory effect against both Gram-positive and Gram-negative pathogenic microbes. In most studies, newly identified antimicrobial compounds had minor inhibitory effects on Gram-negatives and often affected Gram-positives (35). Therefore, in the present study, the introduction of the TA2 strain with an inhibitory effect on Gram-negative bacteria is interesting and important.
To date, there have been no reports of the isolation of Bacillus aquimaris with antimicrobial properties in Iran (35, 36, 42). There are reports of other species of Bacillus with antimicrobial properties. This inhibitory effect is related to the newly identified compound in the bacterial antimicrobial extract. The newly identified compound has not yet been reported as a natural antimicrobial metabolite, and only derivatives of this compound (made artificially) have been used to treat diseases. Various studies have shown that members of the genus Bacillus, such as Bacillus aquimaris, have the ability to produce different types of bioactive compounds that inhibit pathogens. Such an inhibitory compound can act on different pathogens, or one Bacillus species can produce different compounds to inhibit various microbes (34). In research to identify Bacillus species with antimicrobial activity, Bacillus aquimaris has been reported less frequently than other Bacillus species. This might be due to the particular requirements and growth conditions of this bacterium, which may have shown much weaker antimicrobial properties than other species and, as a result, has not been reported (35). The extracts of Bacillus strains isolated from the Caspian Sea had a minor inhibitory effect on Gram-negatives but have shown significant inhibitory properties against Gram-positive pathogens (43). Drug-resistant microorganisms, including methicillin-resistant Staphylococcus aureus, have caused serious crises in hospitals and health centers around the world. Few chemical drugs are available to control drug-resistant pathogens, including MRSA, and most of these drugs can have serious side effects in addition to high costs. For this reason, natural secondary metabolites can be a good alternative for controlling these pathogens (44). In this study, the extract of Bacillus aquimaris showed significant activity against MRSA: with increasing amounts of the extract, the inhibition zone around the extract-containing disc against the MRSA pathogen increased. Therefore, pathogens are more likely to be inhibited by increasing the amount of antimicrobial compound, provided they have not become resistant. Extracts of marine Streptomyces (44), Pseudoalteromonas (45) and Bacillus velezensis (46) strains have also shown significant antimicrobial activity against MRSA. Compounds extracted from a collection of aquatic bacteria associated with marine sponges isolated in Portugal showed weak antimicrobial activity against MRSA strains, and their main activity was against Bacillus subtilis (49). In the present study, the growth conditions of the strains and the conditions for optimal production of antimicrobial compounds were moderate in terms of temperature, pH, and salinity. The pigments extracted from Salinicoccus sesuvii MB597 and Halomonas aquamarina MB598 have shown a good inhibitory effect on a large number of pathogenic microbes. Furthermore, the optimal conditions for these two strains have been reported to lie in normal and intermediate ranges (48). With regard to all the aforementioned issues, the difference in the results can be explained (35). The Bacillus aquimaris strain Persiangulf TA2, Salinicoccus roseus strain Persiangulf TA4 and Exiguobacterium profundum strain Persiangulf TA9 also had amylolytic activity. Given the importance of microbial amylase in industry, these strains could be candidates for further research in this field. Since marine ecosystems do not have uniform and stable conditions, the enzymes isolated from marine microorganisms should be highly adaptable to unbalanced conditions and can be a good option in different industries (52). Marine Bacillus species can produce a variety of secondary metabolites, such as antimicrobial, antifungal, and anticancer compounds (5). The results of the present GC-MS analysis showed that the ethyl acetate extract of strain Persiangulf TA2 contains several important chemical compounds. Among them, the most abundant was the 4-fluoro-2-(trifluoromethyl)imidazole compound, which has not been reported from prokaryotes to date. Furthermore, derivatives of this compound have been made artificially, and their antimicrobial effects have been investigated. There are also no reports of the identification of the compounds 2-hydrazino-4-methyl-6-methylthio-pyrimidine and 3,5-dihydroxybenzoic acid in prokaryotes. Imidazole is a heterocyclic compound with synthetic derivatives that have various biological and therapeutic applications and are used against bacteria, fungi, viruses, and tumor cells (53). Some compounds have been previously reported in microbial extracts, including C14H16N2O2 with antimicrobial and antioxidant properties (54-56), and the compounds C11H18N2O2 and C7H10N2O2 with antioxidant properties (56-58).
In addition, the compound C11H18N2O2 from marine Nocardiopsis sp. DMS 2 has shown inhibitory activity against biofilm-forming K. pneumoniae (59). The secondary metabolites extracted from Vagococcus fluvialis and Bacillus cereus included alkaloid, flavonoid, and saponin compounds (37). The new compounds identified in this strain (Persiangulf TA2) make it a good candidate for further research in the field of pharmacy and treatment.
Conclusion
Large parts of the Persian Gulf have a pristine and intact ecosystem; therefore, the region is likely to harbour a very wide diversity of organisms. Bacillus aquimaris strain Persiangulf TA2 showed an interesting inhibitory effect against Gram-negative and Gram-positive pathogens. The new compound identified in this bacterium could have an important application in the inhibition of pathogenic bacteria, including important antibiotic-resistant pathogens. Furthermore, the mass production of antibiotics derived from aquatic microorganisms can be important due to their high adaptability to unbalanced sea conditions.
Figure 1. Investigation of the antimicrobial activity of microbial extracts by the disk diffusion method. A) Effect of ethyl acetate extracts (at a 100 mg.mL-1 concentration) from four different strains on Staphylococcus epidermidis (clinical). B) Effect of different concentrations (30, 50, and 100 mg.mL-1) of the ethyl acetate extract of strain no. 2 on methicillin-resistant Staphylococcus aureus.
Figure 2. Optimization of growth conditions for the antibiotic-producing strains. (A, B, C) Optimal growth conditions in terms of temperature, pH, and salt percentage after 48 hours of incubation, determined by measuring turbidity at 600 nm.
Figure 5. Investigation of the effect of different conditions on the production of antimicrobial compounds. (A, B, C) The effects of salinity, pH, and temperature on the production of antimicrobial compounds by Bacillus aquimaris strain Persiangulf TA2 were determined by measuring the inhibition zone (mm) against methicillin-resistant Staphylococcus aureus. DIZ: diameter of inhibition zone.
Table 4. Compounds detected in antimicrobial extracts of Bacillus aquimaris strain Persiangulf TA2 by gas chromatography-mass spectrometry.
Figure 4. Determination of the minimum bactericidal concentration (MBC) and minimum inhibitory concentration (MIC). MIC and MBC values (mg/mL) of extracts of Bacillus aquimaris strain Persiangulf TA2 against pathogenic microbes.
2024-01-25T05:08:11.209Z
2023-10-01T00:00:00.000
{ "year": 2023, "sha1": "4ffa13b40363a2f0227b773191920526f5b4cd76", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "4ffa13b40363a2f0227b773191920526f5b4cd76", "s2fieldsofstudy": [ "Environmental Science", "Medicine", "Chemistry", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
181515761
pes2o/s2orc
v3-fos-license
Quantification of Fusobacterium nucleatum at Depths of Root Dentinal Tubules in the Tooth Using Real-time Polymerase Chain Reaction: An In Vitro Study
Introduction: Microorganisms have been known to cause pain and infection in the tooth. Fusobacterium nucleatum was always found predominantly in failed root canal treatments. Objective: The aim of the present study was to quantify Fusobacterium nucleatum at the inner and peripheral halves of the coronal, middle and apical regions of the root by using real-time polymerase chain reaction (qPCR). Methods: Extracted maxillary incisors were taken. After shaping and cleaning, the root canals were inoculated with Fusobacterium nucleatum. Samples were taken from both the inner and peripheral halves of dentin. The inoculated teeth were maintained in anaerobic jars for two weeks, and the bacterial isolates were changed every third day. The quantification was done using qPCR. Results: The cycle threshold (Ct) value in all groups showed the presence of Fusobacterium nucleatum. Conclusion: Fusobacterium nucleatum penetrates the entire thickness of dentin in the middle and apical regions. Its coaggregation with other microorganisms could be responsible for symptomatic endodontic patients.
Introduction
Microorganisms have long been recognized as the primary cause for the development of periapical lesions and failure of endodontic treatment [1]. Successful endodontic treatment is dependent on the eradication of the infective microflora from the root canal system. Flare-up is defined as the acute exacerbation of asymptomatic pulp or periradicular pathoses after the initiation or continuation of root canal treatment [2]. Failure of root canal treatment is mainly due to procedural errors resulting in a lack of control and prevention of intracanal endodontic infections. Endodontic failures are usually associated with the persistence of microbial infection in the root canal system and the periradicular area [3]. Fusobacterium nucleatum is one of the main microorganisms found in root canal infection and periodontal disease [4]. It is one of the most frequently isolated microbes in the root canals of untreated teeth as well as root canal treated teeth with recurrent infection. The virulence of Fusobacterium nucleatum increases when it acts along with other anaerobes [5]. Thus, the aim of the present study is to quantify Fusobacterium nucleatum at both the inner and peripheral halves of the coronal, middle and apical regions of the roots using qPCR.
Materials and Methods
In the present study, ten freshly extracted single-rooted maxillary central incisors (extracted for periodontal or prosthodontic reasons) were used. Teeth of uniform length were taken. The roots were decoronated at the level of the cementoenamel junction. The roots were treated in an ultrasonic bath containing 3% sodium hypochlorite (NCP Chlorchem, South Africa) for five minutes to remove the debris. Chemical traces were removed by immersing the roots in an ultrasonic bath containing distilled water for five minutes. The canals were prepared to an apical size of 40 using 2% taper Kerr endodontic files (Kerr Corp. Orange, CA, USA). All the roots were sterilized in an autoclave for 20 minutes at 121 °C. The roots were 12 mm in length. They were inoculated with an ATCC 25586 culture of F. nucleatum (Microbiologics Inc., St. Cloud, Minnesota, US; Batch No. 328641) and maintained in anaerobic jars for two weeks. The culture was changed once every 72 hours.
Sample preparation
The roots were divided into three portions and samples were taken. Group I consisted of samples taken from the coronal third of the tooth. Samples were taken from the middle third and the apical third of the tooth for Group II and Group III, respectively. In each of these groups, samples were taken from the inner and peripheral halves of the root dentin, comprising Group A and Group B, respectively. An autoclaved diamond disk was used to split each tooth vertically into two halves, and Gates Glidden drills (Kerr Corp. Orange, CA, USA) were used to remove dentin from the inner and peripheral regions of the coronal, middle and apical thirds. Group IA-Inner dentinal half in coronal third; Group IB-Peripheral dentinal half in coronal third; Group IIA-Inner dentinal half in middle third; Group IIB-Peripheral dentinal half in middle third; Group IIIA-Inner dentinal half in apical third; Group IIIB-Peripheral dentinal half in apical third.
DNA isolation
The samples were thawed, vigorously vortexed, and centrifuged at 8,000 × g for five minutes. After the supernatants were removed, the pellets were used for DNA extraction. DNA extraction from the dentine samples was done by an enzymatic extraction method (bacterial genomic DNA isolation). The DNA isolation protocol used in the present study was adopted with slight modification [6].
Specific primers
The 16S rRNA-directed specific primers were forward (AGAGTTTGATCCTGGCTCAG) and reverse (GTCATCGTGCACACAGAATTGCTG).
PCR amplification protocol
DNA amplification and detection by qPCR were performed with the specific primers using the 7900HT ABI Real-Time PCR Detection System [7]. For each real-time PCR, 20 μl of SYBR Green master mix (Thermo Fisher Scientific, Hampton, New Hampshire, United States) was used. The total PCR amplification volume for each reaction was placed in a well of a 96-well MicroAmp optical plate (Thermo Fisher Scientific, Waltham, Massachusetts, US) and covered with optical-quality sealing tape (Applied Biosystems, Fisher Scientific, Waltham, Massachusetts, USA). The amplification protocol for the specific primers was five minutes of initial denaturation at 95 °C, followed by 40 consecutive cycles of 95 °C for 30 seconds, 65 °C for 45 seconds, and 72 °C for 30 seconds, with a final step at 72 °C for 30 seconds, and Ct values were obtained [8]. The Ct value represents the amount of target region amplified. Statistical analysis was done by subjecting the Ct values to one-way analysis of variance (ANOVA) and the t-test.
Results
The bacterial penetration in Group IA was statistically significantly higher than that in Group IB. Equal penetration of Fusobacterium nucleatum was seen in both the inner and peripheral halves of Group II and Group III. Fusobacterium nucleatum was seen to penetrate the entire thickness of dentin in the middle and apical regions.
Discussion
Fusobacterium nucleatum was seen in 48% of root canals with apical rarefaction [9]. Fusobacterium nucleatum can coaggregate with other microorganisms like Enterococcus faecalis [3]. It can survive and multiply even if as little as 10% of serum is left between the treatment appointments [10]. Despite being killed during root canal treatment, the lysed cells present in the dentinal tubule or in the biofilm can act as donors of chromosomal or plasmid DNA. The plasmids or smaller peptides called pheromones can impart drug resistance and virulence to other microbes like Enterococcus faecalis [11], thereby increasing the pathogenicity of other microorganisms.
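Returning briefly to the statistical comparison described in the Methods above, the following is a minimal Python sketch of a one-way ANOVA and a pairwise t-test on Ct values from the six groups; the Ct numbers are hypothetical placeholders, not the study's measurements.

```python
# Minimal sketch: comparing hypothetical Ct values across the six dentin groups
# with a one-way ANOVA and a t-test, mirroring the analysis described above.
from scipy import stats

ct_values = {                       # hypothetical Ct readings per group
    "IA": [24.1, 23.8, 24.5], "IB": [27.2, 26.9, 27.5],
    "IIA": [25.0, 25.3, 24.8], "IIB": [25.1, 25.4, 24.9],
    "IIIA": [25.6, 25.9, 25.4], "IIIB": [25.7, 25.5, 25.8],
}

# One-way ANOVA across all six groups (a lower Ct means more bacterial DNA)
f_stat, p_anova = stats.f_oneway(*ct_values.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Pairwise t-test between the inner and peripheral halves of the coronal third
t_stat, p_t = stats.ttest_ind(ct_values["IA"], ct_values["IB"])
print(f"Group IA vs IB: t = {t_stat:.2f}, p = {p_t:.4f}")
```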
Studies have shown that dark-pigmented bacteria in pure culture produce mild infection, but when mixed with Fusobacterium nucleatum, they resulted in abscess formation and death of animals [12]. Coaggregation of Fusobacterium nucleatum with Enterococcus faecalis suggested a potential role for the combination in endodontic infections [13]. Coaggregation of Fusobacterium nucleatum and other species facilitated the growth of Enterococcus faecalis [14] and increased their number to 27%-56% in nonhealing endodontic cases [15]. Single-rooted central incisors were taken in this study to have uniformity in the sample preparation. The different methods for identifying bacteria within dentinal tubules are the culture method, fluorescence microscopy, confocal microscopy, and molecular techniques. The standard culture method used for identification and enumeration of Fusobacterium nucleatum has numerous technical difficulties because of the fastidious nature of the species. Moreover, these methods are extremely laborious, possess lower sensitivity and are time-consuming. Also, the methods usually cannot distinguish Fusobacterium nucleatum to the subspecies level. Fluorescence microscopy and confocal microscopy can identify bacteria in dentinal tubules but cannot quantify them at the depths of the dentinal tubules [16]. Molecular approaches have the potential to make the list of endodontic pathogens more accurate. Virulence features can vary among strains of a given species, and molecular methods can allow detection of virulent genotypes. Molecular methods for microbial typing can allow tracking of the origin of root canal bacteria. Application of functional genomics and microarray technologies to the study of endodontic disease includes deciphering the host-pathogen interaction in molecular detail. It is helpful in identifying target molecules and pathways for diagnosis and treatment, as well as in predicting prognosis. Conventional PCR is qualitative, but qPCR is quantitative and shows high sensitivity. Hence, qPCR was used in the study. Results from the current study showed that Fusobacterium nucleatum could penetrate the full thickness of the dentinal tubules in all the groups. A statistically significant difference in the number of bacteria between the inner and peripheral halves was seen only in the coronal third of the root canal, with more bacteria seen in the inner half. The reason for this could be the increased thickness of dentin in that area. The present study showed that Fusobacterium nucleatum could penetrate almost the entire depth of the dentinal tubules in three weeks. The current hypotheses on the etiopathogenesis of periapical pathoses implicate both bacterial and host factors. Fusobacterium nucleatum induces the expression of matrix metalloproteinase (MMP-13) in host cells infected with the bacterium [17] and stimulates the expression of matrix metalloproteinase (MMP-1) as well [18]. In addition, the lipopolysaccharides from Fusobacterium nucleatum trigger the synthesis of interleukin 1α and tumor necrosis factor-α and their release from macrophages [19,20], which in turn might be involved in apical periodontitis-related bone resorption. Further, some systemic infections like liver abscess [21] and arthritis [22] were caused by Fusobacterium nucleatum of dental origin. Fusobacterium nucleatum is the most frequently associated microorganism in the extraradicular biofilm [6].
Bacterial biofilm was found to develop on the root surface outside the apical foramen and is associated with apical periodontitis. Studies have suggested that Porphyromonas gingivalis, Tannerella forsythensis and Fusobacterium nucleatum are associated with extraradicular biofilm formation and refractory periodontitis [23]. Biofilms are uniquely suited for horizontal gene transfer [24], and as such might provide an avenue for communication between species of relevance to endodontic infection [25], by plasmid transfer [11]. Therefore, the analysis of the invasiveness of Fusobacterium nucleatum into the dentinal tubules was considered. The high occurrence of Fusobacterium nucleatum recorded by qPCR might be attributed to its ability to invade the dentinal tubules and adhere to the walls. The present study inferred that Fusobacterium nucleatum had greater invasiveness and could penetrate almost the entire thickness of dentin in the apical and middle regions. Further, coaggregation with other microorganisms could be responsible for symptomatic endodontic cases. Thus, within the limitations of the study, the presence of Fusobacterium nucleatum throughout the entire length of the dentinal tubules underscores the complexity involved in the eradication of microorganisms during root canal treatment. Further studies need to be done to find suitable root canal irrigants and irrigant delivery systems to overcome this difficulty.
Conclusions
Fusobacterium nucleatum penetrates the entire thickness of dentin in the middle and apical regions. Due to its coaggregation with other microorganisms, it could be responsible for symptomatic endodontic cases. The findings of the present study indicate the need for root canal irrigants and intracanal medicaments that would reach the entire length of the dentinal tubules for the success of root canal treatment.
Additional Information
Disclosures
Human subjects: Consent was obtained by all participants in this study. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2019-06-07T21:13:27.426Z
2019-05-01T00:00:00.000
{ "year": 2019, "sha1": "6645137f9c23b194150b8bad6339de921075d447", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/19373-quantification-of-fusobacterium-nucleatum-at-depths-of-root-dentinal-tubules-in-the-tooth-using-real-time-polymerase-chain-reaction-an-in-vitro-study.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6645137f9c23b194150b8bad6339de921075d447", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
246791781
pes2o/s2orc
v3-fos-license
A Review of Security Evaluation of Practical Quantum Key Distribution System
Although the unconditional security of quantum key distribution (QKD) has been widely studied, the imperfections of the practical devices leave potential loopholes for Eve to spy on the final key. Thus, how to evaluate the security of QKD with realistic devices is always an interesting and open question. In this paper, we briefly review the development of quantum hacking and security evaluation technology for a practical decoy state BB84 QKD system. The security requirements and parameters in each module (source, encoder, decoder and detector) are discussed, and the relationship between quantum hacking and security parameters is also shown.
Motivation
Quantum key distribution (QKD) provides an approach to share a key between two remote parties via an insecure channel with information-theoretic security (also called unconditional security). Since the first QKD protocol, BB84, was proposed by Bennett and Brassard in 1984 [1], various types of QKD protocols based on discrete variables [2-4] or continuous variables [5,6] have been proposed, which have been applied to different situations according to their characteristics. Remarkably, QKD-based quantum networks are also available in many countries [7-9]. For example, an integrated space-to-ground quantum communication network over 4600 km was implemented in China [10]. However, the unconditional security of the final key still might be broken, because the imperfections of the practical devices could be exploited by Eve to bypass the security assumptions of QKD. For example, in the standard BB84 protocol, Alice is required to encode her information in a single-photon pulse. Nevertheless, instead of the single-photon source (SPS), the weak coherent source (WCS), which includes a multi-photon portion, is widely used in most practical QKD systems. Then, Eve can perform the photon-number-splitting (PNS) attack by exploiting these multi-photon pulses [11,12]. So far, many quantum attack strategies have been discovered (see Table 1 in Section 5 for detailed information, and Ref. [13] for a review). In order to overcome the practical security threat, at least two solutions have been proposed. One is to design new QKD protocols in which the loopholes of practical devices can be partially removed. For example, all loopholes in the detection part can be removed by the measurement-device-independent (MDI-) QKD protocol [14]. Moreover, by introducing Bell's inequality [15,16], the unconditional security of device-independent (DI-) QKD can be proven with just a few basic assumptions. The other solution is security patching: patches against certain known attacks are employed in a QKD system. By measuring or monitoring the parameters of the QKD system, the leaked information can be estimated. Security patching plays an important role in guaranteeing the security of a QKD system with imperfect devices. First, a security evaluation is necessary for most practical QKD systems, even for MDI- and DI-QKD. Second, by monitoring the parameters of the QKD system, Alice and Bob can make sure that Eve cannot perform some quantum attacks, and then the performance of the QKD system can be improved. In this paper, we review the development of security evaluation technology for QKD.
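Since the rest of this review focuses on the decoy-state BB84 protocol, it may help to see up front how the decoy-state method estimates the single-photon contribution that later sections rely on. The following is a minimal Python sketch of the standard vacuum-plus-weak-decoy bounds; the intensities, gains and error rates are hypothetical placeholders, and the formulas follow the usual two-decoy analysis rather than any specific system's calibration.

```python
# Minimal sketch: vacuum + weak-decoy estimate of the single-photon yield Y1,
# error rate e1 and gain Q1 from observed quantities (all numbers hypothetical).
import math

mu, nu = 0.5, 0.1          # signal and decoy intensities (mean photon numbers)
Q_mu, E_mu = 8.0e-3, 0.02  # observed signal gain and QBER
Q_nu, E_nu = 1.8e-3, 0.035 # observed decoy gain and QBER
Y0 = 1.0e-5                # vacuum (dark-count) yield, from the vacuum decoy state

# Lower bound on the single-photon yield (standard two-decoy bound)
Y1 = (mu / (mu * nu - nu**2)) * (
    Q_nu * math.exp(nu)
    - Q_mu * math.exp(mu) * (nu**2 / mu**2)
    - (mu**2 - nu**2) / mu**2 * Y0
)

# Upper bound on the single-photon error rate (e0 = 1/2 for vacuum events)
e1 = (E_nu * Q_nu * math.exp(nu) - 0.5 * Y0) / (Y1 * nu)

# Lower bound on the single-photon gain within the signal states
Q1 = Y1 * mu * math.exp(-mu)

print(f"Y1 >= {Y1:.3e}, e1 <= {e1:.3%}, Q1 >= {Q1:.3e}")
```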
Although there are many different QKD protocols based on both discrete variables and continuous variables, we focus our main attention on the decoy state BB84 protocol [17-19] here, since it is the most widely used protocol in practical applications. In Section 2, we introduce the communication model of a typical QKD system, which can be divided into five modules (source, encoder, channel, decoder, and detector). Then, the basic security requirement for each module is introduced. In Section 3, by reviewing the main quantum hacking strategies in each module (the quantum channel is totally controlled by Eve, and the unconditional security of QKD is proven under the general coherent attack; thus, the practical imperfections of the quantum channel only reduce the efficiency of the QKD system but do not break its security), it is clearly shown that, once some of the security requirements introduced in Section 2 are broken (due to the imperfections of the practical optical and electrical devices), the unconditional security of the final generated key will be compromised. In Section 4, we review the security model and show how to define the security parameter, which describes the deviation between the theoretical security requirement (introduced in Section 2) and the practical implementation (which could be exploited by Eve, Section 3). In Section 5, we introduce the security evaluation technology and show the relationship between quantum hacking and security parameters.
Communication Model and Security Requirement
According to a general communication model [20], a QKD system can also be divided into five parts (Figure 1): source, encoder, channel, decoder and detector. Now, we give the detailed definition and security requirement of each module for a typical decoy state BB84 protocol.
Figure 1. The conceptual communication model of a QKD system, which includes five modules: source, encoder, channel, decoder and detector. The source generates the required optical pulse, a single-photon pulse for BB84 or weak coherent pulses with different average intensities. The encoder and decoder transform two classical bits into quantum states, and back. The detector absorbs the photon and registers the click of the SPDs. The detailed definition and security requirement for each module are given in the main text.
Source: In this module, a required optical pulse is generated, such as a single-photon pulse for the standard BB84 protocol. However, a perfect SPS is still unavailable for a practical QKD system, due to complexity, stability, cost, and so on. Thus, for a practical decoy state BB84 protocol, the source generates a weak optical pulse with a stable average intensity and a known photon number distribution (PND). The most widely used source in a practical QKD system is a laser diode combined with an attenuator, which generates weak coherent pulses following the Poisson distribution with an average intensity of µ ≈ 0.1. Although the security of QKD is compromised by the multi-photon pulses in the WCS, the decoy state method [17-19] can be used to estimate the contribution of the single-photon pulses. In other words, with the help of the decoy state method, the laser diode combined with an attenuator can be considered an SPS with finite generation efficiency (the contribution of the multi-photon pulses can be removed from the total gain and bit error). In order to guarantee the security of a decoy state BB84 QKD system, at least three basic assumptions are required [17-19]: (1) the average intensity and the PND of the source should be exactly stable and known; (2) the phase of each optical pulse should be uniformly randomized from 0 to 2π; (3) the decoy states should be indistinguishable in any dimension except for the average intensity.
Encoder: In this module, Alice transforms her two random classical bits (one called the basis bit and the other the information bit) into a quantum state. One of the four encoded quantum states is then randomly generated by modulating the photon emitted by the source. The two classical bits should be generated by a true random number generator (TRNG), such as a quantum random number generator [21,22]. The transformation from classical bits to quantum states is performed by a modulator, which is the core part of the encoder module and should be carefully protected to exclude Eve. In order to make sure that Eve cannot distinguish the encoded quantum states, at least three assumptions are required [23-25]: (1) Eve does not have any information about the random numbers used by Alice (the random numbers used by Alice should be random and secure); (2) the encoded quantum state should perfectly match the standard quantum state required by the BB84 protocol (perfect state preparation); and (3) the encoded quantum state should not be distinguishable in any dimension except for the encoded degree of freedom (no information is leaked through side channels).
Quantum channel: In this module, the quantum state of Alice is transmitted to Bob. Fiber and free space are two typical quantum channels for QKD (the security of the classical channel used for post-processing and device calibration is not considered here). In the security model of QKD, the quantum channel is assumed to be totally controlled by Eve, who can perform any operation admitted by quantum mechanics. Thus, there are no security requirements for the quantum channel. However, the loss and noise of the quantum channel will amplify the flaws of the source and encoder [23] and then limit the final key rate. Thus, a quantum channel with lower loss and noise is always desirable to improve the performance of a practical QKD system.
Decoder: In this module, by measuring the optical pulse coming from the quantum channel, Bob transforms the quantum state back into two classical bits (also called the basis bit and information bit). The basis bit can be actively chosen with a TRNG or passively registered with a beam splitter. The information bit is registered according to the clicks of the SPDs. Since the optical pulse measured by Bob is totally controlled by Eve, the click of the SPDs is determined by three parts: the encoded state of Alice, the operation of Eve and the measurement of Bob. In other words, the decoder module can be considered a box with one input and four outputs (although in some QKD systems Bob actively chooses his basis and there are only two outputs in the decoder, theoretically speaking we can consider the two bases one by one). Each optical pulse going into the box will exit from one of the four outputs (representing the two classical bits).
Therefore, the following assumptions are required for the decoder module [24,25]: (1) the basis of Bob should be random, and cannot be controlled or known by Eve; (2) for each basis, Eve cannot control the output of the decoder box by manipulating the parameters of each optical pulse, such as the time, wavelength, and so on; and (3) no optical or electrical signal is leaked to the quantum channel from the decoder module. Since the decoder is the weakest part of the QKD system, we give a detailed discussion about it here. The first two assumptions above mean that Eve cannot control the probability P(i|λ) (i = 0, 1, 2, 3), which is the conditional probability that a photon exits from the i-th port of the decoder box given the hidden variable parameter λ controlled by Eve. Here, we remark that both the phase that Bob randomly chooses as his basis and the analysis of Alice's information bit are included in the "Decoder" in this review. The main advantage is that a part of the imperfection of the SPDs can be included in the basis bit and information bit. For example, the SPD blinding attack [26] against a polarization-encoding QKD system can be described by saying that Eve can set the probability P(i|λ) as P(i|I, Pol.) = p·δ_ik for each optical pulse. Here, I (Pol.) is the intensity (polarization) of Eve's optical pulse, k is the index of the SPD that should click if Eve is absent, and p is the probability that an optical pulse would be detected by Bob when Eve is absent.
Detector: In the detector module, Bob measures the decoded optical pulse with SPDs and registers which SPD clicks (according to the security analysis, if more than one SPD clicks, Bob should randomly register one). Based on the decoder module above, four SPDs are required. For a QKD system with only two SPDs, another two virtual SPDs that have the same parameters as those of the two factual SPDs can be introduced. Then, the two virtual SPDs are used to measure the optical pulse for one basis and the two factual SPDs for the other basis. Thus, for the detector module, the following assumptions are required [27]: (1) all the clicks of the detectors can be registered by Bob; (2) no active optical or electrical signal is leaked to Eve from the detector.
Quantum Hacking
In this section, we briefly introduce the quantum attacks to show that Eve can exploit the imperfections of the practical devices to break parts of the security requirements in Section 2, and then compromise the unconditional security of the final generated key. Here we should remark that most of these attacks can be removed by taking the security parameters into the security model or by monitoring the security parameters to rule out Eve's attack. The security parameters and the evaluation technology are discussed in the next two sections. The detailed definitions of these security parameters, which characterize the deviation between the theoretical requirement and the practical implementation, are discussed in Section 4. The relationship between quantum hacking and the security parameters is discussed in Section 5.
Source
Phase randomization is a core assumption for QKD with a WCS. However, it has been shown that the phase might be unrandomized due to imperfect implementation, which gives Eve a chance to distinguish the states and learn the secret keys [23]. Specifically, Eve can apply the unambiguous state discrimination (USD) measurement to distinguish decoy states and signal states if the phase is fully non-random [28].
With the help of homodyne detection, the encoded quantum state can be distinguishable when the phase of the source is only partially randomized [29]. Furthermore, the distribution of the phase can be tampered from uniform to Gaussian via the laser-injection attack [30] (see Figure 2a,b for details).
Figure 2. The phase distribution and intensity with and without Eve's laser-injection attack, reprinted from Refs. [30,31]. (a,b) Phase distribution of Alice's adjacent pulses tested from two samples of ID300 lasers. Without Eve's attack, the phase is random. However, under 50 µW or 100 µW of Eve's injected light, the phase follows a Gaussian distribution. (c) The increased intensity under the laser-injection attack.
The shape of the optical pulses is another type of vulnerability. If one drives the laser diode with different amounts of electrical current to generate decoy states and signal states, this driving mode may result in different lasing times and lasting periods for decoy states and signal states [32], as shown in Figure 3a. To exploit this loophole, Eve carefully chooses two observing windows, W_d and W_s, to distinguish the signal state and decoy state [32], as shown in Figure 3a. The configuration of multiple laser diodes may disclose the variation of the decoy states and signal states in the timing, spectral, and intensity degrees of freedom [33], which is shown in Figure 3b. Time and spectrum are two other typical side channels. Intersymbol interference in time is usually disclosed in a high-speed QKD system [34]. A distorted driving signal for the intensity modulator may result in an intensity correlation between neighboring pulses in the time degree of freedom, as shown in Figure 4, which breaks the assumption of independent and identical distribution. By actively shifting the arriving time of pulses at an intensity modulator, the spectrum of optical pulses can be correlated with the intensity of the light in a plug-and-play QKD system [35]. In the decoy-state BB84 protocol, the intensities of decoy states and signal states are preset to optimal values, maximizing the key rate. However, these preset intensities might be manipulated by the laser-injection attack during the operating phase of a QKD system [30,31,36]. This is because Eve can lock Alice's laser diode by injecting a bright light into it. As shown in Figure 2c, the intensity of Alice's laser is increased by up to 3.07 times with the rise of Eve's injected power, which is not noticed by Alice and Bob. As a result, they may incorrectly estimate the contribution of the single-photon pulses. The intensity of Alice's pulses also can be actively manipulated by Eve with the laser-damage attack on the optical attenuator [37,38]. Eve's injected high-power light from the quantum channel first reaches the optical attenuator [39-41] and decreases the attenuation value [38]. Figure 5 illustrates the typical result of decreased attenuation after the attenuator was illuminated by a 2.8 W laser for 10 s, which increases the intensity of Alice's pulses.
Encoder
The encoder is always a target of Eve's attack, since the quantum states are modulated here to represent the secret information. The security vulnerabilities of the encoder module come from both the encoding and non-encoding degrees of freedom. For the encoding degrees of freedom, an imperfect encoder module may prepare non-orthogonal states.
For example, in a phase-encoding QKD system, the encoder is assumed to generate a state with one of the four phases in {0, π/2, π, 3π/2}. However, the actual phase modulated on the optical pulse may deviate from the required one, which allows Eve to partially distinguish the states [42]. Furthermore, the precision of the modulation can be manipulated by modifying the arrival time of the pulses. For example, in a phase-encoding plug-and-play QKD system, Eve may remap the encoded phase of Alice by controlling the time at which the optical pulse arrives at Alice's modulator [43]. The non-encoding degrees of freedom also expose side channels to Eve. For instance, in the Trojan horse attack [45], Eve actively sends optical pulses into Alice's encoder from the quantum channel; a portion of them may be modulated by Alice and returned to the channel, as shown in Figure 6. Since the reflected photon is measured by Eve and not transmitted to Bob, it does not increase the error rate or interrupt the QKD system. Therefore, Eve can silently learn the secret key. It is notable that all the imperfections and attacks discussed for the source (Section 3.1) and the encoder (Section 3.2) not only affect the security of a decoy-state BB84 QKD system, but may also compromise the security of an MDI-QKD system, even though the latter is immune to all attacks on the measurement unit. Since MDI-QKD is outside the scope of this review, we do not discuss its security threats in detail here. Decoder At Bob's side, the decoder module shall randomly choose the basis bit and the information bit, as introduced in Section 2. In practice, these random choices may be known or controlled by Eve via the following attacks. Regarding the basis bit, Bob may actively choose his basis with a modulator. Therefore, similarly to the encoder, the choice of Bob's basis may be eavesdropped via the Trojan horse attack on the modulator [46]. Moreover, to reduce the probability that the Trojan horse light is detected by Bob's SPDs, Eve may employ a hacking laser with a wavelength outside the SPDs' sensitive range [47], which helps Eve hide her attack. Another configuration of basis selection, named passive choice of the measurement basis, is realized with a 50:50 beam splitter (BS). The randomness of the basis bit relies on the coupling ratio of the BS at the working wavelength, such as 1550 nm for a fiber-based QKD system. However, Eve may perform the wavelength-dependent attack [48]: Eve intercepts Alice's state and resends a faked state whose wavelength depends on its basis. As shown in Figure 7, the different wavelengths may result in a highly unbalanced coupling ratio of the BS, such as 99:1 or 1:99, which almost deterministically fixes the selection of the measurement basis. The information bit is registered by a click from one of Bob's two SPDs in the same basis. This result should be fully determined by the randomness of Alice's quantum state. However, in practice, Eve can also control the clicks of Bob's SPDs, which breaks the randomness of the information bit (see Section 2 for details). For example, Eve may exploit loopholes of the SPDs to control the information bit. Attacks of this type are the most numerous discovered so far; in them, Eve tailors the arrival time, the intensity, the phase, or the polarization of the hacking pulses.
There are various types of attacks that control the detection results by manipulating the arrival time of the hacking pulses, such as the time-shift attack [49], the efficiency-mismatch attack [50,51], the dead-time attack [52], the after-gate attack [53], and the superlinearity attack [54]. A typical detection efficiency curve is shown in Figure 8a, in which the two detectors present a mismatch at points A and B. Eve can then conduct the time-shift attack [49] by controlling the transmission delay of Alice's pulse. Once the pulse passes through the shorter arm (Figure 8b) and arrives at moment A (Figure 8a), "Detector 0" clicks with a higher probability than "Detector 1", and vice versa. Another typical time-related attack is the dead-time attack [52]. Instead of tampering with the signal state, Eve sends a faked state with multiple photons, for example the state |−⟩ shown in Figure 9a. By tailoring the intensity of the faked state, Eve can also control the information bit via the blinding attack [26,55,56]. Specifically, Eve first applies strong continuous-wave or pulsed light to drive the SPD from Geiger mode into linear mode, after which the SPD is no longer sensitive to a single photon. This happens because, as shown in Figure 10a, the bias resistor R_bias reduces the voltage across the APD below the breakdown voltage (Figure 10b) once bright light illuminates the APD. The blinded detector is then exploited in the "fake-state" attack: Eve intercepts Alice's state and resends a faked state with a well-designed intensity to the blinded detector. The faked state triggers a click with high probability, even 100%, whenever Bob and Eve choose the same basis; otherwise, Bob's SPD almost never clicks. By increasing the power of the hacking light, Eve can conduct the laser-damage attack to actively engineer multiple loopholes of a well-characterized detector [37]. Bright light with a power of 0.3 to 0.5 W can reduce the detection efficiency of the SPD by 80-90%. Such hacking light with a certain encoded state permanently decreases the detection efficiency of the targeted SPD, which creates an efficiency mismatch between Bob's SPDs. Moreover, when the hacking power is increased to the range from 1.2 to 1.7 W, the SPD is permanently blinded into the linear mode. Then, Eve proceeds as in the blinding attack mentioned above, and the detector is fully controllable. At other power levels, Eve may also change the characteristics of the detector, but this does not appear to help her [37]. When the power of the hacking laser exceeds a threshold, 2 W in this case, the detector is catastrophically damaged. Detector The side channels of the detectors may leak the result of the detection, even though the decoder module randomly decodes the basis bit and the information bit. For example, the backflash attack takes advantage of the phenomenon that an APD has a chance to emit photons back into the channel after each detection [57]. The backflashed photon may vary in polarization, reflection time, and so on, depending on which SPD it comes from. Therefore, Eve can tell which detector clicked and learn the secret information. Another possible side channel of the detector is in the timing domain. Since the optical path to each detector or the response time of each detector may be slightly different, the registration time of a detection may vary depending on the detector. If Eve has access to this timing side channel, she can derive the secret information [58].
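To give a sense of the scale of such detector-side vulnerabilities, a simple numerical illustration can be useful. The following Python snippet is a toy model rather than a reproduction of any attack reported in Refs. [49-51]: the efficiency values, the two arrival times, and the function names are illustrative assumptions. It estimates how often Eve can guess Bob's information bit correctly when she shifts every pulse to an arrival time at which one detector is far more efficient than the other.

import random

# Toy detection efficiencies at Eve's two chosen arrival times (illustrative values only).
# At time "A", detector 0 is far more efficient than detector 1; at time "B" the reverse holds.
EFF = {"A": {0: 0.10, 1: 0.005}, "B": {0: 0.005, 1: 0.10}}

def run_time_shift_attack(n_pulses=200_000, seed=1):
    """Estimate Eve's probability of guessing Bob's bit, given that Bob registered a click."""
    rng = random.Random(seed)
    clicks = correct = 0
    for _ in range(n_pulses):
        bob_bit = rng.randint(0, 1)              # detector that would click if Eve were absent
        shift = rng.choice(["A", "B"])           # Eve shifts the pulse to one of the two times
        if rng.random() < EFF[shift][bob_bit]:   # Bob's detector clicks with the shifted efficiency
            clicks += 1
            eve_guess = 0 if shift == "A" else 1  # Eve bets on the detector favored at that time
            correct += (eve_guess == bob_bit)
    return correct / clicks

if __name__ == "__main__":
    print(f"Eve guesses Bob's bit correctly for ~{run_time_shift_attack():.1%} of registered clicks")

With the assumed 20:1 efficiency mismatch, Eve identifies roughly 95% of the registered bits while introducing no additional errors, which illustrates why the efficiency curves of the two detectors must be characterized and matched during security evaluation.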
Security Model and Parameters According to the discussion above, Eve can break some security requirements and perform quantum hacking by exploiting the imperfections of practical devices. In this section, we show how to define the main security parameters in each module to describe the deviation between the theoretical requirement and the practical implementation. Before the main text, we give some discussions about the security parameter here. First, although the main security parameters are shown, the final key rate is not discussed in this paper. This is because it is still an open and very difficult question to calculate the final key rate by taking all the security parameters in one general security model. In some previous works [59,60], the flaws in the source and encoder were analyzed together, but most of flaws in the decoder and detector are still excluded. Second, these security parameters are measurable, and thus the legitimate parties can measure these parameters in the security evaluation phase, then evaluate the practical security and performance. In fact, by taking these security parameters into the key rate or monitoring them in real time, almost all of the discovered quantum hacking can be efficiently defeated. The Intensity and Photon Number Distribution Generally speaking, in order to estimate the contribution of the single-photon pulses, Alice should know the PND of her source {P n }. However, the PND varies in the practical systems due to the fluctuation of the average intensity of the optical pulse [61], or Eve's active attacks [30,31]. Thus, Alice should estimate the upper and lower bounds of the probability for each n-pulse, which is defined as Strictly speaking, Alice should measure the PND for the source with a photon number resolving detector. However, it is still quite experimentally challenging to achieve because only a few photons can be probably distinguished for some state-of-the-art detectors [62,63]. Thus, a reasonable assumption for Alice is that the source is a coherent state (any other source with a known PND in theory, such as the heralded single photon source [64], also can be analyzed with the same method given above) which is widely used in practical systems, and the variability of the PND can be estimated by the fluctuation of the average intensity of the source [38,61]. With the assumption given above, the deviation of the average intensity of the source is a proper parameter to bound the PND [61]. When Alice sends an optical pulse with average intensity µ, the factual intensity is bounded by Then, Alice can redefine the average intensity of the optical pulses and the deviation of intensity, which are given by [61] Thus, for the WCS, the bounds of the probability for each n−photon pulse are given by The Random Phase of Source In order to estimate the yield and error rate of the single photon pulses in the decoy state method, the source should be considered a mixed state of all photon number states. This assumption is valid only when the phase of the WCS is uniformly randomized within [0, 2π]. Then the density matrix of the WCS can be written as Here, µ is the average intensity of the source, |n is the Fock state with n−photon. Note that the security of BB84 also can be guaranteed with the discrete-phase-randomized WCS by modifying the post processing [65]. However, the phase-random assumption should be broken by Eve's active attacks [28,29,66] as described in Section 3. 
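For concreteness, the ideal expressions referred to above can be written out explicitly. The following formulas are the standard textbook forms for a phase-randomized weak coherent source and for bounding the photon number distribution under intensity fluctuation; they are given here as a plausible reconstruction of the omitted equations, not necessarily the exact parametrization used in Refs. [23,61].

\rho_{\mathrm{WCS}} = \int_{0}^{2\pi} \frac{d\theta}{2\pi}\, \big|\sqrt{\mu}\,e^{i\theta}\big\rangle\big\langle \sqrt{\mu}\,e^{i\theta}\big| = \sum_{n=0}^{\infty} e^{-\mu}\,\frac{\mu^{n}}{n!}\,|n\rangle\langle n|, \qquad P_{n} = e^{-\mu}\,\frac{\mu^{n}}{n!}.

If the emitted intensity is only known to lie in an interval \mu^{\mathrm{L}} \le \mu \le \mu^{\mathrm{U}}, the probability of each n-photon pulse can be bounded by

P_{n}^{\mathrm{L}} = \min_{\mu^{\mathrm{L}} \le \mu \le \mu^{\mathrm{U}}} e^{-\mu}\,\frac{\mu^{n}}{n!} \;\le\; P_{n} \;\le\; \max_{\mu^{\mathrm{L}} \le \mu \le \mu^{\mathrm{U}}} e^{-\mu}\,\frac{\mu^{n}}{n!} = P_{n}^{\mathrm{U}}.

In the decoy-state estimation, these worst-case bounds can then be substituted wherever the ideal Poissonian probabilities would otherwise appear.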
Thus, the practical density matrices for each encoded state should be rewritten as where α = z, x is the basis, i = 0, 1 is the bit for each basis, and P(θ) is the probability distribution of phase θ. The detailed expression of |α i e iθ depends on the encoding of the QKD protocol. For example, |α i e iθ = |αe iθ for the polarization encoding, and |α i e iθ = |αe i(θ+ϕ i ) s |αe iθ r for the phase encoding. Here, ϕ i is the encoded phase, and the subscript s(r) means the signal (reference) pulse. For the given state of Equation (6), the virtual entanglement states between Alice and Bob can be written as Here, |z 0(1) and |x 0(1) are the ideal quantum states required by the BB84 protocol. When the phase of the source is not uniformly randomized, the measured bit error in the x-basis does not equal the phase error in the z-basis. The phase error can be bounded by the measured bit error and the following parameter [23] where F(ρ, σ) is the fidelity between ρ and σ. The Distinguishability of the Decoy States For the discrete variable QKD with a non-single-photon source, the decoy state method [17][18][19] is considered one of the best ways to defeat photon-number-dependent attacks [11,12]. One of the basic assumptions for the decoy state method is that all the decoy states should be indistinguishable, except for the intensity. However, this assumption is hard to be guaranteed for some practical systems, due to the active attacks of Eve or passive side channels of Alice's source [32,67]. When the side channels are taken into account, the density matrix of the decoy state with intensity µ i can be written as where, ω includes all the side channels that can be exploited by Eve to distinguish the decoy states, such as time t, wavelength λ, waveform w, and so on. According to the analysis of Refs. [32,67], the distinguishability of the decoy states can be defined as here, D(ρ, σ) is the trace distance of ρ and σ. The Inaccuracy of the Encoded State Due to the finite extinction ratio of practical optical devices or Eve's active attacks [43], the practical encoded states of Alice may be different from the ideal states required by the QKD protocol. For example, Alice wants to send a quantum state |H , but the practical state sent by her may be cos θ|H + sin θ|V with a small angle deviation θ = 0. The density matrix of the practical encoded state can be written as ρ en α i . Simply, if we assume that the encoded state of Alice is pure, then where P[|a ] = |a a| is the project operator. Then the deviation of the encoded state can be written as Here, we consider the worst case by maximizing ε α i ,β j EN for all α, β = x, z and i, j = 0, 1. The Side Channel of Encoder The encoded states of Alice may be distinguishable in the non-encoded degrees of freedom, whose examples are given in Section 3. Then the practical density matrix of the encoded state should be written as where ω includes all the side channels that can be exploited by Eve to distinguish the encoded state. The distinguishability of the side channels can be defined as In all the side channels, the Trojan horse attack plays an important role since it is one of the most well-known attacks in both classical and quantum communication. Here, we only consider the optical Trojan horse attack in QKD processing. 
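The distinguishability measures used in the definitions above, the trace distance D(ρ, σ) and the fidelity F(ρ, σ), can be evaluated numerically once the states leaving Alice's zone have been characterized, for example by tomography of the emitted pulses. The short Python sketch below is only an illustration of how such parameters might be computed: it uses the standard definitions D(ρ, σ) = ||ρ − σ||_1 / 2 and F(ρ, σ) = Tr sqrt( sqrt(ρ) σ sqrt(ρ) ) (some authors use the square of the latter), and the example states are arbitrary qubit states, not data from the cited experiments.

import numpy as np
from scipy.linalg import sqrtm

def trace_distance(rho, sigma):
    """D(rho, sigma) = 0.5 * ||rho - sigma||_1 (half the sum of absolute eigenvalues of the difference)."""
    eigvals = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(eigvals))

def fidelity(rho, sigma):
    """F(rho, sigma) = Tr sqrt( sqrt(rho) sigma sqrt(rho) ); some conventions square this quantity."""
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ sigma @ s)))

def qubit_state(theta):
    """Pure qubit state cos(theta)|0> + sin(theta)|1> as a density matrix (illustrative only)."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

if __name__ == "__main__":
    # Two nominally identical encoded states that differ by a small modulation error of 0.05 rad.
    rho, sigma = qubit_state(0.0), qubit_state(0.05)
    print("trace distance:", round(trace_distance(rho, sigma), 4))
    print("fidelity:      ", round(fidelity(rho, sigma), 4))

For nominally identical encoded states, a nonzero distance of this kind is what feeds into deviation parameters such as ε_EN, ε_SI, and ε_DS defined above.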
When an optical pulse with intensity µ is reflected from Alice's zone, the quantum state of such a Trojan horse photon can be written as where the subscription α i means the encoded state of Alice, and the superscription th means the Trojan horse pulse. We assume that the quantum state above is pure to maximize Eve's information. Thus, the deviation of the Trojan horse photon belonging to each α i can be defined as Channel In the security model of QKD, it is assumed that the channel is totally controlled by Eve who can do any operation and measurement admitted by the quantum mechanics. Thus, generally speaking, the imperfections of the quantum channel will not break the security of the generated key. However, the performance of the QKD system is compensated by the loss of the quantum channel. First, the final key rate is directly reduced by the loss and noise of the quantum channel. Second, the flaws of source could be amplified by the loss of the quantum channel [23]. For a quantum channel with transmittance η, the total count rate is the function of the loss, Q = Q(η). The deviation of source flaws (ε EN , ε SI , and ε TH ) should be rewritten as [23] ε where γ = EN, SI, TH. Obviously, the deviation is large for long-distance communication. In order to overcome this problem, by introducing the "qubit" assumption, the loss-tolerant protocol was proposed by Tamaki et al. [68]. However, because of the side channels of the encoder [45] described in the next subsection, the "qubit" assumption is hard to be guaranteed in practical systems. Thus, the loss-tolerant protocol is not analyzed here. Decoder When the encoded states are flying into Bob's zone, he randomly measures it with one of two bases. That is, the basis bit is randomly chosen by Bob (actively or passively). In each basis, the photon arrives at one of two SPDs to decide the value of Bob's information bit. Strictly speaking, both the basis bit and the information bit should be totally random. However, due to the imperfection of the decoder, they could be controlled by Eve, such as the wavelength-dependent attack [48] and the detection efficiency mismatch attack [49] described in Section 3. The weak randomness of Bob's basis bit (x 0 ) and information bit (x 1 ) can be analyzed by introducing two hidden variables λ de 0 and λ de 1 [24,25]. By controlling λ de 0 and λ de 1 , Eve can determine x 0 and x 1 for each pulse. Setting k, k ∈ {0, 1} as the value of x 0 and x 1 , the probabilities that Bob obtains x 0 = k and x 1 = k are respectively given by where ∑ i p(λ 0 = i) = ∑ j p(λ 1 = j) = 1. p(x 0 = k|λ 0 = i) is the conditional probability that Bob obtains x 0 = k, given the hidden variable λ 0 = j, and p(x 1 = k |λ 1 = j) has the same definition. Obviously, Eve can determine the basis-bit and information-bit for each pulse by controlling the probability p(λ 0 = i) and p(λ 1 = j). Thus, the conditional probabilities p(x 0 = k|λ 0 = i) and p(x 1 = k |λ 1 = j) represent Bob's basis bit and information bit leaked to Eve. In other words, the deviation of the decoder can be defined as [24,25] Here we remark that in Equation (19), the deviation of basis bit (x 0 ) and information bit (x 1 ) are analyzed independently. However, generally speaking, Eve can control x 0 and x 1 at the same time with a joint hidden variable λ. Then Equation (19) should be rewritten as Detector In the BB84 protocol, two or four SPDs are required by Bob to register the photon of Alice. There are two major imperfections for these SPDs. 
One is that the efficiency of these SPDs may depend on the parameters of the optical pulse, such as the time, wavelength, polarization, photon number (or intensity), and so on. The other is the side channels, such as the reflected light [27,57,69]. For the first one, since each SPD represents the basis bit or the information bit, it can be treated as a flaw of the decoder (see Equation (19)). In this subsection, only the second one is analyzed. The density matrix of the photons emitted into the quantum channel from Bob's zone can be written as ρ^Det_αi. Then, Eve can guess which SPD clicks for each pulse by measuring the leaked signal. Thus, the deviation of the side channels can be defined as where D(a, b) is the trace distance between a and b. Security Evaluation and Standardization The implementation of QKD systems, especially decoy-state BB84 ones, continues to mature. Commercial QKD products based on the decoy-state BB84 protocol are available on the market. Moreover, large-scale QKD networks are being deployed all over the world. During the commercialization and globalization of QKD, reliability in use is essential for practical QKD systems, and it depends strongly on the security performance of the practical QKD system. However, as discussed in Section 3, violations of the security requirements may be exploited by Eve to perform quantum hacking and thus threaten the practical security of a QKD system. In order to close the possible security loopholes (quantum attacks) and support reliable use, one must conduct an evaluation to verify the practical security of a QKD system. Generally speaking, in the evaluation phase, all the security parameters given in Section 4 should be carefully measured to guarantee that they are lower than the given thresholds. Moreover, the optical and electrical signals should also be carefully monitored in the key-exchange phase to make sure that the evaluated security parameters remain valid in practical situations. In other words, the evaluation phase provides confidence to the QKD users and broadens the deployment range of QKD systems (if a QKD system passes the evaluation test, it is secure even if flaws exist). To evaluate the security performance of a QKD system, the tester mimics a quantum hacker and attacks the QKD system under test, which may disclose security vulnerabilities or demonstrate the system's defense against the attacks. For each testing item, the testing procedure follows the steps of conducting a certain quantum attack. Then, the corresponding behavior of the QKD system under attack is judged by a quantified criterion with a pass/fail threshold. For the decoy-state BB84 QKD system considered in this paper, most of the attacks described in Section 3 can be tested. Furthermore, the testing results can be quantified by the security parameters defined in Section 4. The typical attacks and the corresponding security parameters are summarized in Table 1. According to Table 1, the attacks affecting the same security parameter in each module are grouped together, which indicates that fully characterizing a parameter requires multiple tests. The more tests are conducted, the better the practical performance of a QKD system is known. Generally, all the security parameters should be considered in the final key rate. However, it is still a big challenge to take all of them into account in one security model at the same time.
Table 1. Typical attacks and the corresponding security parameters (target | attack | exploited imperfection | security parameter): Source | source attack [28,29] | nonrandom phase | ε_RP; Source | laser injection [30] | nonrandom phase under laser injection | ε_RP; Source | distinguishable decoy states [32] | pump-current intensity modulation | ε_DS; Source | side channels in free-space Alice [33] | multiple laser diodes | ε_DS; Source | intersymbol effect [34] | intensity correlation between neighboring pulses | µ, ε_µ; Detector | timing side channel [58] | detector-related detection timing tag | ε_Det. This evaluation methodology can be standardized to serve as third-party certification for all decoy-state BB84 systems. Standardized verification provides a person-independent evaluation outcome, helping customers build confidence and trust in QKD products. Most importantly, a security standard also guides commercial companies to produce QKD products with high security performance, which promotes global deployment and enhances their application in various situations. Security evaluation standards are being established by many organizations [70][71][72]. However, we should note that setting the thresholds for these security parameters is still an open question in practical applications, since a general security model including all the parameters is still unavailable; the final key rate may be rapidly reduced by some of the parameters, making the QKD system unusable. Therefore, a practical choice for security evaluation and standardization is to divide all the security parameters into two parts: one part is considered in the security model (called analyzed parameters), and the other is monitored (called monitored parameters). If a security parameter is analyzed in a security model, and some quantum hacking strategies exploiting this loophole have been discovered, this security parameter can be called an analyzed parameter. For these analyzed parameters, the QKD system is secure no matter which threshold is set (the threshold only determines the final key rate). If a security parameter is not included in the security model, or no efficient hacking strategy exploiting this loophole has been discovered, this security parameter is called a monitored parameter. For these monitored parameters, the threshold should be carefully set to make sure that Eve's potential attacks can be ruled out within current technology. Author Contributions: S.S. wrote Sections 1, 2 and 4, and A.H. wrote Sections 3 and 5. All authors have read and agreed to the published version of the manuscript. Conflicts of Interest: The authors declare no conflict of interest. Abbreviations The following abbreviations are used in this manuscript: QKD Quantum key distribution PND Photon number distribution PNS Photon number splitting attack SPS Single photon source SPD Single photon detector WCS Weak coherent source
2022-02-13T16:12:14.742Z
2022-02-01T00:00:00.000
{ "year": 2022, "sha1": "ad1af2f59c68fc6fa9a526cf64f5aca74e9a765d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1099-4300/24/2/260/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "27bf7b4a690505e27fbc84bf80cb5bbd8c75354e", "s2fieldsofstudy": [ "Physics", "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
258405551
pes2o/s2orc
v3-fos-license
Artificial Intelligence Technologies in the Microsurgical Operating Room (Review) Surgery performed by a novice neurosurgeon under constant supervision of a senior surgeon with the experience of thousands of operations, able to handle any intraoperative complications and predict them in advance, and never getting tired, is currently an elusive dream, but can become a reality with the development of artificial intelligence methods. This paper has presented a review of the literature on the use of artificial intelligence technologies in the microsurgical operating room. Searching for sources was carried out in the PubMed text database of medical and biological publications. The key words used were “surgical procedures”, “dexterity”, “microsurgery” AND “artificial intelligence” OR “machine learning” OR “neural networks”. Articles in English and Russian were considered with no limitation to publication date. The main directions of research on the use of artificial intelligence technologies in the microsurgical operating room have been highlighted. Despite the fact that in recent years machine learning has been increasingly introduced into the medical field, a small number of studies related to the problem of interest have been published, and their results have not proved to be of practical use yet. However, the social significance of this direction is an important argument for its development. Introduction In recent decades, there has been significant interest in the practical application of artificial intelligence (AI), including machine learning, in the field of clinical medicine. The current advances in AI technologies in neuroimaging open up new perspectives in the development of non-invasive and personalized diagnostics. Thus, methods of radiomics, i.e., extracting a large number of features from medical images, are actively developing. These features may contain information to describe tumors and brain structures which are not visible to the naked eye [1][2][3][4][5]. It is assumed that the correct presentation and analysis of images with neuroimaging features will help to distinguish between types of tumors and correlate them with the clinical manifestations of the disease, prognosis, and the most effective treatment. Technologies that evaluate the relationship between features of tumor imaging and gene expression are called radiogenomics [6][7][8][9]. These methods are aimed at creating imaging biomarkers that can identify the genetic signs of disease without biopsy. The AI advances in the analysis of molecular and genetic data, signals from invasive sensors, and medical texts have become known as well. The universality of approaches to the use of AI opens up new, original ways of using them in the clinic. From a technical point of view, the term "artificial intelligence" can denote a mathematical technology that automates the solution to some intellectual problem traditionally solved by a person. In a broader sense, this term refers to the field of computer science in which such solutions are developed. Modern AI relies on machine learning technologiesmethods for extracting patterns and rules from the data reviews representative of a specific task (medical images, text records, genetic sequences, laboratory tests, etc.). For example, AI can find "rules" for predicting poor treatment outcomes from a set of predictors by "studying" retrospectively a sufficient number of similar cases with known outcomes. 
This AI property can be used in solving tasks of automating individual diagnostic processes, selecting treatment tactics, or predicting outcomes of medical care according to clinical findings. In medical practice, particularly in surgery, AI, along with surgical robots, 3D printing and new imaging methods, provides solving a wide range of problems, increasing the level of accuracy and efficiency of operations. The use of AI is even more important in microsurgery, when it comes to interventions on small anatomical sites with the use of optical devices and microsurgical tools. An AI challenge in microsurgery is the automatic recognition of anatomical structures that are critical for the microsurgeon (arteries, veins, nerves, etc.) in intraoperative photographs, video images, or images of anatomical preparations. The solution to this problem creates prospects for the development of AI automatic alert tools at the risk of traumatization of critical structures during surgery in real time, the choice of trajectories for safe dissection or incisions in functionally significant areas [10]. Artificial intelligence can evaluate handling of surgical instruments, check the positioning of the micro instrument in the surgeon's hands (its position in the hand, position to the surgical wound), and hand tremor during surgery. Determining a phase of surgery, predicting outcomes and complications, and creating the basis for an intelligent intraoperative decision support system are prospective goals for AI in microsurgery. A non-trivial task of using AI in microsurgery is to assess the skills of novice surgeons and residents, as well as improve the skills of more experienced specialists. The solution to this problem, due to the extreme work complexity and responsibility of a microsurgeon, will bring this field of medicine to new frontiers. To assess the available solutions to the issue of using AI in the microsurgical operating room, an analysis of articles in the PubMed text database of medical and biological publications was performed. Literature search was carried out using the key words "surgical procedures", "dexterity", "microsurgery" AND "artificial intelligence" OR "machine learning" OR "neural networks" among articles in English and Russian with no limitation to publication date. Automatic assessment of the level of microsurgical skills Continuous training and constant improvement of microsurgical techniques are essential conditions for the formation of a skilled microsurgeon. It often takes most of the professional life to acquire the required level of microsurgical skills [11][12][13]. Microsurgical training requires constant participation of a tutor who would correct non-optimal actions and movements of the microsurgeon and supervise the learning process. A parallel could be drawn between the training of microsurgeons and Olympic athletes: achieving a high level is impossible without a proper training system and highly qualified coaches. However, due to the high clinical workload and strenuous schedule of skilled microsurgeons-tutors, their permanent presence in the microsurgical laboratory is impossible, and the start of training in a real operating room is in conflict with the norms of medical ethics. In this situation, AI technologies can be used in the learning process to control the correctness and effectiveness of the manual actions of a novice neurosurgeon. To date, the set of AI technologies that would be adapted for the analysis of microsurgical manipulations is significantly limited. 
For example, the use of accelerometers attached to microsurgical instruments to assess the level of microsurgical tremor was described in the papers by Bykanov et al. [14] and Coulson et al. [15]. In the work by Harada et al. [16], infrared optical motion tracking markers, an inertial measurement unit, and load cells were mounted on microsurgical tweezers to measure the spatial parameters associated with instrument manipulation. AI and machine learning methods were not applied in this work. Applebaum et al. [17] compared parameters such as the time and number of movements during a microsurgical task performed by plastic surgeons with different levels of experience, using an electromagnetic motion tracking device to record the movement of the surgeon's hands. This approach to the assessment of microneurosurgical performance stands out for its objectivity and the reliability of instrumental measurements, but it requires special equipment. Expert analysis of video recordings of the surgeon's work in the operating room is an alternative method for assessing the degree of mastery of microsurgical techniques. However, involving an expert assessor in the analysis of such recordings is a time-consuming and extremely laborious method. Frame-by-frame analysis of microinstrument motion based on video recordings of a simulated surgical performance was applied by Óvári et al. [18]. Attempts to objectively evaluate and categorize microsurgical performance based on the analysis of video recordings of microsurgical training were made by Satterwhite et al. [19]. However, the analysis and evaluation of the performance of the microsurgeons undergoing training in this work were carried out by expert assessors who viewed the video recordings and graded them according to the developed scale, which does not eliminate the influence of subjective factors on the results of the analysis. A promising alternative to these technologies is machine learning methods, computer vision first of all, for the automated evaluation of the effectiveness of macro- and microsurgical performance. These methods can be applied on the basis of the detection and analysis of microsurgical instrument motion in the surgical wound. After analyzing the limited scientific literature on this topic, we summarized the main processes for obtaining data for the analysis of microsurgical procedures using machine learning (Table 1). The few available data indicate that machine learning methods make it possible to identify complex relationships in the movement patterns of a microsurgeon and to predict the parameters of the effectiveness of microsurgical performance. To implement these tasks, the first step is to train a model to correctly classify the motion and the microsurgical instrument itself in the surgery video. The ongoing research in this direction is mostly focused on teaching computers two main functions: determining the phase of a surgical operation and identifying a surgical instrument [20]. In works on microsurgery using machine learning, two types of data sources are most often used: video recordings of surgery [21] and sets of variables obtained from sensors attached to microinstruments or to the body of the operating surgeon. Some studies combine both sources [22]. In the study by Markarian et al. [21], RetinaNet, a deep learning model, was used for the identification, localization, and annotation of surgical instruments based on intraoperative video recordings of endoscopic endonasal operations.
According to the findings of the study, the developed model was able to successfully identify and correctly classify surgical instruments. However, all the instruments in that work belonged to the same class, "instruments". An interesting study was carried out by Pangal et al. [23]. In this work, the authors evaluated the ability of a deep neural network (DNN) to predict blood loss and damage to the internal carotid artery based on 1-minute video clips obtained from a validated neurosurgical simulator for endonasal neurosurgery. The predictions of the model coincided with those of expert assessors in the vast majority of cases. In the work by McGoldrick et al. [24], the researchers used video recordings made directly from the camera of the operating microscope and the ProAnalyst software to analyze the smoothness of movements of a vascular microsurgeon performing a microanastomosis, using a logistic regression model and a cubic spline. The authors of [25] designed a stereoscopic system with two cameras that recorded images of surgical tweezers from different angles, allowing the 3D motion of the instrument to be analyzed. Table 1 also lists the recording of intraoperative features, such as intra-abdominal pressure and the weight of suction and irrigation bags, among the possible data sources. Oliveira et al. [26] showed in their work that the use of machine learning and computer vision in the simulation of microsurgical operations helps enhance the basic skills of both residents and experts with extensive experience. The use of neural networks with long short-term memory (LSTM) in the analysis algorithms became a major advance in surgical phase recognition, making it possible to improve the accuracy of determining the surgical phase up to 85-90%. It is important to note that, due to typical data volume limitations, model developers often use so-called transfer learning [27], which allows a model to be pre-trained on similar data (most often, on open datasets that address similar problems in the same subject area) and then retrained on the data on which the target problem is solved; a minimal sketch of such a pipeline is given below. Currently, the following open datasets are used in solving problems related to assessing the accuracy of surgical operations: the EndoVis Challenge datasets, a collection of labeled datasets that contain videos of various types of surgical operations for classification, segmentation, detection, localization, etc. [28]; Cholec80, which contains 80 videos of endoscopic operations performed by 13 different surgeons, all labeled with the phases of the operations and the presence of instruments in the frame [29]; the MICCAI challenge datasets, which support a large number of contests in the analysis of medical data, including the analysis of surgical materials [30]; the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS), a labeled dataset of video recordings of operations performed by eight surgeons of three skill levels who performed a total of 103 basic robotic laboratory tasks [31]; and ATLAS Dione, which contains 99 videos of 6 types of surgeries performed by 10 different surgeons using the da Vinci Surgical System, with a frame size of 854×480 pixels and every frame labeled for the presence of surgical instruments [32]. Theoretically, hundreds and thousands of videos can be analyzed using machine learning methods. However, to train a model, it is necessary to view the videos and label the images "manually", which requires a lot of time.
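As a concrete illustration of the transfer-learning approach mentioned above, the following Python (PyTorch) sketch outlines a minimal pipeline for surgical phase recognition: an ImageNet-pretrained CNN backbone extracts per-frame features, and a small LSTM head is trained on labeled clips, for instance from Cholec80. This is a hypothetical example rather than the architecture of any of the cited studies; the number of phases, the hyperparameters, and the use of random tensors in place of a real data loader are illustrative assumptions.

import torch
import torch.nn as nn
from torchvision import models

NUM_PHASES = 7          # e.g., the seven annotated phases of Cholec80
SEQ_LEN, BATCH = 16, 2  # frames per clip and clips per batch (illustrative)

class PhaseRecognizer(nn.Module):
    """Pre-trained CNN backbone (transfer learning) plus an LSTM head for surgical phase recognition."""
    def __init__(self, num_phases=NUM_PHASES, hidden=256):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = nn.Identity()          # keep the 512-dimensional feature vector per frame
        for p in backbone.parameters():      # freeze the pre-trained weights; train only the head
            p.requires_grad = False
        self.backbone = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_phases)

    def forward(self, clips):                # clips: (batch, seq_len, 3, 224, 224)
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.classifier(out)          # per-frame phase logits: (batch, seq_len, num_phases)

if __name__ == "__main__":
    model = PhaseRecognizer()
    optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    # Random tensors stand in for real frame sequences and phase labels from a labeled dataset.
    clips = torch.randn(BATCH, SEQ_LEN, 3, 224, 224)
    labels = torch.randint(0, NUM_PHASES, (BATCH, SEQ_LEN))
    logits = model(clips)
    loss = loss_fn(logits.reshape(-1, NUM_PHASES), labels.reshape(-1))
    loss.backward()
    optimizer.step()
    print("one training step done, loss =", float(loss))

In a real setting, the random tensors would be replaced by a loader over annotated video clips, and the frozen backbone could later be fine-tuned end to end; the need for such frame-level annotation is exactly the bottleneck discussed above.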
A possible solution to this problem is the use of new algorithms that can annotate video files automatically [33]. Table 2 lists the machine learning methods used, according to the scientific literature, in the analysis of video images of microsurgical interventions, with a brief description of each. Most studies using AI for the analysis of microneurosurgical performance have been conducted on models of the simplest surgical procedures or on separate elementary phases of operations (for example, suturing or making incisions). Table 2. Machine learning methods used in the analysis of video images of microsurgical interventions. Bayesian classifier (naive Bayes): a supervised machine learning algorithm for classification based on Bayes' theorem; a simplified version of the Bayes algorithm is the naive Bayes approach, built on the assumption that the features are conditionally independent; the class with the highest posterior probability is the result of the prediction. Advantages: simple, reliable, and easy-to-interpret logic; insensitive to missing data; works well when features are close to conditionally independent; works well with small datasets. Disadvantages: the hypothesis of conditional independence is required; tends to perform worse than more complex models with large datasets or correlated features; a prior probability is required. Decision trees [40]: a supervised learning algorithm for classification; the data are repeatedly split into subsets and eventually classified at the end nodes according to the logic of the nodes along the way. k-means clustering: an iterative (unsupervised) clustering algorithm that separates unlabeled data into k distinct groups, so that observations with similar features are grouped together; the decision to assign a new point to one of the k groups is based on its minimum distance from the center of the group; the centers are recalculated iteratively until convergence, and the means of the clusters are then used to determine the classes of newly observed data points. Artificial and deep neural networks (ANN/DNN) [46]: a collection of artificial neurons that interact with each other; an ANN is a network of nodes (neurons) connected to each other to represent data or approximations, and a DNN is an ANN with many (deep) layers; deep ANNs can learn optimal features from data that generalize to give the best classification results in implicit scenarios. Advantages: can achieve high accuracy; able to model complex and non-linear problems; able to learn patterns and generalize to unseen data; reliable and fault-tolerant to noise. Disadvantages: a large amount of training data is required; the learning process is time-consuming and needs significant computational power for complex networks; difficult to interpret because of its "black box" nature; the learning process is stochastic, so training on the same data can yield different networks. Convolutional neural networks (CNN) [47]: an artificial neural network with a "deep" structure comprising layers of convolution operations and pooling layers; a CNN can learn the best representation of features, which is then used for a statistically shift-invariant classification of the input based on its hierarchical structure. Advantages: learns representative features from data; handles data with noise and missing information; widely used for high-resolution image classification; pooling can abstract high-level information; learning can be parallelized. Disadvantages: the learning process is time-consuming and requires significant computational power (compared with common machine learning methods); the pooling function results in the loss of detailed and valuable information; poor performance at low resolution of the input image. Recurrent neural networks (RNN) [46]: a neural network architecture in which the connections between elements form a directed sequence; RNNs are designed for modeling sequential processes and use the current observation together with the output of the network in the previous state to generate the output. Advantages: parameter sharing mechanism, Turing completeness; the ability to memorize makes the algorithm suitable for processing time-series signals, including semantic analysis of text, classification of its emotional coloring, and language translation. Disadvantages: difficult to train; vanishing gradient problem; exploding gradient problem, which can be mitigated by gradient clipping; problems with short-term memory. Long short-term memory networks (LSTM) [48]: an artificial neural network containing LSTM modules instead of, or in addition to, other modules of an ANN; an LSTM module is a recurrent network module capable of storing values for both short and long periods of time. Advantages: captures complex dependencies better than plain recurrent models; less sensitive to data outliers. Disadvantages: long-term dependencies are still modeled with limited quality; calculations are difficult to parallelize; longer time to train. Certainly, pilot studies in this area typically start with simplified models. However, surgery is a complex combination of factors that affect the surgical technique and the results of manipulations and that are difficult to take into account in an experiment. Therefore, the transfer of machine learning models from experimental conditions to real practice cannot guarantee high-quality performance, which reduces their value. Conclusion Despite the rapid development of machine learning methods in the field of clinical medicine, they are so far only in the initial phase of approbation in the tasks of evaluating microsurgical techniques, and they do not seem likely to be introduced into everyday clinical practice in the near future. However, there are grounds to believe that the use of machine learning technologies, computer vision in particular, has good potential to improve the process of learning microsurgical techniques. This is a good prerequisite for the development of a special area of artificial intelligence in the field of microneurosurgery. Study funding. The study was supported by a grant from the Russian Science Foundation, project No. 22-75-10117. Conflicts of interest. The authors declare no conflicts of interest.
2023-04-30T15:20:08.136Z
2023-03-29T00:00:00.000
{ "year": 2023, "sha1": "166499e84a42a7db2c9e4eb5846008a5951b4fe7", "oa_license": "CCBY", "oa_url": "https://doi.org/10.17691/stm2023.15.2.08", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ec28cc5c2f14d570b4ee68aeb00e77f20f4cf630", "s2fieldsofstudy": [ "Art" ], "extfieldsofstudy": [] }
73694804
pes2o/s2orc
v3-fos-license
Nerve alterations in rhytidoplasty : a systematic literature review Introduction: Rhytidoplasty has become one of the most common aesthetic surgeries performed by plastic surgeons worldwide. Along with the increase in the number of surgeries performed, the number of procedure-related complications has also increased. In particular, nerve injuries are the major concern. By conducting a systematic review, the present study aimed to identify the main nerve structures injured during rhytidoplasty, by either the conventional or endoscopic technique. Methods: A systematic literature review was performed in the main databases currently used. Articles that met the inclusion criteria were analyzed in their entirety, and their references were checked. Finally, 20 studies were included. Results: In these 20 articles, 3,347 patients were evaluated and 142 nerve injuries found, of which 79 were of the facial nerve, 55 were of the trigeminal nerve, and eight were of the great auricular nerve. Of these, only two were definitive. The lesions were more prevalent (81%) with the video-assisted techniques than with the conventional techniques (19%). Conclusion: We found that the injuries of the temporal and buccal branches were more frequent during facelifts; and those of the great auricular nerve, during cervical rhytidoplasty. Although nerve injuries are infrequent in the literature, well-designed studies that aim to better understand these complications are lacking. INTRODUCTION Rhytidoplasty is currently one of the most common aesthetic surgeries performed by plastic surgeons worldwide.Many techniques for facial rejuvenation have been described in the literature, the oldest of which was reported in 1919 by Passot 1 , who described in detail a browlift.Since then, facial rejuvenation has gone through a constant evolution of surgical techniques, beginning with the simple classic procedures to composite techniques, involving various procedures.Currently, the number of minimally invasive and endoscopic surgeries has increased since the 1990s 2,3 .Along with this increase, the number of procedure-related complications has grown, with nerve injuries being the major concern. The present study aimed to identify by conducting a systematic review of the literature the main nerve structures injured during a rhytidoplasty, by either the conventional or endoscopic technique, regardless of the approach used, in order to direct plastic surgeons toward reducing the risk of complications from facial or cervical nerve injuries. 
METHODS The study began with a search of the topic in the following main electronic databases currently in use: PUBMED, SCOPUS, and EMBASE. The following keywords were used in the following order: 1) Facelift; 2) Paresthesia or paresis; 3) (#1) AND (#2). The initial research strategy was to search for articles pertaining to the relationship between the abovementioned words in the abovementioned databases. The titles and abstracts were read, and all the articles included in the study were analyzed in their entirety. The references were also rigorously searched in order to include further articles of interest. All of the articles were assessed by two independent researchers (MAS and EP) using the following inclusion criteria: texts in English and Portuguese, published over the past 15 years, patients of both sexes and older than 18 years, with no upper age limit. Duplicate articles were removed. Studies that used animals, cadavers, or adolescents were excluded. All of the survey data were tabulated in a spreadsheet for statistical analysis. The organogram that exemplifies the search is described in Figure 1. RESULTS The search revealed 20 articles. The following items were analyzed: country of origin; the number, sex, and age of the patients; the surgical technique used; the nerve injury found; and the time of postoperative control (Table 1). The country that produced the largest number of articles of interest was the United States with nine, followed by Brazil and France with two articles each. South Korea, Lithuania, Argentina, the United Kingdom, Singapore, Spain, and Turkey contributed one article each. The total number of patients involved was 3,347. The study of Tanna and Lindsey (2008) 4 from the University of Washington had the largest number of individuals assessed (1,000 patients). The articles by Malata and Abood (2009) 5 and Newman (2006) 6 from the United Kingdom and the United States assessed the smallest numbers of patients (30 and 10 patients, respectively). Some articles did not detail sex [7][8][9][10][11][12]. In those that reported sex, 143 patients were men and 1,825 were women; therefore, only 7.2% of patients were male. In our comparative analysis of mean age, we found that this ranged from 42.4 years for women to 69.8 years for
men, the youngest and oldest being 29 years 13 and 84 years 14, respectively. The mean patient follow-up period also varied greatly according to study type. The shortest and most common follow-up was 6 months 4,6,15,16, and the longest was 5.5 years 17. Regardless of the specific surgical technique used, the rhytidoplasties were allocated into two groups as follows: those that used conventional or classic techniques, and those that were endoscopy or video assisted. In seven articles, the conventional technique was used, while 10 articles reported video-assisted techniques. In three articles 8,11,14, both techniques were used. Regarding the nerve injuries involved in rhytidoplasty, the number of sensory injuries observed, mostly of the facial nerve as compared with the trigeminal nerve, was much larger than that of motor injuries. In total, of 79 facial nerve injuries, 18 were caused by conventional rhytidoplasty and 61 were caused by endoscopic rhytidoplasty (Figure 2). Of 55 trigeminal nerve injuries, 1 was caused by conventional rhytidoplasty and 54 were caused by endoscopic rhytidoplasty. Eight great auricular nerve injuries were incurred during cervical rhytidoplasty (Table 2). All of the injuries found were transitory, with the exception of those in the study by Sullivan et al. (1999) 20, who identified a permanent injury of the frontal branch of the facial nerve, which was caused during the training of residents in otolaryngology, and those in the study by Williams et al. (2003) 22, who found a permanent change in the maxillary branch of the trigeminal nerve. Regarding the type of technique used, conventional rhytidoplasty was performed in 2,046 patients, whereas endoscopic rhytidoplasty was performed in 1,301 patients (61% versus 39%). When we separated the nerve injuries according to either conventional or endoscopic rhytidoplasty, we observed that the video-assisted techniques presented a much higher prevalence of injuries than the classical techniques (81% versus 19%). DISCUSSION Rhytidoplasty is becoming increasingly common. The number of techniques published and their results vary greatly. The ability to restore the harmony of facial features requires rigor in applying the techniques, exquisite knowledge of the anatomy, and artistic sensibility to individualize the surgical objective for each patient 11. Failure to observe these basic principles can lead to extremely undesirable changes, some of them permanent. The complications of rhytidoplasty are well known and include hematoma, alopecia, hypertrophic scarring, infection, facial contour deformity, and sensory and motor lesions. Hematoma is still the most common complication; however, if controlled early, it has little effect on the final surgical result 24. Great auricular nerve injury is the most common nerve injury related to cervical rhytidoplasty 16. In a residency program, Sullivan et al. 20, during the assessment of sensory injuries in rhytidoplasty, found six cases of temporary paresthesia of the ear and one case that evolved to permanent sensory loss of the auricular region due to great auricular nerve injury.
Transient paresthesia and hyperesthesia of the lower two-thirds of the middle ear, the preauricular region, and neck usually last from 2 to 6 weeks and are the result of inevitable injury to a small amount of nervous tissue in the surgical area of rhytidoplasty.The permanent sensory injury in the lower portion of the ear, in turn, is generally due to deep dissection of the middle portion of the sternocleidomastoideus muscle 25 . The mechanism of sensory injury more commonly involved anesthesia infiltration, nerve perforation by the anesthesia needle, and deep and extensive dissection, in addition to swelling or injury of the nerve during electrocautery.In the articles included in this review, no reference was made on the use of the latter methodology.However, in the study of Firmin et al. 7 , a device similar to a cautery, the harmonic blade, was used.In this study, only four cases of temporary paralysis of the facial nerve were observed, all of which were completely resolved in 3 postoperative months. In general, paresthesia caused by anesthesia infiltration spontaneously resolve in a few hours, when the anesthetic effect ceases.However, the temporary injury can last from 24 hours up to weeks and is usually caused by direct injury to the nerve 20 .In our review, we observed a large variation in the recovery of temporary nerve lesions.The minimum recovery period was 41 days for an injury to the temporal branch of the facial nerve in the study by Heinrichs and Kaidi in 1998 8 .The maximum recovery period was 2 years in a patient with an injury of the supratroclear branch of the ophthalmic nerve incurred during a facelift, in the study by Behmand and Guyuron in 2006 17 .However, we observed that in most of the articles analyzed, the most common recovery interval was between 6 weeks and 6 months [5][6][7]9,13,14,19 . 
Permanent facial nerve injury is a rare complication, whereas temporary injuries are much more common.In a review of the literature conducted by Rubin and Simpson 26 in 1996, in 7,000 cases of superficial rhytidoplasty, only 55 cases were motor injuries, the most common being of the temporal branch, followed by the marginal mandibular nerve.Of the 55 cases, only seven were definitive.In our review of 3,347 patients, 139 had some degree of temporary injury, only two of which were permanent.One of the permanent injuries was of the maxillary branch of the trigeminal nerve incurred during a browlift 22.The patient progressed with permanent loss of sensitivity of the region supplied by this nerve.The second case was of the great auricular nerve, incurred during a cervical rhytidoplasty in a residency program, as described before 20.Although the prevalence of injuries varies greatly depending on the study, all studies agree that the frontal and marginal mandibular branches of the facial nerve have the highest risk of injury and permanent dysfunction during a facelift 25 .The mechanisms of injury to the marginal branch include transection during deep dissection of the subplatysmal flap, plication sutures, tissue traction, and cervical liposuction in the subplatysmal plane.Ellenbogen 28 described two cases of transient pseudoparalysis of the marginal mandibular branch due to an injury in the cervical branch.These injuries can be distinguished from injuries of the marginal mandibular branch because these patients can still evert the lower lip because of preservation of the function of the mentalis muscle.The vulnerable point to injury of the marginal mandibular nerve is after leaving the deep cervical fascia, when it runs on the anterior face of the jaw, in the region of the facial artery 27 . Regardless of the article analyzed, in all of the studies, we opted for a conservative assessment of nerve injuries.In none of these studies was any directed clinical treatment proposed. In our systematic review, we observed that the video-assisted techniques presented a higher prevalence of nerve injuries than the classical techniques (81% versus 19%).Although only few studies have addressed this topic in the literature, it is true that in the United States, the endoscopic technique has been progressively abandoned because of the high cost of the equipment, the long learning curve, or the long operative time required for this procedure.In fact, in that country, more attention has been given to approaches that require reduced access but by using the conventional techniques 29 . CONCLUSION The actual incidence of nerve injuries in rhytidoplasty has not yet been determined.Prospective studies are required that more accurately objectively and critically assess the sensitivity, and facial and cervical movements of patients.This systematic review reaffirms the statements of other authors on the main facial changes and still managed to observe that these lesions are more prevalent when endoscopic procedures are performed. Figure 1 . Figure 1.In total, 113 studies were initially found, but only 18 met the inclusion criteria.After the analysis of the references of these articles, two more were added, for a total of 20 studies. Figure 2 . Figure 2. Of the 142 nerve injuries identified, 79 were of the facial nerve branches, of which 77% were caused by video-assisted techniques. Table 1 . Articles of interest with the main variables.
2018-12-30T03:14:16.457Z
2014-01-01T00:00:00.000
{ "year": 2014, "sha1": "3afac8d5f2f0e9cb9eee2700329149e134890e8b", "oa_license": "CCBY", "oa_url": "https://doi.org/10.5935/2177-1235.2014rbcp0081", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "3afac8d5f2f0e9cb9eee2700329149e134890e8b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
18836313
pes2o/s2orc
v3-fos-license
A Novel p53 Phosphorylation Site within the MDM2 Ubiquitination Signal

The p53 DNA-binding domain harbors a conformationally flexible multiprotein binding site that regulates p53 ubiquitination. A novel phosphorylation site exists within this region at Ser 269, whose phosphomimetic mutation inactivates p53. The phosphomimetic p53 (S269D) exhibits characteristics of mutant p53: stable binding to Hsp70 in vivo, elevated ubiquitination in vivo, inactivity in DNA binding and transcription, increased thermoinstability using thermal shift assays, and a λmax of intrinsic tryptophan fluorescence at 403 nm rather than 346 nm, characteristic of wild type p53. These data indicate that p53 conformational stability is regulated by a phosphoacceptor site within an exposed flexible surface loop and that this can be destabilized by phosphorylation. To test whether other motifs within p53 have similarly evolved, we analyzed the effect of Ser 215 mutation on p53 function because Ser 215 is another inactivating phosphorylation site in the conformationally flexible PAb240 epitope. The p53 S215D protein is inactive like p53 S269D, whereas p53 S215A is as active as p53 S269A. However, the double mutant p53 S215A/S269A was transcriptionally inactive and more thermally unstable than either individual Ser-Ala loop mutant. Molecular dynamics simulations suggest that (i) solvation of phospho-Ser 215 and phospho-Ser 269 by positively charged residues or solvent water leads to local unfolding, which is accompanied by local destabilization of the N-terminal loop and global destabilization of p53, and (ii) the double alanine 215/269 mutation disrupts hydrogen bonding normally stabilized by both Ser 215 and Ser 269. These data indicate that p53 has evolved two serine phosphoacceptor residues within conformationally flexible epitopes that normally stabilize the p53 DNA-binding domain but whose phosphorylation induces a mutant conformation to wild type p53.

p53 protein is a sequence-specific DNA-binding protein and transcription factor that can regulate the cellular response to cellular stresses. Activating signals include virus infection, irradiation, hypoxia, and metabolic stress (1). The molecular basis for p53 activation involves coordinated inhibition of the ubiquitin-proteasome degradation system that is regulated by the E3 ubiquitin ligase MDM2 and induction of sets of activating enzymes, including protein kinases, proline isomerases, and acetyltransferases, that regulate p53-responsive gene expression.
Of these signaling pathways that activate p53 that include enzymes like ATM, PIN, and p300, the most well characterized signals are those mediated by phosphorylation. The most highly conserved phosphorylation sites of human p53 occur in the N-terminal transactivation domain at Ser 15/18/20 and the C-terminal CK2 phosphorylation site at Ser 392 (1). Phosphomimetic mutation of these sites can stimulate p53-dependent transcription (2), and mouse transgenes with alanine-substituted mutations have increased cancer incidence in stress or tissue-specific manner (3)(4)(5). The biochemical basis for these effects has been reported; transactivation domain phosphorylation at Thr 18 can directly inhibit MDM2 binding (6,7), whereas Ser 20 phosphorylation can stabilize p300 binding (8,9). Ser 392 phosphorylation can stimulate p53 sequence-specific DNA binding according to the ensemble model of allostery (10) in part by stabilizing p53 tetramers from the dimeric state (11). Phosphomimetic mutation of codon 392 increases the thermostability of the core DNA-binding domain, demonstrating an allosteric effect to the activation (12). There are other phosphorylation sites on p53 that stimulate its function as defined using transgenics; one such site includes the MAPK kinase site at Ser 46 (13), but the molecular basis for these activating effects on p53 structure and function is not as well defined as is the former cluster of phosphoacceptor sites. Post-translational modifications can also catalyze the inhibition of p53 protein; the most well characterized are those induced by ubiquitination-degradation pathways, but there are also kinase-signaling pathways that can inhibit p53. MDM2mediated degradation of p53 is the best characterized of the p53-inhibiting pathways (14) that can in turn be stimulated by MDM2 interactions with other proteins, including TAFII250 (15) and MDMX (16). Although there are many phosphoacceptor sites on MDM2 that have been identified, the best characterized site with potential to stimulate MDM2-mediated inhi-bition of p53 is in the N-terminal MDM2 pseudosubstrate motif, or "lid" (17,18). In addition to the MDM2-stimulated degradation pathway, there are three distinct kinase pathways that directly target p53 and inhibit the protein function, although the molecular basis for this inhibition is not known. One inactivating phosphorylation site on p53 is at Ser 315 . Although Ser 315 phosphorylation can stimulate the specific DNA-binding function of p53 (19) and DNA damage can activate p53 function via Ser 315 phosphorylation in cells (20), this phosphorylation in dividing cells is catalyzed by a GSK3-signaling pathway that can promote the nuclear export and inhibition of p53 under conditions of endoplasmic reticulum stress (21). These data indicate that phosphorylation at Ser 315 can inhibit or activate p53 and therefore depends upon the context. A second p53 inactivating kinase pathway is at Ser 215 , whose phosphorylation can be catalyzed by an Aurora-dependent signaling pathway (22), but the molecular basis for how this inhibits p53 is not known. The final kinase-inactivating pathway is triggered by COP9, which phosphorylates p53 at Thr 155 , and this triggers p53 degradation (23). As with the other inactivating site reviewed above, the protein-protein interactions that are driven by Thr 155 phosphorylation are not defined. 
In an accompanying paper (24), we report the identification of a novel p53 phosphorylation site in a multiprotein docking site in the DNA-binding domain of p53 at Ser 269. This phosphorylation site is notable in that it occurs within the MDM2-binding site (e.g. "ubiquitination signal") that triggers p53 ubiquitination and that it also forms docking sites for a distinct class of protein kinases (25). Phosphomimetic mutation at Ser 269 suggested that this phosphorylation would inactivate rather than activate p53 function in vivo. In this paper, we report on the biochemical basis for p53 inactivation by Ser 269 phosphorylation. Using phosphomimetic mutants, we propose that Ser 269 phosphorylation inactivates p53 by direct destabilization of the p53 core DNA-binding domain, thus inducing a mutant conformation to wild type p53. Molecular dynamics simulations further suggest specific mechanisms to explain how the phosphate at Ser 269 could destabilize p53 protein folding by altering the allosteric effects of Gln 100 and Thr 102 on p53 conformational stability. These data indicate that p53 has evolved a kinase pathway(s) that can regulate the conversion of WT p53 between folded and unfolded conformational ensembles and suggest the existence of signaling cascades that can induce a mutant conformation to the wild type p53 tetramer.

Peptide ELISA-Biotinylated unphosphorylated and phosphorylated peptides were captured onto ELISA wells coated with streptavidin and blocked with 3% BSA in PBS-Tween as described previously to measure Mdm2 binding (17) or mAb binding (6).

Cell Culture, Transfection, and Analysis-H1299 cells were cultured in RPMI 1640 medium supplemented with 10% fetal bovine serum. Cells were harvested and lysed using urea lysis buffer as described previously (28) unless otherwise stated. For mutant p53 conformation analyses, p53 within these lysates was analyzed by ELISA using PAb1620 and DO-1 as described previously (29) or by immunoprecipitation. For immunoprecipitation, cell lysates (100 ng) were precleared with protein G beads (Sigma) for 1 h before incubation with 1 µg of DO-1, PAb1620, or PAb240 at 4°C.

DNA Binding Assays-BL21 AI Escherichia coli were transformed with pRSET p53 core wild type or the S269A or S269D mutant form of the p53 core domain (residues 94-312), and the proteins were expressed and purified from soluble lysates using SP cation exchange and heparin affinity Hi-Trap columns (Amersham Biosciences), as described previously (27,30). The DNA binding activity of p53 was examined by EMSA. p21 promoter-derived sequences (31) were labeled with [γ-32P]ATP and incubated with purified recombinant wild type or serine 269 mutants of the p53 core domain in 30 mM Hepes, pH 7.5, 50 mM KCl, 5% glycerol, 0.4 mM DTT, 0.1 mg/ml BSA, and 0.5% Triton X-100 containing 1 µg of poly(dI-dC) DNA (Sigma) and 500 ng of salmon sperm DNA in a final volume of 12 µl for 30 min at room temperature. Reactions were processed by adding 6× DNA sample buffer, loaded onto 5% polyacrylamide Tris borate gels, and separated by electrophoresis at 35 mA for 3 h at 4°C. Gels were dried, and images were analyzed following exposure to a phosphor storage screen.

Intrinsic Fluorescence-Fluorescence emission spectra of the purified wild type, S269A, and S269D forms of the p53 core domain were measured using a SPEX FLUOROMAX-3 spectrofluorometer as described previously (32).
Excitation wavelengths of 280 and 295 nm were used for tyrosine and tryptophan residues, respectively, and tyrosine and tryptophan fluorescence spectra were recorded from 300 to 350 nm and from 320 to 450 nm, respectively, at 10°C using 0.5-nm steps and an integration time of 1 s. The final spectrum was the average of three emission scans minus the background buffer (50 mM Tris, pH 7.2, 5 mM DTT) fluorescence scan alone.

Thermal Protein Unfolding Assay-p53 protein unfolding was monitored using fluorescent SYPRO Orange dye (Invitrogen). Recombinant p53 core proteins were diluted to a final concentration of 2.5 µM in buffer (27) and incubated on ice for 15 min before SYPRO Orange was added to a final concentration of 5× (from a stock of 5000×). Samples were aliquoted into a 96-well PCR plate and sealed with optical quality sealing film (Bio-Rad). Thermal protein unfolding was carried out using an iCycler iQ real-time PCR system (Bio-Rad) by heating samples from 15 to 55°C in 0.5°C increments with a 30-s incubation at each increment. The fluorescence intensity was measured using excitation/emission wavelengths of 485/575 nm in relative fluorescent units (RFU), and the thermal denaturation graphs were plotted as the gradient of protein unfolding (d(RFU)/dT) against temperature.

Molecular Dynamics Methods-For the modeling studies, we used chain A from the crystal structure of the p53 core domain dimer bound to DNA (Protein Data Bank entry 2AHI, resolved at 1.85 Å) (33). This chain was chosen because it had the fewest disordered residues. Modeling studies were carried out using the AMBER9 (34) package. The missing atoms were built using standard geometries as implemented in AMBER. Mutants were constructed in silico using SCWRL (35). The parameters for phosphoserine were taken from the AMBER parameter database at the University of Manchester (available on the World Wide Web). The DNA-binding domain of p53 contains a zinc ion that is coordinated to three Cys residues and one His residue, and the parameters for this coordination were taken from earlier studies (36,37). Each system was solvated with a box of TIP3P (38) water molecules such that the boundary of the box was at least 10 Å from any protein atom. The positive charges in the system were balanced by adding chloride ions. The Parm99 force field was used for intermolecular interactions. The particle mesh Ewald method (39) was used for treating the long-range electrostatics. All bonds involving hydrogen were constrained by SHAKE (40). A time step of 2 fs was used for propagating the dynamics. Each system was initially minimized for 2000 steps using steepest descent and conjugate gradient minimizers to remove any unfavorable interactions between the protein and the solvent. This was followed by heating each system to 300 K over 75 ps under normal pressure/temperature conditions. Subsequently, each system was simulated for ~40-48 ns at constant temperature (300 K) and pressure (1 atm) (41), and the structures were stored every 2 ps for analysis. Analysis was carried out using VMD (42), and figures were made using PyMOL (43).

RESULTS

The Phosphomimetic p53 S269D Is in an Unfolded or Mutant Conformation-A variety of mechanisms could account for inactivation of p53 following phosphorylation at Ser 269 (24).
Inactivating p53 mutation of one neighboring residue within this MDM2-binding site in the S9-S10 linker region (p53 F270A ) has been shown to unfold p53 protein and promote "hyperubiquitination" of p53 in vivo (44). This is attributed to enhanced MDM2 binding affinity to destabilized, unfolded p53 mutants (like p53 R175H ) due to exposure of the second MDM2 binding site in the DNA-binding domain of p53 (29,45). Indeed, MDM2 protein preferentially binds to peptides derived from this con-formationally sensitive region of p53 (Fig. 1A, peptide 17), and phosphorylation of p53 at Ser 269 does not inhibit MDM2 interactions with this motif (Fig. 1B). We examined therefore whether p53 S269D is phenotypically equivalent to mutant, inactive, and unfolded p53 as defined by sensitivity to ubiquitin-like modification in cells. Immunoblotting lysates from H1299 cells transfected with wild type p53 and with p53 S269A and p53 S269D mutants reveals a ladder of higher molecular mass bands in lysates from cells expressing p53 S269A and p53 S269D mutants (Fig. 1C, lanes 7 and 8 versus lane 6), a phenomenon indicative of p53 ubiquitin-like modifications (29). The intensity of the high molecular mass p53 ladder of "ubiquitin-like adducts" was significantly increased when protein degradation was inhibited by treating H1299 cells with the proteasome inhibitor, MG-132 (Fig. 1C, compare lanes 1-4 with lanes 5-8). Thus, p53 S269D is phenotypically equivalent to p53 R175H in that it is inactive (24) and highly sensitive to ubiquitin-like adducts in cells. By contrast, because transfected p53 S269A can be more active than WT p53 at inducing elevated levels of MDM2 protein (24), the observed ubiquitin-like adducts of p53 S269A can be attributed to enhanced induction of endogenous MDM2 protein. This phenomenon has been observed previously using the gain-of-function wild type p53 mutants in the S9-S10 loop: p53 S261A and p53 S264A (44) (highlighted in Fig. 2 ) (24). We next evaluated whether the phosphomimetic p53 S269D protein exhibits a mutant conformation in vivo, thus explaining the inability of p53 S269D to act as a transcription factor (24). To test this, we used conformation-specific monoclonal antibodies to examine folding of the p53 isoforms. p53 alleles were transfected into H1299 cells, and the proteins were immunoprecipitated from lysates using PAb1620 and PAb240 monoclonal antibodies, which specifically recognize folded/wild type and denatured/mutant conformations of p53, respectively (46). The DO-1, PAb1620, and PAb240 antibodies immunoprecipitated equivalent levels of transfected wild type p53 ( Fig. 2A, lanes 4 -6, respectively). The equivalent amounts of PAb1620 (native/folded) and PAb240 (mutant/unfolded) reactive wild type p53 are due to the equilibrium that exists between the "folded" and "unfolded" states of p53 protein (47). Transfection of p53 S269A into cells also produced p53 protein in an equivalent folded and unfolded equilibrium ( Fig. 2A, lanes 7-9). However, the amount of p53 S269D immunoprecipitated using PAb1620 was significantly lower than that of wild type p53 ( Fig. 2A, compare lane 12 with lanes 6 and 9). This was not due to lowered expression of the p53 S269D mutant because blotting of the total cellular p53 pool shows equivalent expression of p53 wild type, p53 S269A , and p53 S269D forms (Fig. 2B). The ratio of folded to non-folded p53 S269D mutant was further examined by quantitative ELISA. 
Lysates from cells transfected with wild type and p53 S269A showed significant and comparable binding to PAb1620 (Fig. 2, C versus D). In contrast, p53 S269D showed significantly less binding to PAb1620 compared with wild type p53 (Fig. 2, C versus D), confirming that the phosphomimetic mutant is in a non-native conformation. Codon 269 is adjacent to Phe 270 and Asn 268 , and mutation of either residue to Ala 270 or Asp 268 can either destabilize or stabilize the p53 tetramer, respectively (29,44,48). For example, the N268D mutation can form an altered hydrogen bond network that links the S1 and S10 sheets of the ␤-sandwich in a more energetically stable manner. It is therefore possible that the S269D mutation destabilizes the p53 core domain in a manner similar to the F270A mutation (29). To determine whether loss of transcriptional activity of p53 S269D was due to reduced DNA binding, we examined the ability of the p53 core domain variants to bind DNA in a sequence-specific manner at 4°C. Recombinant wild type p53, p53 S269A , and p53 S269D core domain mutants were expressed and purified from E. coli. The wild type p53 and p53 S269A core domain proteins both demonstrated a concentration-dependent increase in binding to the p21 promoter sequence (Fig. 3). By contrast, p53 S269D did not bind to the p21 promoter element (Fig. 3), demonstrating that phosphomimetic mutation of p53 at serine 269 ablates its sequence-specific DNA binding function. In order to further evaluate whether the phosphomimetic mutant is in a misfolded conformation in cells, we analyzed whether this p53 mutant can interact with Hsp70. This molec-ular chaperone interacts with unfolded mutant forms of p53 and can target the mutant protein for degradation (49). Analysis of immunoprecipitated p53 shows that Hsp70 associates with the p53 S269D mutant (Fig. 4A, lane 4 versus lane 1) yet does not co-immunoprecipitate with wild type p53, supporting the hypothesis that mutation of serine 269 to aspartate leads to p53 unfolding. Surprisingly, p53 S269A was also found to co-immunoprecipitate Hsp70, despite its wild type-like activity (24). This suggests that this mutant (S269A) is partially destabilized, despite being fully active. In fact, biophysical studies (see below) confirm and indicate that p53 protein instability in vitro can be uncoupled from loss-of-function effects. Together, these findings indicate that mutation of serine 269 to the phosphomimetic aspartate form leads to p53 protein unfolding in vivo and suggests that phosphorylation of p53 protein at serine 269 may induce conformational changes within p53 that cause it to adopt a mutant conformation. This may account for the observed loss of DNA binding in vitro and lowered transcriptional activity toward endogenous p21 and . MDM2 binding to p53 peptides was determined by ELISA. Streptavidin-coated plates were coated with biotinylated peptides and incubated with MDM2, and the amount of MDM2 captured was determined using monoclonal 2A10 followed by chemiluminescence. The data are plotted as MDM2 binding (relative light units) as a function of MDM2 levels. B, effects of serine 269 phosphorylation on MDM2 binding to its p53-DNA-binding domain docking site. MDM2 binding to wild type and Ser 269 -phosphorylated p53 BOX-V domain peptide (LGRNSFEVR) was examined by ELISA as in A. C, mutation of p53 at codon 269 increases p53 ubiquitination. 
pcDNA expression vectors encoding p53, p53 S269A , or p53 S269D were transfected into H1299 cells (without or with 10 M MG-132 treatment for 4 h prior to harvesting). Lysates (10 g) were immunoblotted with DO-1 to detect total p53 and total ubiquitin-like modification of p53. Arrow indicates p53. R.L.U., relative light units. Error bars, S.D. Mdm2 promoters. Biophysical analyses were thus initiated to determine whether the phosphomimetic p53 S269D was in fact misfolded because these methodologies could reflect fundamental thermodynamic properties of the wild type and mutant p53 core DNA-binding domains. Aspartate Mutation of Codon 269 Produces a Mutant p53 Conformation; Implications for Control of p53 Folding by Phosphorylation at Serine 269-Intrinsic tyrosine or tryptophan fluorescence is highly sensitive to its local environment, and changes in fluorescence can reflect conformational changes, ligand binding, or denaturation. These properties have been useful in defining the conformational flexibility and thermodynamic instability of wild type p53 at physiological temperatures and for defining the enhanced instability of tumor-derived mutations in the p53 core domain (27,30,50). The structural integrity of p53 wild type, p53 S269A , and p53 S269D mutants was examined by studying the intrinsic fluorescent properties within the purified core domain. The p53 core domain contains a single tryptophan residue (Trp 146 ), which is not freely accessible on the surface and faces the interior of the p53 protein (PyMOL, Protein Data Bank entry 2FEJ, in solution). The p53 core domain also contains eight tyrosine residues, some of which are accessible on the surface of the protein and some of which are buried in the interior. Strong tyrosine fluorescence spectra were observed for the wild type p53 and p53 S269A core domain proteins, peaking at 302 and 303 nm, respectively (Fig. 5A). The intrinsic fluorescent properties of the tryptophan residue within the wild type p53 and p53 S269A core domains was also examined, and both showed similar tryptophan fluorescent spectra (Fig. 5B). The spectra of p53 S269A was, however, shifted to a longer wavelength (red shift) relative to the wild type core domain with a max peak of tryptophan emission at 350 nm compared with 346.5 nm (Fig. 5B). Previous studies have shown that natively folded p53 exhibits a tyrosine-dominated fluorescent spectrum with a maximum tyrosine emission at 305 nm (27,50), suggesting that like p53 wild type, p53 S269A exists in a native conformation. The red shift in tryptophan and tyrosine fluorescence may, however, suggest that p53 S269A differs from that of p53 wild type and possibly exists in a more open or flexible conformation. By contrast, the tryptophan and tyrosine fluorescent spectra obtained for the p53 S269D core domain mutant were very different from that of the p53 wild type (Fig. 5, A and B). We were unable to detect distinct peaks of tyrosine fluorescence characteristic of wild type p53 (Fig. 5A), and the tyrosine spectrum obtained for p53 S269D was only marginally above that of the background buffer. By contrast, a strong peak of tryptophan fluorescence was detected; however, the fluorescence spectra obtained did not exhibit the characteristic peak (ϳ346 nm) observed in the presence of the wild type p53 and instead displayed a significant red shift to longer wavelengths peaking with a max of 403.5 nm (Fig. 5B). 
Because the fluorescent spectrum of p53 core domain changes from a tyrosine-dominated to a tryptophan-dominated spectrum upon denaturation (27,50), these findings indicate that p53 S269D exists in a denatured or aggregated form and suggest that phosphomimetic mutation of p53 at serine 269 leads to denaturation of the core domain. 3, 6, 9, and 12). p53 in the immunoprecipitates was detected using CM-1 (panspecific p53 rabbit IgG) (A), whereas total expression of p53 protein isoforms was determined by immunoblotting lysates (B). The intensity of p53-reactive bands was quantified by Scion Image software, and the ratio of PAb1620-to PAb240-reactive p53 is indicated below A. C and D, H1299 cells were transfected with wild type or the indicated mutant p53 expression vectors, and p53 forms were captured on solid phase precoated with PAb1620 (C) or DO-1 (D) by incubations with lysate as indicated. Captured p53 was quantified using chemiluminescence. The data are plotted as p53 bound to the respective monoclonal antibody as a function of lysate concentration in relative light units (R.L.U.). Error bars, S.D. The DNA-binding domain of p53 is highly structured; however, the thermodynamic stability of this region is relatively low, and mutations can reduce the thermodynamic stability of p53 (51,52). To examine whether phosphomimetic mutation of serine 269 alters the thermodynamic stability of p53, we monitored the thermal unfolding transition of the p53 core domain using SYPRO Orange fluorescent dye, an environmentally sensitive dye whose fluorescence is low in aqueous solutions yet increases in a hydrophobic environment. This dye is a useful tool to monitor the transition in protein folding during thermal denaturation because exposure of the hydrophobic interior of a protein upon heating enhances the fluorescent intensity of the dye (53,54). SYPRO Orange fluorescence was measured at temperatures ranging from 15 to 50°C. The fluorescent emission in the presence of p53 wild type and p53 S269D began to increase around ϳ32-34°C and increased to a maximum around ϳ41-44°C (Fig. 6A). Thereafter, the fluorescent intensity decreased, presumably due to aggregation of unfolded p53 protein-dye complexes quenching emission. By plotting the gradient of fluorescent emission against the temperature (Fig. 6B), the midpoint temperature (T m ) of p53 core domain folding-unfolding transition can be obtained. Wild type p53 core domain displayed an unfolding transition peak with midpoint temperature of 40.5°C (Fig. 6B), whereas p53 S269A displayed a midpoint transition peak at 37°C (Fig. 6B), suggesting that wild type and p53 S269A core domains exist in two thermodynamically distinct states. Because the intrinsic fluorescence data suggests p53 S269A may exist in a more open conformation (Fig. 5, A and B), the lowered thermodynamic stability of p53 S269A may be due to greater conformational flexibility within the p53 S269A protein. Very little change in SYPRO Orange fluorescent emission was observed in the presence of p53 S269D (Fig. 6A), although a small but noticeable shoulder was obtained when the gradient of fluorescent emission in the presence of p53 S269D was plotted FIGURE 3. Phosphomimetic mutation of p53 at serine 269 ablates its specific DNA binding function. The DNA binding function of p53 containing codon 269 mutations was measured using a radiolabeled p21 DNA sequence using native gel electrophoresis. 
The binding activity of p53 wild type, p53 S269A , or p53 S269D (200, 300, and 400 ng) was determined in reactions containing the p21 promoter sequence. DNA-p53 complexes were resolved using a native polyacrylamide gel, dried, and detected by storage phosphor screen. Bound and free probe are highlighted by arrows. cells were transiently transfected with p53 wild type, S269A, or S269D, and total p53 was immunoprecipitated (IP) using DO-1. Co-immunoprecipitating Hsp70 was detected by immunoblotting (IB). Bottom panel, equal immunoprecipitation of p53 from lysates was determined by blotting with anti-rabbit p53 (CM-1). Total Hsp70 (B) and Hsp90 (C) levels in H1299 cells following transfection with p53 wild type, S269A, or S269D were determined by immunoblotting. FIGURE 5. Alanine or aspartate substitution of serine 269 generates conformational differences within the p53 core domain. Tyrosine (A) and tryptophan (B) fluorescence spectra of the wild type p53, p53 S269A , and p53 S269D core domain variants were determined and corrected for the presence of buffer background spectra as described previously (32). The fluorescent maxima for both tryptophan and tyrosine emission and the corresponding wavelength are detailed in the table below. cps, counts per second. against temperature (Fig. 6B). This gave p53 S269D a calculated midpoint temperature of transition of 29.5°C. This is significantly lower than T m calculated for wild type p53 and p53 S269A , suggesting that phosphomimetic mutation of serine 269 increases the thermodynamic instability of the p53 core domain. Little if any unfolding of p53 S269D was observed at the temperatures required to unfold the wild type p53 and p53 S269A proteins, suggesting that only a small proportion of the purified p53 S269D core domain pool has a thermostability similar to that of wild type p53. The intrinsic fluorescent data show that p53 S269D exists in an unfolded, aggregated form (Figs. 5A and 6B). This would account for the lack of SYPRO Orange binding because greater melting temperatures may be required to unfold higher order aggregates. To further examine the differences in thermodynamic stability of p53 and p53 S269A , we investigated the impact ligands would have on thermal unfolding transitions. p53 consensus DNA had a significant effect on the thermal stability of both wild type p53 and p53 S269A . The unfolding transition peak of p53 wild type was increased to 43, 44.5, and 46.5°C in the presence of 1, 3, and 8 M consensus DNA, respectively (Fig. 6C). Because the consensus DNA did not display any change in fluorescence above background at any of the temperatures tested (data not shown), these data indicate that binding to consensus DNA induces changes that significantly enhance the thermodynamic stability of the p53 core domain. Consensus DNA also altered the fluorescent profile of the p53 S269A mutant (Fig. 6D). The unfolding kinetics of p53 S269A was shifted in the presence of 1, 3, and 8 M consensus DNA, and the unfolding transition midpoint temperatures were increased from 37 to ϳ42.5°C (Fig. 6D). These data show that, like that of wild type p53, the stability of p53 S269A is significantly enhanced upon binding to DNA. However, DNA binding does not overcome the marginal thermal instability incurred by alanine mutation of serine 269 because even in complex with DNA, the p53 S269A core domain remains slightly less thermodynamically stable than the wild type p53 core domain complexed to DNA (T m of ϳ42.5°C versus T m of ϳ46.5°C), and the FIGURE 6. 
Thermal stability of p53 serine 269 substitution mutants. A, thermal melting profile of p53 mutants. Unfolding of p53 mutants was monitored between 15 and 50°C using SYPRO Orange fluorescent dye, and melt curves were generated by plotting SYPRO Orange fluorescence as a function of temperature in relative fluorescent units (RFU). B, phase transitions in the thermal melting profile. The gradient of p53 unfolding was plotted as a function of temperature to obtain the midpoint temperature transition (T m ) for each variant. C and D, effect of ligands on p53 unfolding. Unfolding of p53 wild type (C) and p53 S269A (D) was measured between 15 and 60°C in the presence of increasing concentrations of p21 consensus DNA. The midpoint unfolding transition temperatures are indicated below. Thermal analyses were performed in triplicate; however, a single trace representative of each mutant is shown. p53 S269A -DNA complex appears structurally distinct from the wild type p53 core domain-DNA complex. As a control, the addition of magnesium ions did not qualitatively alter the unfolding kinetics of wild type p53 or p53 S269A (data not shown). Two Conformationally Flexible Loops Containing Phosphoacceptor Sites Regulate p53 Conformation and Activity-Characterization of the codon 269 mutants has shown that the alanine mutant can induce a more open conformation as defined by intrinsic fluorescence and that this mutant does not reduce the specific activity of p53 in cells. However, the Ala 269 mutant does increase thermoinstability in vitro and increases Hsp70 interactions in vivo. By contrast, the phosphomimetic mutant p53 S269D is highly destabilized, inactive, and characteristic of unfolded mutant p53. These data suggest first that this conformationally flexible loop in p53 has evolved a serine residue to maintain a degree of intrinsic inflexibility (i.e. flexibility or a more open conformation can be improved upon by alanine mutation of codon 269). Second, the serine has evolved to function as a phosphoacceptor site whose phosphorylation drives the unfolding and destabilization of wild type p53. This would create a rapid and flexible mechanism to convert the wild type p53 tetramer to a mutant conformation. Although the physiological functions of this switch are not yet evident and will require additional research, the existence of this phosphorylation site provides a novel mechanism to unfold and inactivate wild type p53 in vivo. In order to further evaluate the evolutionary significance of this post-translational mode of p53 inactivation, we also examined a previously identified phosphorylation site of p53 at Ser 215 . Phosphorylation at the Ser 215 site of p53 was previously shown to occur in cells, and phosphomimetic mutation at Ser 215 created a transcriptionally inactive p53 (22). The mechanism accounting for p53 inactivation following Ser 215 phosphorylation has not been described. In analyzing this Ser 215 phosphorylation site, we noticed that it occurs at another socalled conformationally flexible epitope, bound by the monoclonal antibody PAb240 (55). This mAb was the original probe used to demonstrate that mutant p53 protein is unfolded in human cancer cells (56). We evaluated the alanine-substituted mutants of codon 215 mutants to determine whether this flexible motif behaved like the codon 269 mutants and secondarily whether these two conformationally flexible motifs interact allosterically and alter the folding of the p53 DNA-binding domain. 
For example, we could predict that Ala 215 mutation could overcome the Asp 269 mutation or vice versa. Alternatively, because Ala 269 and Ala 215 are individually fully active but with enhanced conformational flexibility, it could be predicted that the double mutant S215A/S269A would be more active than WT p53. However, neither of these two outcomes was observed (see below). The structural integrity of wild type p53, p53 S215A , p53 S269A , and the double mutant p53 S215A/269A was tested to evaluate these possible outcomes. Tyrosine fluorescence spectra observed for the wild type p53, p53 S215A , p53 S269A , and the double mutant p53 S215A/S269A core domain proteins peaked from ϳ300 to 301.5 nm (Fig. 7A), indicating that all alanine-substituted mutants were similar in their folding properties to wild type p53 and that the double alanine-substituted mutant did not have characteristics of unfolded mutant p53 (Fig. 5). However, the intrinsic tryptophan fluorescence within the p53 S215A and the double mutant p53 S215A/S269A core domains also revealed differences similar to those seen with the p53 S269A mutant. The spectrum of p53 S215A was shifted to a longer wavelength (red shift) relative to the wild type core domain although not to the degree observed using p53 S269A (Fig. 7B). The double mutant p53 S215A/S269A also exhibited a red shift in intrinsic tryptophan fluorescence that is also indicative of a more open or flexible conformation. There was no apparent synergy in red shift using the double alanine-substituted p53 mutant. Together, these studies suggest that, like p53 S269A , p53 S215A also exists in a native and more "opened" conformation. The stability of wild type p53, p53 S215A , p53 S269A , and the double mutant p53 S215A/S269A proteins was also examined in thermal shift assays in order to compare the rate of p53 S215A unfolding with that of p53 S269A . Relative to wild type p53, which exhibits a thermal transition at 41°C, both Ala 215 and Ala 269 mutations showed an equivalent increase in the thermal instability of p53 by approximately the same temperature of 4°C (Fig. 8, A and B). However, the double mutant displayed a further enhanced thermoinstability 7°C below wild type p53 (Fig. 8). Together, these data indicate that each of these conformationally flexible loops maintains p53 thermostability to equivalent degrees, and removing both serine residues precludes further the acquisition of a wild type conformation. The thermal shift assays were also performed in the presence of increasing DNA concentrations, where enhanced thermostability of ϳ4 -5°C was observed using wild type p53, p53 S215A , and p53 S269A proteins (Fig. 9, A-C). FIGURE 7. Alanine substitution of serine 215 also generates conformational differences within the p53 core domain. Tyrosine (A) and tryptophan (B) fluorescence spectra of the wild type p53, p53 S215A , p53 S269A , and p53 S215A/S269A core domain variants were determined and corrected for the presence of buffer background spectra as described previously (32). The fluorescent maxim for either tryptophan or tyrosine emission and the corresponding wavelength are detailed in C. RFU, relative fluorescent units. The double mutant p53 S215A/S269A exhibited an increased thermal shift of 2.5°C (Fig. 9D), indicating that the protein can still interact with the DNA ligand (as seen in Fig. 9F) but not to the extent of wild type p53 or the single alanine-substituted mutants (see Fig. 9E). 
We finally evaluated the activity of the p53 S215A , p53 S269A , and p53 S215A/S269A and mixed alanine/aspartate mutants in cells to determine whether single and double loop mutants affected p53 function in vivo. As observed using the single S269A or S269D mutant p53s (Fig. 10, lanes 4 and 5), alanine or aspartate mutations at codon 215 produced a p53 that is either fully active or inactive, respectively (Fig. 10, D and E, lane 2 versus lane 3). The double mutants provided additional evidence that p53 conformational integrity can be controlled by the surface loop mutants. The double alanine mutant (S215A/S269A), although in a wild type conformation as defined by (i) enhanced flexibility (Fig. 7), (ii) significant destabilization in thermal shift (Fig. 8), and (iii) ability to be stabilized by DNA in vitro at 4°C (Fig. 9), was completely inactive as a transcription factor as defined by loss of p21 and MDM2 protein production (Fig. 10, lane 6). The mixed aspartate/alanine mutant was also inactive (Fig. 10, lanes 7-9). Together, these data suggest that p53 has evolved two serine residues that maintain a degree of thermodynamic stability but whose individual phosphorylation can unfold and produce a p53 with a mutant-like conformation. The negative dominance of the double alanine-substituted mutant further suggests that at least one of the two serine residues is required to maintain thermostability of p53. The Effects of Phosphoserine on p53 Conformation Using Molecular Dynamics Simulations-Molecular dynamics simulations were carried out on the core domain of wild type p53 and its phosphorylated states (phosphorylated at Ser 215 and separately at Ser 269 ). We also carried out simulations on S269D, S269A, and S215A/S269A double mutants. In the wild type state, molecular dynamics simulations show (Fig. 11A, ii) that the side chain of Ser 269 makes hydrogen bonds with Gln 100 and Thr 102 and also with solvent water molecules (the crystal structure by itself does not show any hydrogen bonds between Ser and the other two residues). The N-terminal loop is tethered to the surface of the protein by a few hydrogen bonds, and these are made largely with the S10 ␤-strand. Upon phosphorylation at Ser 269 , the highly negatively charged phosphate moiety involves Gln 100 and Thr 102 in hydrogen bonds for ϳ20 ns (Fig. 11A, iii). However, with no cationic residues in the vicinity of this phosphate to stabilize it through salt bridges, the negatively charged phosphate is energetically more stable if well solvated by water molecules. As this site fills with water, the N-terminal loop moves away from it by up to 8 Å after 45 s (Fig. 11A, iv). These events are also accompanied by destabilization of the DNA binding regions and the dimerization interface. When Ser 269 is mutated to alanine, the hydrogen bonds that Ser 269 made with Gln 100 and Thr 102 are replaced by transient hydrogen bonds to solvent and to various surrounding atoms (Fig. 11B, i). The side chain of Ala is stabilized by the hydrophobic side chains of Leu 252 and Ile 254 located on ␤-strand S9. Overall, this mutation does not appear to cause as much perturbation as the phosphorylation at Ser 269 . In contrast, the S269D mutation initially maintains the hydrogen bond with Gln 100 and Thr 102 for ϳ40 ns, after which a major conformational change in the N-terminal loop occurs, which is accompanied by a flip of Lys 101 (its terminal amine moves by ϳ11 Å). 
This results in the formation of a salt bridge with the side chain of Asp 269 (the hydrogen bond with Gln 100 is lost; Fig. 11B, ii). This is also accompanied by the formation of a new hydrogen bond between Asp 269 and Gln 130 that in turn perturbs the spatially contiguous DNA binding region. Together, these data highlight the importance of the interaction of the S10-␤ strand and the motif preceding the first ␤-strand in the DNA-binding domain of p53. Ser 215 is initially hydrogen-bonded to the backbone of Leu 206 . Within 2 ns, Arg 209 , which is located on the loop connecting S6 and S7 and is exposed to solvent in the crystal structure, undergoes a local conformational change and moves ϳ11 Å and makes a salt bridge with Asp 258 (located on S4), which is also hydrogen-bonded to Arg 158 . In this process, Ser 215 gets buried (Fig. 11B, iv). Phosphorylation at Ser 215 initially leads to the formation of a salt bridge between the phosphate and Arg 158 . The cavity near the phosphate initially widens as the phosphate is also hydrated. Accompanying this is a conformational change in the S6-S7 loop and an ϳ11 Å movement of Arg 213 by ϳ10 ns as it moves to make a salt bridge with the phosphate (Fig. 11B, iv). It appears that this takes place because the space between the phosphate and Arg 213 is separated by only a few water molecules when the phosphate is temporarily hydrated. This gives the space between the negatively charged phosphate and the positively charged Arg 213 a low dielectric character, which results in the Arg 213 moving toward the phosphate. The resulting salt bridge extends an existing network that involves Arg 213 -Ser(P) 215 -Arg 158 -Asp 258 -Arg 156 , and this remains quite robust. During the transition of Arg 213 , a hydro-phobic interaction between Pro 98 , Met 160 , Ile 162 , and Thr 211 that appears to weakly tether the N-terminal loop to the surface of p53 is broken by the migrating Arg 213 . This triggers the peeling away of the N-terminal loop and also results in destabilization of regions that are at the dimerization and DNA binding interface. In the S215A/S269A double mutant, the lack of hydrogen bonds between the side chain at 215 or 269 with Leu 206 and FIGURE 9. The effect of consensus DNA on the stability of p53 core domains S215A and S269A mutants. Unfolding of p53 wild type (A), p53 S215A (B), p53 S269A (C), and p53 S215/269A (D) was measured between 15 and 60°C in the presence of increasing concentrations of consensus DNA. The midpoint unfolding transition temperatures are indicated in E. Thermal analyses were performed in triplicate; however, a single trace representative of each mutant is shown. F, the binding activity of p53 wild type, p53 S215A , p53 S269A , or p53 S215A/S269A (300 or 600 ng) was determined in reactions containing the p21 promoter sequence. DNA-p53 complexes were resolved using a native polyacrylamide gel, dried, and detected by storage phosphor screen. Bound and free probe are highlighted by arrows. Gln 100 , respectively, leads to local destabilization, which propagates in such a manner that the loop containing Leu 206 loses its interactions with the Pro 98 region, undergoes a conformational change, and is accompanied by the moving away of the N-terminal loop (Fig. 11B, iii). 
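The loop movements and salt-bridge or hydrogen-bond changes described in this section are the kind of events that can be followed as simple distance time series over a stored trajectory. The sketch below illustrates this idea with MDAnalysis, tracking the separation between the Ser 269 side-chain oxygen and the Gln 100 side chain; the topology/trajectory file names and the specific atom selections are assumptions for illustration, not files or choices taken from the study.

```python
import numpy as np
import MDAnalysis as mda

# Hypothetical AMBER output files; the study's actual file names are not given.
u = mda.Universe("p53_core.prmtop", "p53_core_300K.nc")

# Side-chain atoms whose hydrogen bond is discussed in the text (selections assumed).
ser269_og = u.select_atoms("resid 269 and name OG")
gln100_ne2 = u.select_atoms("resid 100 and name NE2")

distances = []
for ts in u.trajectory:  # iterate over stored frames (e.g. one every 2 ps)
    d = np.linalg.norm(ser269_og.positions[0] - gln100_ne2.positions[0])
    distances.append(d)

distances = np.array(distances)
print(f"mean Ser269(OG)-Gln100(NE2) distance: {distances.mean():.2f} A")
print(f"fraction of frames beyond 3.5 A (contact lost): {np.mean(distances > 3.5):.1%}")
```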
DISCUSSION Unfolding of the p53 Core DNA-binding Domain by Phosphorylation at Ser 269 -The transcriptional activity of p53, its stability, and its turnover are regulated by multiple post-translational modifications, including acetylation, ubiquitination-like modifications, and phosphorylation (57). Ubiquitination targets p53 for degradation by the proteasome, p53 acetylation facilitates recruitment of co-activators, and phosphorylation facilitates changes in its specific activity as a transcription factor by altering protein-protein interactions (58). The binding of MDM2 to the N-terminal LXXLL motif of p53 induces conformational changes within MDM2 that enhance its binding to the ubiquitination peptide signal (SXXLXGXXXF) within the flexible linker in the DNA-binding domain of p53 (26,59). Phosphorylation of p53 also requires allosteric interactions between kinases that bind within the flexible linker in the DNA-binding domain or at sites in the tetramerization domain. Thus, this central flexible motif (Fig. 2) (24) forms a common docking site that allosterically modulates the activity of several enzymes that regulate p53 function. This underscores the importance of the flexible linker in the DNA-binding domain of p53 in coordinating incoming stress signals and p53 activity. In the accompanying study (24), the p53 multiprotein docking domain containing the MDM2 ubiquitination signal is shown to harbor a phosphorylation site at Ser 269 (Fig. 11). The inactivity of p53 S269D in cells suggests that this phosphorylation would be inactivating. In this paper, we have characterized biochemically this phosphomimetic p53 mutant to determine how phosphorylation could inactive p53. The phosphomimetic mutant destabilized the p53 core DNA-binding domain and reduced sequence-specific DNA binding, thus explaining its reduced transactivation potential. Molecular dynamics simulations also suggest that the introduction of negatively charged phosphate groups at either Ser 215 or Ser 269 requires either solvent water or positive charges to stabilize it. There are no positively charged residues near Ser 269 , so solvent waters preferentially hydrate this site, leading to the N-terminal loop in its vicinity being pushed away, accompanied by the exposure of the second MDM2 binding site in the S10 ␤-strand. In contrast, two positively charged residues, Arg 158 and Arg 213 are coordinated by Ser(P) 215 ; the latter results from a large conformational change, which also disrupts the interaction of the N-terminal loop with the p53 surface. In the case of S269D, the planar anionic environment of the carboxyl group leads to the formation of a novel salt bridge with Lys 101 that is accompanied by local structural rearrangements. Both events lead eventually to destabilization of the p53 core DNA-binding domain (Fig. 11). In light of our current studies, the interaction between the S10-␤-strand (containing the MDM2 binding site and the Ser 269 phosphorylation site) and the motif (residues 100 -103) preceding the S1-␤-strand requires further evaluation. This flexible motif between positions 100 and 102 is not as well studied, given that a significant number of p53 DNA-binding domain analyses begin with residue 102/103, which is just prior to the start of the structured S1-␤-strand. However, there are previous reports suggesting an important role for this motif. 
First, although residues 100 -102 do not generally represent hot spots in a large range of human cancers, there is a report indicating that it is a hot spot for mutations in adrenal cancers (60). These mutations in this motif can include Q100R, K101N, and T102I, any of which could destabilize the interaction of this motif with the S10-␤-strand (Fig. 11A). Additionally, NMR studies have shown that conformational changes can be detected in this motif of p53 (residues 101-103) after the interaction with peptides from the central domain of MDM2 (61). This might lead to "unfolding" of p53 as a prerequisite for substrate ubiquitination catalyzed by MDM2. A recent structural analysis of p53 tetramers also suggested an important role for Gln 100 and Lys 101 at the dimer interface (62), which we have not evaluated in our current study. The Glu 224 of one monomer forms a pair of hydrogen bonds with the main-chain amide of Gln 100 and Lys 101 of an adjacent monomer. Also, Lys 100 from one monomer donates a hydrogen bond to the carbonyl of Val 225 of an adjacent monomer. These clinical and biophysical data suggest an important functional interaction between the S10-␤-strand and the flexible motif (residues 100 -103) and suggest a mechanism underlying the ability of Ser 269 phosphorylation to destabilize p53 conformation. The overproduction of the kinases and phosphatases that target this serine 269 site on p53 has implications for how cells might produce two distinct forms of wild type p53 in response to DNA damage; (i) one form of wild type p53 is transcriptionally active coordinated in part by phosphorylation at sites including Ser 20 and Ser 15 , and (ii) the other form of wild type p53 with a radiation-induced phosphorylation at Ser 269 (24) is inactive and would have function independent of the classic wild type p53 transcription program. Because mutant forms of p53 have distinct subcellular localizations and functions (63), the phospho-Ser 269 form of wild type p53 protein might have similar physiological functions. The identification of the kinase and phosphatases that regulate this equilibrium in wild type p53, between hypophosphorylation and hyperphosphorylation at Ser 269 , has implications for wild type p53 inactivation in human cancers. The kinases and/or kinase-signaling pathways that target the Ser 269 in vivo remain to be defined. The Role of the Two Conformationally Flexible Phosphorylation Sites in Regulating the Folding of the p53 DNA-binding Domain-The core DNA binding drives specific DNA binding and forms a highly structured domain (52). The ␤ sandwich loop-sheet-helix motif in p53 is stabilized by many hydrophobic and electrostatic interactions; however, the thermodynamic stability of this region is low, and missense mutations can easily destabilize the core domain (52). There are three major classes of p53 mutants: (i) DNA contact mutations (R273H) remaining in a folded conformation, (ii) weakly destabilized and partially unfolded mutants (G245S) that are unable to bind DNA, and (iii) mutants (R175H) that are highly destabilized and globally unfolded and aggregate at physiological temperatures (50). Phosphorylation of p53 (or S269D phosphomimetic mutation) should lead to global unfolding of the core domain and create a mutant with characteristics like p53 R175H . Not all p53 mutations inactivate transcriptional activity; some mutants show a gain of function on a subset of genes that may promote tumorogenic growth (reviewed in Refs. 64 and 65). 
Many gain-of-function mutants show altered selectively for subsets of p53 target genes. For example, p53 R213Q is able to potently induce Mdm2 yet is unable to induce expression of apoptosis-inducing genes PIG3 and PIG11 (66). Other mutant forms of p53 display differential gene expression through binding to DNA at sites lacking p53-responsive elements or as a result of altered interaction with transcription factors such as Ets1 and Sp1 and act in a manner opposite that of wild type p53 (67). The increase in specific activity and thermoinstability of p53 S269A appears different mechanistically from the incremental increase in p53 thermostability and activity previously induced via mutation of up to four amino acids in the DNA-binding domain (48,68). By contrast to the inactive S269D mutant p53, p53 S269A forms a control for specificity because it was (i) more active than the wild type p53 at inducing endogenous p21 and Mdm2, (ii) more flexible as defined by red shift in tryptophan fluorescence, and (iii) more thermally unstable as defined by thermal shift assays. Simulations revealed that although the S269A mutation looses the hydrogen bonding with Gln 100 and Thr 102 , providing a rationale for reductions in thermostability (Fig. 7), Ala 269 does not form a destabilizing bridge with this chain (Fig. 11) because p53 activity in cells can be partially improved upon by the serine to alanine substitution at codon 269 and produce a structurally modified form of p53 with a reduced thermostability but with an elevated specific activity in cells. The latter finding suggests that increasing the flexibility and thermoinstability of p53 can perhaps increase its ability to adopt different conformations with transcriptional components in cells. A phosphorylation site was also previously reported at Ser 215 , but a mechanism to account for p53 inactivity was not demonstrated (22). This site is notable in that it contains the conformationally flexible PAb240 monoclonal antibody epitope that can be used to define p53 mutant unfolding in human cancers (56). Similar to the p53 S269A protein, p53 S215A is as active as wild type p53, but the phosphomimetic mutant p53 S215D is thermally unstable (Figs. 7-9). These data suggest that both Ser 215 and Ser 269 phosphorylation convert wild type p53 to the mutant misfolded conformation. Surprisingly, the double Ala mutant protein, p53 S215A/S269A , is completely inactive in cells and is significantly more thermally unstable than the individual, active alanine mutants (Figs. 7-9). Molecular dynamics simulations also suggest that the double alanine mutant is unable to maintain the intradomain interactions required to maintain p53 conformation (Fig. 11B, iv) and suggest an important dual role for Ser 215 and Ser 269 in stabilizing WT p53. A previous report also highlighted a novel role of phosphorylation in driving destabilization of a target protein. The splicing regulatory protein KSRP has a phosphoacceptor site at Ser 193 whose phosphorylation leads to unfolding of the protein as defined by NMR (70). This unfolding allows access of the phosphomotif to 14-3-3 that in cells regulates compartmentalization of the protein. Similar to our studies on p53, biophysical studies demonstrated that an alanine mutation at codon 193 has little effect on the stability of the protein, but asparate mutation increases significantly the thermoinstability of KSRP. The hydration of phosphate observed here from the molecular dynamics simulations (Fig. 
11) may be a feature that is more ubiquitous. A recent report examining the binding of the POLO box binding domain to phosphopeptides (71) using molecular dynamics simulations found similarly that negatively charged phosphate groups can contribute significantly to binding by stabilizing several waters of hydration; indeed, the authors found that these very same water molecules in the absence of phosphorylation, are energetically unfavorable. Further work on other systems will clearly reveal more information on how water molecules of hydration modulate interactions in biology. In summary, biophysical studies have shown that the p53 core domain is thermodynamically unstable and that phosphorylation in the DNA-binding domain could inactivate p53 by enhancing intrinsic thermoinstability. This reversible thermoinstability of p53 is the feature by which there is promise in reactivating mutant, inactive p53 (72). p53 stability can be enhanced by interacting proteins, and evidence for this dynamic regulation through docking interactions has emerged in recent years as an additional layer of regulation for the complex signaling network (69,73). In this study, we show that phosphomimetic mutation within the MDM2 ubiquitination signal also destabilizes p53 structure and function. This provides a model to describe how phosphorylation at Ser 269 would inactivate p53 in cells. Because the Ser 215 phosphorylation site in another conformationally flexible loop also has a phosphorylation site, these data indicate that the two surface loop serine residues can function not only as phosphorylation sites but that they together are required to maintain the thermostability of p53. This also identifies an intriguing paradigm in p53 protein evolution; it has two kinase phosphoacceptor sites that regulate the thermostability of p53 by virtue of controlling the equilibrium between folded and unfolded states.
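As a closing illustration of the thermal shift analysis used throughout this study (Figs. 6, 8, and 9), the sketch below shows one way a melting midpoint (Tm) could be estimated from a SYPRO Orange melt curve by locating the peak of d(RFU)/dT. The temperature grid and the synthetic curve are illustrative assumptions, not data from the study.

```python
import numpy as np

def melting_midpoint(temps_c, rfu):
    """Estimate Tm as the temperature at the peak of d(RFU)/dT.

    temps_c : 1-D array of temperatures (deg C), e.g. 15-55 in 0.5 deg steps
    rfu     : 1-D array of SYPRO Orange fluorescence readings (RFU)
    """
    temps_c = np.asarray(temps_c, dtype=float)
    rfu = np.asarray(rfu, dtype=float)
    d_rfu_dt = np.gradient(rfu, temps_c)   # first derivative of the melt curve
    return temps_c[np.argmax(d_rfu_dt)]    # Tm = temperature of maximal slope

# Illustrative use with a synthetic sigmoidal melt curve centered at 40.5 deg C
temps = np.arange(15.0, 55.5, 0.5)
synthetic = 1.0 / (1.0 + np.exp(-(temps - 40.5) / 1.5))
print(melting_midpoint(temps, synthetic))  # prints ~40.5
```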
Is real world evidence influencing practice? A systematic review of CPRD research in NICE guidances Background There is currently limited evidence regarding the extent Real World Evidence (RWE) has directly impacted the health and social care systems. The aim of this review is to identify national guidelines or guidances published in England from 2000 onwards which have referenced studies using the governmental primary care data provider the Clinical Practice Research Datalink (CPRD). Methods The methodology recommended by Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) was followed. Four databases were searched and documents of interest were identified through a search algorithm containing keywords relevant to CPRD. A search diary was maintained with the inclusion/exclusion decisions which were performed by two independent reviewers. Results Twenty-five guidance documents were included in the final review (following screening and assessment for eligibility), referencing 43 different CPRD/GPRD studies, all published since 2007. The documents covered 12 disease areas, with the majority (N =7) relevant to diseases of the Central Nervous system (CNS). The 43 studies provided evidence of disease epidemiology, incidence/prevalence, pharmacoepidemiology, pharmacovigilance and health utilisation. Conclusions A slow uptake of RWE in clinical and therapeutic guidelines (as provided by UK governmental structures) was noticed. However, there seems to be an increasing trend in the use of healthcare system data to inform clinical practice, especially as the real world validity of clinical trials is being questioned. In order to accommodate this increasing demand and meet the paradigm shift expected, organisations need to work together to enable or improve data access, undertake translational and relevant research and establish sources of reliable evidence. Background It is generally agreed that the provision of healthcare should be based on evidence, principally so that a patient receives the best advice or treatment for their condition [1]. As medical evidence is vast and at times contradictory, it is important to have a standard format which presents the evidence for a specific disease or treatment in a way that will help healthcare professionals to grasp and apply such evidence in everyday practice [2]. Standardised evidence also helps to address issues such as inappropriate variability among healthcare professionals in the provision of care [3]. Examples of this are guidelines and guidances. Guidelines and guidances are documents which incorporate current evidence from reviewed sources in order to develop clear and comprehensive recommendations on the prevention, treatment and care of patients with specific diseases and conditions [4]. These documents can then be used by health and social care professionals to support their decision-making on the care of a patient. The benefits of using guidelines/guidances include the rapid dissemination of updates and changes in clinical practice and the ability to tailor treatment to different clinical situations [5]. Within the UK, the main body responsible for the generation and publication of guidelines and guidances is the National Institute for Health and Care Excellence (NICE) whose primary objective is to advise professionals working in the National Health Service (NHS) on how to provide the highest achievable standard of care [6]. 
Although the process for developing these documents has moved from being primarily expert knowledge based to being primarily evidence based, there is still concern regarding the sources of evidence. There is heavy reliance on Randomised Controlled Trials (RCTs) for generating evidence for clinical guidelines as (according to many 'hierarchies' of evidence) they are thought of as the 'gold standard' [7]. However, there are several disadvantages which make evidence from RCTs appear less practical in terms of application to patient care, a key one being the fact that RCTs are generally conducted under controlled conditions on a small number of patients over a fairly short period of time. Even if treatment proves effective in the trial, this does not mean the same effect will translate into the general population as patients in the 'real world' can often be more diverse in terms of age, ethnicity, gender and tend to have more comorbidities which may have an impact on the efficacy of a treatment [8]. Therefore there is a limit to the type of evidence that can be generated from RCTs to address key clinical questions which clinicians face on a daily basis [9]. Additionally, there is also the cost of running clinical trials [10] and the increased interest in obtaining return on investment in healthcare [1]. One possible solution is the use of routinely collected data or clinical databases, research outputs of which are often collectively called Real World Evidence (RWE). Since the transition of paper healthcare records to Electronic Health records (EHRs) it has been possible to create large datasets containing important information such as clinical events, laboratory results, treatment history, etc. [11] These are often referred to as big data or Real World Data (RWD) and present several advantages to health care: they help to strengthen current understanding of healthcare delivery and the outcomes of patients [12], they greatly increase the potential of generating new knowledge as researchers can work to answer important clinical questions (which may otherwise not have been possible) [12] and they can support the development of evidencebased personalised medicine through the linking of EHRs to genomic datasets [13]. EHRs may also enable patients to take a more active role in their healthcare by presenting their health records to other healthcare professionals, if and when necessary [14]. Lastly (and in this case more importantly), RWD could help with the dissemination of key information by bridging the knowledge gap for clinicians and by improving the quantity and quality of evidence used in guidelines and guidances. Best evidence can only be generated when starting with the best data [4]. An example of such a database is the Clinical Practice Research Datalink (CPRD). CPRD (previously the General Practice Research Database) is one of the largest longitudinal databases in the world containing anonymised EHR data (e.g. demographics, symptoms, behavioural factors, tests, etc.) for 11.3 million patients in the UK [15]. CPRD has been used in over 1500 observational research studies covering a variety of disease and therapeutic areas [16]. However, it is currently not known to what extent CPRD studies have been used to inform clinical practice. In this context, this study aimed at systematically reviewing the literature to identify guidelines or guidances published from 2000 onwards in England which have referenced studies using RWD from the CPRD. 
The review has focused particularly on governmental organisations (in terms of data providers and guideline developers) as the UK healthcare system is one of the best integrated systems globally. Operational definitions Guidances and guidelines were categorised according to the definitions provided by the NICE. The categories were as follows: NICE Clinical Knowledge Summaries: "A readily accessible summary of the current evidence base and practical guidance on best practice in respect of over 330 common and/or significant primary care presentations" [17]. Technology appraisals guidance: "Recommendations on the use of new and existing medicines and treatments within the NHS" [18]. Clinical guidelines: "Recommend how healthcare professionals should care for people with specific conditions" [19]. Results which did not fit these categories but focused on the delivery of medications were defined as 'prescribing' guidelines. Search strategy This systematic review adopted the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [20] and was in line with the protocol agreed by all authors. Guidelines and guidances of interest were identified by a systematic search of four databases from 1st January 2000 to 21st March 2016 (last day of search update): NICE Evidence Search, Medline PubMed, Embase and the National Clinical Guideline Centre. All four databases were searched for guideline/guidance documents referencing studies using data from CPRD using combinations of the following keywords: "CPRD", "Clinical Practice Research Datalink", "GPRD", and "General Practice Research Database". Indicatively for Medline the following algorithm was used: (((CPRD OR "Clinical Practice Research Datalink" OR GPRD OR "General Practice Research Database")) AND "guideline"[Publication Type]) AND ("2000/01/01"[Date -Publication]: "3000"[Date -Publication]). A search diary recording the search results for each database and the meeting of the inclusion/ exclusion criteria for each document was maintained. The specific inclusion criteria were as follows: 1) a UK guideline/ guidance and 2) references research using data from CPRD or GPRD. Documents were excluded if they met one or more of the following criteria: 1) Irrelevant, 2) Not written in English, 3) Not a guidance or guideline (any other primary or secondary research paper), 4) Only mentions CPRD/GPRD (e.g. as a potential source for future studies), 5) Not available (as being updated) and 6) Draft documents or in consultation. Reference lists of all studies previously identified as having met the inclusion criteria were also manually reviewed for additional relevant documents. The search and assessment of eligibility for included studies were performed by two reviewers working independently. Any duplicate documents were consolidated. All decisions were reached by consensus, with the addition of a third reviewer where required. A relevant PRISMA flow chart was constructed to detail the number of papers retrieved and the steps undertaken. Data extraction All data were extracted by two independent investigators and consensus was reached after the involvement of a third investigator where required. Full text was available for all documents. The following information was extracted for each identified document meeting the inclusion criteria: title, year of publication, disease area, the CPRD studies cited, the exact sentences referencing and the type of guideline/ guidance and the references. 
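The Medline search strategy described above can also be reproduced programmatically against PubMed through the NCBI E-utilities interface. The sketch below is illustrative only: it submits the reported search string to the esearch endpoint, and its hit count will differ from the review's totals, which drew on four databases and manual screening.

import requests

# NCBI E-utilities endpoint for PubMed searches.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Medline search string reported in the Methods, restricted to guideline
# publications dated from 1 January 2000 onwards.
term = ('(((CPRD OR "Clinical Practice Research Datalink" OR GPRD OR '
        '"General Practice Research Database")) AND "guideline"[Publication Type]) '
        'AND ("2000/01/01"[Date - Publication] : "3000"[Date - Publication])')

# NCBI recommends also identifying your tool/email for regular use.
response = requests.get(
    ESEARCH,
    params={"db": "pubmed", "term": term, "retmax": 500, "retmode": "json"},
    timeout=30,
)
response.raise_for_status()
result = response.json()["esearchresult"]

print("records found:", result["count"])
print("first PMIDs:", result["idlist"][:10])

Equivalent queries would need to be adapted for Embase, NICE Evidence Search and the National Clinical Guideline Centre, which do not share this interface.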
Each guidance document was categorised by disease area following the categories of the British National Formulary (BNF). The information was then summarised and tabulated in a standard form. In the assessment of the results, studies referenced in more than one guideline document were treated separately as providing different evidence in each. A purely descriptive approach was adopted for data synthesis. Sums and means were derived where appropriate. No further statistical analysis was undertaken. As this review did not include any primary research, no form of quality assessment was necessary.
Search results
The PRISMA search strategy yielded 297 documents in total from four bibliographic databases, of which 293 were not duplicates. Following screening, 178 documents were excluded as they were not a guidance or guideline. A further 90 records were excluded based on the other exclusion criteria. In total, 25 documents were included in the final review (Fig. 1), referencing a total of 43 CPRD/GPRD studies.
Study characteristics
Baseline characteristics and detailed information on the included studies, including how the evidence from these studies was used to inform the documents, are listed in Table 1. Of the guidelines identified, ten were clinical and the remainder were prescribing. Of the guidances identified, seven were Clinical Knowledge Summaries, two were prescribing, three were Technology Appraisals and the remainder were clinical. The guidelines and guidances covered 12 topics (grouped according to the BNF). The majority of the documents (N = 7) were focused on diseases of the central nervous system (CNS). The guidances covered eight topics, with the majority again focusing on diseases of the CNS. The guidelines covered seven topics, with the majority focusing on Infections, the Central nervous system and Ear, nose and oropharynx (N = 4 for each topic). The top seven disease areas are listed in Table 2. Forty-three studies using CPRD data were referenced in the 25 documents, with three studies receiving the most citations [21,22,24]. The guideline which referenced the most CPRD studies (N = 15) was 'Suspected Cancer: recognition and referral' (2015). Only three guidances referenced more than one CPRD study (N = 2 in all cases): 'Osteoporosis: assessing the risk of fragility fracture' (2012), 'Rivaroxaban for the prevention of stroke and systemic embolism in people with atrial fibrillation' (2012), and 'Sore throat - acute' (2012). The evidence used from CPRD studies can be grouped into five categories. Almost three quarters provided information on disease epidemiology (N = 32), while the remainder related to pharmacovigilance (N = 12), pharmacoepidemiological evidence (N = 5), incidence or prevalence rates (N = 3) and health utilisation (N = 1).
Main findings
The results of this study show that RWD from CPRD have not often been used to provide input for guidelines and guidances. The number of guidelines and guidances identified as referencing CPRD studies seems small considering that over 900 documents providing guidance have been published by NICE since its inception [64] and that CPRD has been used in over 2000 publications since it was established in 1987. These findings are in line with a similar study by Tricoci et al. in the USA, who conducted a review of the American College of Cardiology (ACC) and the American Heart Association (AHA) guidelines in 2009.
The study identified 16 relevant guidelines for their review, with only 11% of the recommendations being based on evidence from multiple sources rather than expert opinion or evidence from a single study [65]. However, the results of this review do show an increase in the frequency of RWD studies being used in guidelines/guidances in recent years. Despite the fact that clinical datasets and large longitudinal databases have been in existence for well over three decades, interest in RWD and large clinical databases for health research to influence guidelines and guidances is still in its infancy. Improvements in data processing, innovation in bioinformatics, the increased uptake of EHR systems, the increased demand for better and more efficient health care and the need for rapid generation of evidence are all contributing towards an increasing trend in the use of RWD by researchers, clinicians and policy makers [66].
Disease areas
Identified documents covered a variety of disease areas, demonstrating the breadth of research using RWD generated from EHRs. This is partly due to the range of data available through large healthcare databases and datasets, and to the depth of the data being enhanced by linking to other datasets. The majority of the documents focused on diseases or treatments related to the CNS. It appears that, in this area of health, the benefits of RWE are being utilised more. For example, two of the documents refer to the treatment of patients with mental illness, which correlates with research trends, as several studies have investigated the patterns, drug effects and outcomes of such patients using RWD [67].
Table 1 (excerpt): An overview of the studies included in the review.
Clinical Knowledge Summary (Central nervous system): no clear association between varenicline and an increased risk of fatal or non-fatal self-harm [27].
Guideline, 2012, Psoriasis (Clinical; Skin; 3 studies): higher risk of mortality from cardiovascular or cerebrovascular disease in severe psoriasis patients compared with an unexposed cohort [31]; higher risk of mortality from diabetes in psoriasis patients compared with an unexposed cohort, while the risk of mortality from liver disease was not significantly higher [32]; incidence of major adverse cardiac events was higher in psoriasis patients [33].
Guidance, 2012, Sore throat - acute (Clinical Knowledge Summary; Ear, nose and oropharynx; 2 studies): low benefit of using antibiotics to prevent complications from acute sore throat [34]; incidence of quinsy was low but it develops very quickly, and low doses of antibiotics were less likely to protect against quinsy [35].
Guideline, 2012, Rivaroxaban for the prevention of stroke and systemic embolism in people with atrial fibrillation (Technology appraisals; Cardiovascular system; 2 studies): prevalence of atrial fibrillation (AF) in people aged 55-64 in the UK [36]; event rates according to baseline level of stroke risk and the distribution of patients with different CHADS2 scores [37].
Guidance, 2012, Medications in recovery: re-orientating drug dependence treatment (Prescribing; Central nervous system; 1 study): increased risk of mortality in the first few weeks of prescribing opioid substitution therapy (OST); the overall mortality ratio was lower in those prescribed OST than in opioid users [38].
Guidance, 2012, Osteoporosis: assessing the risk of fragility fracture (Clinical; Endocrine system; 2 studies): included in a systematic review of 'history of falls' as a prognostic factor for the risk of fragility fracture in osteoporosis [39]; dose-effect relationship between steroid use and fracture risk [40].
Suspected Cancer: recognition and referral (15 studies): provided the positive predictive values of symptoms, to improve diagnosis, for lung, oesophageal, stomach, colorectal, bladder and renal cancer [49]; pancreatic cancer [50]; oesophageal and stomach cancer [51]; breast cancer [52]; endometrial cancer [53]; bladder cancer [54]; renal cancer [55]; myeloma [56]; bladder cancer [57]; urological cancer, brain cancer, CNS cancer, neuroblastoma, retinoblastoma and Wilms' tumour in children [58]; urological cancer, brain cancer, CNS cancer, leukaemia/lymphoma, Non-Hodgkin's lymphoma, Hodgkin's lymphoma, bone sarcoma, soft tissue sarcoma, abdominal cancer, neuroblastoma, retinoblastoma and Wilms' tumour in children and young adults [59]; brain cancer, CNS cancer, leukaemia, Non-Hodgkin's lymphoma, Hodgkin's lymphoma, bone sarcoma, soft tissue sarcoma, neuroblastoma, retinoblastoma and Wilms' tumour in children, young adults and adults [60]; brain and CNS cancer [61]; brain and CNS cancer in children and young adults [62]; and brain, lung and CNS cancer [63].
The document which cited the most CPRD studies was the guideline on Suspected Cancer, which highlighted the need for better methods of diagnosis and early detection and gave precise and more up-to-date information on how to detect over 200 cancers [16]. This reflects the increased interest in this disease area, where several studies have looked into using EHRs to 'flag' recognised diagnostic clues in a timely manner [68]. This will be of substantial benefit to cancer patients, as delays in diagnosis have been linked to poorer prognosis [69]. Evidence from studies using CPRD is significantly under-represented in conditions which are primarily treated in primary care (e.g. diabetes, obesity, asthma). This is not because there is a lack of studies on the subject. The possible reasons for this have already been discussed above and, as a result, there needs to be a review of the kind of evidence used in guidelines and guidances, for if the data is not used for the purpose for which it was created, then a potentially health-changing resource is wasted.
Studies using CPRD data were not referenced in guidelines in other disease areas at all, such as nutrition and blood, eye, and anaesthesia. For most of these disease areas, the reason is quite clear. CPRD currently has no or limited data on diseases and treatments that are mainly administered in secondary care (e.g. drugs administered through the eye) and there are no efficient centralised databases containing such information. However, the linking of datasets from varied health settings can provide a fuller picture of disease and health outcomes in the general population and therefore provide even more robust evidence [70]. A good example of this is the cardiovascular disease research using linked bespoke studies and electronic health records (CALIBER) dataset comprising of CPRD GOLD data and linked data from Hospital Episodes Statistics (HES), deprivation data, the Office for National Statistics (ONS) mortality information and the Myocardial Ischaemia National Audit Project (MINAP) [71]. This was a bespoke linked dataset to perform studies to improve the health of patients suffering from cardiovascular diseases and has been used in a number of useful studies, for example, to identify new associations for a range of risk factors in cardiovascular disease [72]. This shows that the ability to link datasets from a variety of sources provides immense opportunity to not only get a fuller picture of a patient's medical history, but also investigate the interactions and associations between different treatments/diseases in different clinical settings and possibly developing predictors of health outcomes [73]. The future Looking towards how clinical evidence can be improved, one would directly look into the organisations providing the data. As CPRD and other data providers continue to expand their linkages to other data sources and the benefits of linked data sources increase in recognition, funds should be invested in creating datasets in all sectors of health. This will enable healthcare professionals to make sound decisions based on RWD, regardless of their line of work. The Health and Social Care Information Centre (HSCIC) who manage and maintain the balance between the sharing of information for community benefit and respecting the confidentiality and wishes of patients have been key in enabling research through linked data in the UK [74]. Strengths and limitations There are several strengths to this study. Firstly, it was conducted using the gold-standard method for conducting systematic reviews [75]. The PRISMA methodology has been found to improve the completeness of systematic review and meta-analysis reporting [76]. Furthermore, an exhaustive search of multiple bibliographic databases was followed. NICE and the National Clinical Guideline Centre are the main databases for UK clinical guidelines and guidances with the majority of healthrelated organisations referencing these sites for further information or access. The study focused on identifying research which has used data from the most widely used source of RWE (CPRD) and in a country that uses medical informatics research extensively. Lastly, the authors have the relevant experience and knowledge of CPRD, the provision of healthcare in the UK and the process for conducting systematic reviews. However, limitations of the review need to be acknowledged. Firstly, this review focused only on guidelines and guidances which used evidence from studies using CPRD data. 
There are other longitudinal databases in the UK such as The Health Improvement Network (THIN) and QResearch. It also did not look into what type of data was used in each study (e.g. use of linked data). For future studies, it would be interesting to investigate whether evidence from other longitudinal databases have been used to inform guidelines/guidances. This study also focused on guidelines and guidances published for health and social care in England. Future studies could compare how RWD is used not only nationally but also internationally and identify the trends and differences that may exist. Lastly, guidelines and guidances which were currently 'under review' were not included in the review as they were liable to change once published. Therefore it would be worth conducting the review again at such a time when these guidelines/guidances become available to see how they affect the current results. Conclusions In this systematic review, we confirmed that Real World Evidence from the Clinical Practice Research Datalink has been used inconsistently but increasingly in the last decade, to inform guidelines and guidances published in the United Kingdom. The increased uptake in recent years, noted in our results, shows that this area of healthcare is changing and this review captures a phase in this transition. To capitalise on the potential value of using Real World Evidence, researchers need to ensure they undertake research of translational value to the healthcare community. Organisations which develop guidelines should also work to identify Real World Evidence sources which will give a more realistic view of how an intervention works in actual healthcare settings. Finally, key points extrapolated from our review include increasing the quality of available Real World Evidence (which will require investment on capacity, skills and accessibility) and maintaining public trust (which will be key for wider uptake). Abbreviations ACC, American college of cardiology; AHA, American heart association; BNF, British national formulary; CALIBER, cardiovascular disease research using linked bespoke studies and electronic health records; CNS, central nervous system; CPRD, Clinical Practice Research Datalink; EHR, electronic health record; HES, Hospital Episodes Statistics; HSCIC, Health and Social Care Information Centre; MINAP, myocardial ischaemia national audit project; NHS, National Health Service; NICE, National Institute for health and Care Excellence; ONS, Office for National Statistics; PRISMA, preferred reporting items for systematic reviews and meta-analyses; RCT, randomised controlled trial; RWD, real world data; RWE, real world evidence; THIN, the health improvement network
Some High-Order Convergent Iterative Procedures for Nonlinear Systems with Local Convergence : In this study, we suggested the local convergence of three iterative schemes that works for systems of nonlinear equations. In earlier results, such as from Amiri et al. (see also the works by Behl et al., Argryos et al., Chicharro et al., Cordero et al., Geum et al., Guitiérrez, Sharma, Weerakoon and Fernando, Awadeh), authors have used hypotheses on high order derivatives not appearing on these iterative procedures. Therefore, these methods have a restricted area of applicability. The main difference of our study to earlier studies is that we adopt only the first order derivative in the convergence order (which only appears on the proposed iterative procedure). No work has been proposed on computable error distances and uniqueness in the aforementioned studies given on R k . We also address these problems too. Moreover, by using Banach space, the applicability of iterative procedures is extended even further. We have examined the convergence criteria on several real life problems along with a counter problem that completes this study. Introduction The most common and difficult problem in the field of computational mathematics is to obtain the solutions of where F : Ω ⊂ B 1 → B 2 a Fréchet-differentiable, B 1 and B 2 Banach domains, Ω, a nonempty convex. It is hard to obtain the exact solution in analytic form for such problems or, in simple words, it is almost fictitious. This is one of main reasons that we must obtain an approximated and efficient solution up to any specific degree of accuracy by means of an iterative procedure. Therefore, researchers have been putting great effort into developing new iterative methods over the past few decades. In addition, the accuracy of a solution is also dependent on several facts, some of them are: the choice of iterative method, initial approximation/s and structure of the considered problem with software such as Maple, Fortran, MATLAB, Mathematica, and so forth. Further, the people who used these iterative schemes faced several issues, some of which include: choice of starting point, derivative being zero about the root (in the case of derivative free multi-point schemes), difficulty near the initial point, slower convergence, divergence, convergence to an undesired solution, oscillation, failure of the iterative method, and so forth (for further information, please see [1][2][3][4][5]). A radius of convergence r shall be shown to be Notice that 0 ≤ ψ 0 (θ) < 1, and for all θ ∈ [0, r). LetS(a, b) stand for the closure of S(a, b) a with center a ∈ Ω and of radius b > 0. The conditions (B) are used in the local convergence analysis of iterative procedure (2) provided the "ψ" functions are as given previously. Assume: Set Ω 2 = Ω ∩S(x * ,r). Next, we develop the analysis of iterative procedure (2) by the preceding notation and conditions (B). Theorem 1. Under the conditions (B) forr = r, further suppose that x 0 ∈ S(x * , r) − {x * }. Then, sequence {x σ } generated by iterative scheme (2) is well defined, remains in S(x * , r) for all σ = 0, 1, 2, 3, . . . and converges to x * . Moreover, the following assertions hold and where the "G i " functions are given previously and r is defined by (9). Furthermore, x * is the only solution of equation F(x) = 0 given in Ω 2 by (B 6 ). Proof. Sequence {x σ } shall be shown to be well defined, to remain in S(x * , r) and to converge to x * using mathematical induction. 
In order to achieve this, we shall also show estimates (14)- (16). Let us assume that x ∈ S(x * , r) − {x * }. Using B 2 , (8) and (9), we have The Banach perturbation lemma on inversible operators [6], together with estimation (16), ensure: the existence of F (x) −1 The induction for assertions (14)- (16) is terminated by simply substituting x σ , y σ , z σ and x σ+1 by x σ+1 , y σ+1 , z σ+1 and x σ+2 , respectively in the preceding calculations. It follows by the estimation Secondly, we study iterative procedure (3) in an analogous way. There will be no change in the function G 1 . However, we must re-define the functions G 2 and G 3 in the following way withḠ 1 = G 1 : respectively. Then, we arrive at the following theorem with these changes: Under the conditions (B) forr =r, further suppose that x 0 ∈ S(x * ,r) − {x * }. Then, sequence {x σ } generated by iterative scheme (3) is well defined, remains in S(x * ,r) for all σ = 0, 1, 2, 3, . . . and converges to x * . Moreover, the following assertions hold and where the "Ḡ i " functions are given previously. Furthermore, x * is the only solution of equation F(x) = 0 given in Ω 2 by (B 6 ). Proof. By simply repeating the proof of Theorem 1 but using iterative procedure (3) instead of method (2), we get the estimates The proof of uniqueness of the solution is given in Theorem 1. Next, in order to study the local convergence of iterative procedure (3), we add condition (B ) in (B) as follows: Again, there are no changes in the function G 1 . But, we have to re-define the functions G 2 andḠ 3 in the following way forḠ 1Ḡ1 : We define the radius of convergence for method (4) in the following way: wherer 4 is the smallest positive solution of the equation With these new functions, we arrive at the following theorem: Under the conditions (B ) forr =r, further suppose that x 0 ∈ S(x * ,r) − {x * }. Then, sequence {x σ } generated by iterative scheme (4) is well defined, remains in S(x * ,r) for all σ = 0, 1, 2, 3, . . . and converges to x * . Moreover, the following assertions hold and where the "Ḡ i " functions are given previously. Furthermore, x * is the only solution of equation F(x) = 0 given in Ω 2 by (B 6 ). Proof. By simply repeating the proof of Theorem 1 but using iterative procedure (4) instead of method (2), we get the estimates The proof of uniqueness of the solution is given in Theorem 1. Numerical Examples Here, we present the computational results based on the suggested theoretical results in this paper. We also compare the results of iterative procedures (2)- (4) with on the basis of radii of convergence. By the proceeding definition of H(θ), we choose for method (4). This way, hypothesis (B ) is satisfied. We use [x, y; F] = 1 0 F y + µ(x − y) dµ. We choose a well mixture of standard and applied science problems for the computational results, which are illustrated in Examples 1-5. The results are listed in Tables 1-5. Additionally, we obtain the COC approximated by means of or ACOC [19] by: In addition, we adopt = 10 −100 as the error tolerance and the terminating criteria to solve nonlinear system or scalar equations are: (i) x σ+1 − x σ < , and (ii) F(x σ ) < . The computations are performed with the package Mathematica 11 with multiple precision arithmetic. In Table 1, we present radii for example (1). It is straightforward to say that method (2) is better than other mentioned methods because it has larger radius of convergence. Example 2. Let B 1 = B 2 = R 3 and Ω = S(0, 1). 
Assume F on Ω with v = (x, y, z)^T, where u = (u1, u2, u3)^T. Then we obtain the convergence radii reported in Table 2. On the basis of this table, method (2) has a larger radius of convergence than the other methods mentioned, so we conclude that it is better than methods (3) and (4). Since method (2) has a larger radius of convergence than methods (3) and (4), it has a wider domain for the choice of starting points; in other words, method (2) admits more convergent starting points than methods (3) and (4). We also notice from the table that method (2) offers better choices of starting points than methods (3) and (4), because methods (3) and (4) have a smaller domain of convergence in contrast to method (2). It is therefore straightforward to say, on the basis of the table, that method (2) has a larger domain of convergence than methods (3) and (4). We provide the radii of convergence for Example 3 in Table 3, for Example 4 in Table 4, and for Example 5 in Table 5. Remark 1. We have noticed that, in all five examples, method (2) has a bigger radius of convergence than all the other methods mentioned. So, we conclude that method (2) is better than methods (3) and (4) in terms of convergent starting points and domain of convergence.
Conclusions
A comparative study was presented for three high-convergence-order methods utilizing only the first derivative (and the divided difference of order one), which are the only operators appearing in these methods. Our analysis generated error bounds and results on the uniqueness of x*, which can be computed using majorant functions. In earlier studies, however, these concerns were not addressed, and the procedures required operators with derivatives up to the ninth order that do not appear in these methods. Our technique is general and can be extended to other procedures. In our numerical experiments, a comparison is given between the convergence radii.
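The approximated computational order of convergence (ACOC) and the stopping criteria used in the numerical experiments can be illustrated with a short sketch. The example below is not from the paper: it uses Newton's method as a generic stand-in for schemes (2)-(4), an invented two-dimensional system F, and a much looser tolerance than the 10^-100 adopted above, since double precision cannot reach that accuracy (the paper's computations used multiple-precision arithmetic in Mathematica).

import numpy as np

def newton(F, J, x0, tol=1e-8, max_iter=50):
    # Generic Newton iteration; returns the list of iterates.
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(max_iter):
        x = xs[-1]
        step = np.linalg.solve(J(x), F(x))
        x_new = x - step
        xs.append(x_new)
        # Stopping criteria of the form ||x_{k+1} - x_k|| < tol and ||F(x_{k+1})|| < tol.
        if np.linalg.norm(x_new - x) < tol and np.linalg.norm(F(x_new)) < tol:
            break
    return xs

def acoc(xs):
    # ACOC from ratios of successive increments ||x_{k+1} - x_k||; needs at least 4 iterates.
    e = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    return np.log(e[-1] / e[-2]) / np.log(e[-2] / e[-3])

# Toy system (not from the paper): F(x, y) = (x**2 + y**2 - 4, x*y - 1).
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [v[1], v[0]]])

iterates = newton(F, J, x0=[2.0, 0.5])
print("solution ~", iterates[-1])
print("ACOC ~", acoc(iterates))  # close to 2 for Newton's method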
Geriatric Traumatic Brain Injury: Relationship to Dementia and Neurodegenerative Disease The author explores a hypothesis that the elderly are vulnerable to traumatic brain injury (TBI) precipitating or accelerating dementia or neurodegenerative diseases due to intracellular changes in physiological resilience. Bearing in mind the global human experience, biologic resilience is vulnerable to a vast number of human created and natural conditions. Respectfully, this paper focuses on the role of certain cultural biases, aging, genetic factors, synaptic efficacy, and intracellular enzymatic balance. Clinical vigilance enables diagnosis and treatment leading to improved quality of life. Introduction The USA Centers for Disease Control and Prevention (CDC) has defined traumatic brain injury (TBI) as injury caused by a bump or blow to the head or by a jolt of the head. The severity of a TBI may range from "mild" (i.e. a brief change in mental status or consciousness) to "severe" (i.e., an extended period of unconsciousness or amnesia after injury) [1]. The CDC has collated data about incidence and cause of TBI from 2006 through 2010 [1]. These data reveal that falls are the most common cause of TBI among infants and children aged to 4 years and among adults aged 65 years or older. The statistically significant number of TBI-related events among the geriatric population can prompt a sharpening of diagnostic questions about short-and long-term consequences, including neurodegenerative disease. There are unnecessary clinical obstacles to identifying the consequences of a TBI. Some clinicians mistakenly conclude that pathology from TBI is ruled out in the context of either brief loss or no loss of consciousness combined with benign results from a computed tomography (CT) scan of brain. Nuances of pathophysiology are not discernable with CT. "TBI is referred to as the invisible epidemic. These disabilities, arising from cognitive, emotional, sensory, and motor impairments, often permanently alter a person's vocational aspirations and have profound effects on social and family relationships" [1]. Certain behaviors of injured individuals may encourage others to ignore the signs and symptoms of a TBI. For example, the youthful individual may want to preserve an illusion of invincibility; the athlete may want to reenter the contest; and an elderly person may want to ward off fears of impermanence. A TBI challenges the resilience of impacted tissue. Areas of the brain vary in susceptibility to different types of trauma [2]. Whether partial or full recovery develops or deterioration becomes progressive depends upon numerous variables. The author (ET) hypothesizes that TBI results from metabolic stress imposed on intracellular mitochondria. Because of advancing age, the geriatric patient experiences losses in physiological adaptive response and physiologic reserve, as well as elevated vulnerability to the onset of neurodegenerative disease and dementia. To explore this hypothesis, ET provides a brief review of intracellular response to metabolic stress. Biologic Resilience: Genetic Factors, Synaptic Efficacy and Intracellular Enzymatic Balance An individual's biologic resilience has genetic roots expressed by nuclear and mitochondrial DNA (mtDNA). Mitochondria are the primary energy sources of the cell, and mtDNA is uniquely inherited from the mother. 
It is theorized that mtDNA originated evolutionarily as a product of engulfed, circular, haploid genetic material, mostly of bacterial ancestry-a very different origin than the diploid nuclear DNA. There is evidence that specific haploids may determine the outcome of physiological stress by providing a protective "physiologic reserve" [3]. Over the course of hundreds of thousands of years of evolution, the genetic lineage of mitochondrial survivors presumably came to contain the most adaptive haploids, in reference to mtDNA. All human cellular functions require energy generated mostly through mitochondrial oxidative phosphorylation, which creates adenosine triphosphate (ATP) from adenosine diphosphate (ADP). ATP is generated through the respiratory process and an electron transport chain, with ADP being converted to ATP through oxidative phosphorylation within the mtDNA. The electron transport chain's last electron acceptor is molecular oxygen (O 2 ), creating superoxide (O 2 -) that is metabolized to hydroxyl (-OH) and hydrogen peroxide (H 2 O 2 ). If certain metals bind with superoxide, the result is an aggressive oxidation. Reactive oxidative species (ROS) are free radicals (molecules with one unpaired electron). In its lowest energy state, O 2 has two unpaired electrons with similar spin orientation in its outer electron orbit. If one of the unpaired electrons is excited, the electron orbit changes so that the O 2 becomes destabilized [4]. Excessive ROS potentially oxidizes intracellular proteins, lipids, mtDNA, and nuclear DNA, opening the mitochondrial permeability transition pore and causing mitochondrial swelling and release of apoptotic effectors. The capacity for modulation of enzymes is crucial for cell survival, with potentially helpful molecules, under different circumstances, becoming harmful. For example, although ROS can be destructive through oxidation, lower concentrations of ROS may function as an intracellular signal transduction regulator [5,6]. Mitochondria are not fixed in location within the cell. Rather, the energy needs of the intracellular microenvironments determine the location of the mitochondria. The endoplasmic reticulum forms a network of tubules and cisternae throughout the cytoplasm, with an inhomogeneous distribution of calcium uptake and release sites. Cytoplasmic signals converge in the endoplasmic reticulum to form spatiotemporally controlled patterns of calcium release [7]. In brain cells, the endoplasmic reticulum has two release channels for calcium ions: inositol 1,4,5-triphosphate receptor and ryanodine receptor type 3. Mitochondria linked to the inositol 1,4,5-triphosphate receptor and ryanodine receptor channels control local calcium release and cytoplasmic calcium regulation [8]. Essentially, each cell has multiple microenvironments. Failure of one environment may precipitate a cascade of metabolic changes that alter cellular morphology and synaptic efficacy. Mitochondria are involved in numerous metabolic activities, including the urea cycle, lipid metabolism, porphyrin synthesis, and homeostasis of steroid hormones. Calcium ions stimulate oxidative phosphorylation, upregulate the creation of ATP, and affect the metabolism of other molecules. Calcium ions enter the mitochondria via a regulated outer and inner mitochondrial membrane process. The efflux of calcium ions is dependent on sodium exchange. 
Glutamate stimulates plasma membrane influx of calcium ions, and glutamate synapses are tripartite, including the presynaptic neuron, postsynaptic neuron, and glial cell. There is evidence that the overstimulation of Nmethyl-D-aspartate receptors by glutamate may cause excessive calcium ion influx, which compromises mitochondrial membrane polarity. Glutamate overstimulation may increase ROS oxygen species, leading to cellular compromise [9]. Among the enzymes that regulate the intracellular microenvironments are the two isoenzymes monoamine oxidase A and B (MAO-A and MAO-B) and superoxide dismutase (SOD). Enzymes are catalysts that lower the threshold of energy needed to enable a reaction. Through oxidation, the MAO enzymes deaminate biogenic amines, including neurotransmitters. The function of MAO, which is attached to the outer membrane of the mitochondrion, is influenced by its immediate microenvironmental pH level, ion concentration, and heat [10]. SOD is an enzyme that partitions the unstable oxygen molecule (O 2 -) to produce hydrogen peroxide (H 2 O 2 ) and/or molecular oxygen (O 2 ). SOD protects the cell against oxidative stress. The intracellular stress of TBI requires increased production of ATP. Aging mitochondria have an impaired rate of electron transfer, especially in regard to respiratory complex I and complex IV [11]. As people age, the concentration of MAO-A does not change, but the concentration of MAO-B increases [4] and the concentration of SOD diminishes. The increase of MAO-B enables a corresponding increase of ROS through the deamination of biogenic amines. The lower concentration of SOD creates less ability to partition superoxide. Diminished Biological Reserves As a result of the oxidative stress that occurs after TBI, the most effected and/or vulnerable areas of brain will manifest signs and symptoms of disease. This process is similar to that of depression, in which areas of the brain vulnerable to oxidative stress predict presentation of mood disorder [4]. The resultant metabolic trauma of TBI may not remain stable [2]. The International Classification of Disease-10 defines 'Mild Cognitive Disorder' , F06.7, (comparable with Mild Cognitive Impairment) as presenting with diverse cognitive impairments, differentiated from postencephalitic syndrome (F07.1) and postconcussional syndrome (F07.2) by etiology, usually 'milder symptoms' and 'shorter duration' . This nosology is confusing. Shorter duration assumes significant or complete recovery. If mild is diagnostically differentiated from dementia or delirium by severity, mild becomes a misnomer that might represent a very disabling condition. Repetitive or continuing trauma is associated with a pattern of initial mild cognitive disorder followed by increased risk of early onset of dementia, even in the absence of further brain insult. Patients with genetic neurodegenerative disease may be at risk for an early onset of the diseases induced by TBI, such as Parkinson disease, Lewy body dementia, amyotrophic lateral sclerosis, and Huntington disease. These diseases are synucleinopathies, one or more of which develop when the protein α-synuclein is mutated. If the pathology affects the substantia nigra, the presentation is Parkinson disease. In this disease, the oxidative stress detaches α-synuclein from the cell membrane, and the synuclein proteins begin to intertwine, forming inclusion bodies. The inclusion bodies may contain tau protein and synuclein within the same cell. 
Tau proteins help stabilize microtubules involved in intracellular transportation. There are numerous tauopathies affecting different areas of the brain and different neural cells, and there is a delicate balance between intracellular and extracellular tau protein. The pathological development of tangles, which occurs as proteins coalesce, leads to cellular death. Tauopathy does not only affect the neurons, but "…overexpression of human tau in glial cells results in the formation of hyperphosphorylated and aggregated tau moieties. " [12]. Future Options Because of the suggested co-relationship between TBI and neurodegenerative diseases, advances in diagnosing and treating the early phases of neurodegenerative disease may permit interventions to mitigate its evolution. One example of progress is the documentation of a relationship between the abnormal presence of deposits of amyloid and Alzheimer Disease. The most extensively studied and perhaps best validated tracer has been 11carbon-labelled Pittsburgh Compound-B (11C-PIB) [13]. Rabinovici et al. demonstrated a correlation between 11C-PIB imaging and Mini Mental State Examination scores [13]. A visual evaluation of the FDG-PET can reveal pathologic metabolic changes in patients with TBI. The medical value of FDG-PET is vulnerable to technique and the human factor. Misreads of FDG-PET are due to a variety of factors such as: the reader assumes the test is performed to rule out metastatic cancer; glucose levels at the time of the testing were compromised; a lack of established protocol to specifically document "mild cognitive disorder" (mild cognitive impairment). ranged well over two standard deviations from the examinee's mean, the significance of the scattering is debated by some neuropsychologists, especially in a forensic setting. The initial use of FDG-PET before neuropsychological testing can enable the evaluator to provide the necessary specific psychological testing that would further elucidate abnormal metabolic findings on FDG-PET. Treatment approaches can be significantly affected if for example FDG-PET shows basal ganglia pathology, which would question the use of dopamine blocking agents. FDG-PET may suggest non-motoric seizure disorder. This author has evaluated several patients with FDG-PET and discovered abnormal findings of seizure. Upon inquiry the patients admitted to hearing voices, without the presence of delusions or psychosis, or struggling with disturbing smells. In the incidence of olfactory hallucinosis, one patient was placed on antiepileptic medication that significantly ameliorated symptomatology. FDG-PET can monitor progress or deterioration. Summary The author has outlined why the elderly are vulnerable to a TBI: • Increased incidence of falls • Response to stress requires increase energy production • Oxidative phosphorylation of ADP creates ATP and superoxide • Age related increased concentration of MAO-B • Age related diminished concentration of SOD • Aging mitochondria slow the electron transport system to create ATP. Overall, the intracellular microenvironments can become compromised, causing oxidations of essential biologically active molecules the altered metabolism of proteins, especially the tau and synuclein groups of proteins. Impairment of a specific tissue provides an explanation for the varied signs and symptoms of dementia or various mood and cognitive disorders. Each neural cell is part of a chain of interacting cells. 
The disruption of cellular morphology and function without adequate physiologic reserve leads to a deterioration of vulnerable tissues. A physical insult-such as sudden acceleration-deceleration or inhalation of sulfur, carbon monoxide, mercury, or combustible smoke-may impact one brain area more than another. Brain tissue differences in degree of recovery or progressive deterioration based not necessarily on connectivity of brain areas (such as the hippocampus to the parahippocampus) but rather on the vulnerabilities of specific brain areas to a given insult [2]. Thus, some damaged cellular areas will progressively deteriorate, while other damaged areas may remain stable or partially reconstitute. Regardless of whether this author's hypothesis eventually proves incorrect or too simplistic, this discussion will hopefully encourage clinicians to perform a special evaluation of the elderly to separate what might be labeled the "natural aging process" from a more debilitating disease. The high prevalence of TBI in the elderly should also encourage a longitudinal evaluation for the emergence of mild cognitive disorder, neurodegenerative disease, and dementia. Although there are no definitive treatments for these conditions, physicians should not feel disheartened. Functional imaging may provide improved insights into the pathological mechanisms, which will aid in treatment decisions. For example, a basal ganglia pathology would not be conducive to dopamine-blocking agents. Other pharmacologic approaches might include acetylcholinesterase inhibitors, partial glutamate agonists, direct stimulants, and various antidepressants, especially MAO-B inhibitors. A systematic therapeutic approach to environmental factors as well as to psychopharmacology could best enhance quality of life.
Students ’ Proficiency and Textual Computer Gloss Use in Facilitating Vocabulary Knowledge Learning vocabulary forms a major part for any language learner. Apart from direct teaching of vocabulary, language teachers are always searching for ways to increase their students’ vocabulary to enable them to use the language more effectively. Therefore, this study sets out to investigate whether the use of computer textual glosses can aid vocabulary development. With a sample of 99 English as second language students, this study examines whether a computer-aided textual glosses embedded in a narrative text is able to aid students in developing their vocabulary knowledge. Using ANOVA and descriptive statistics, it was found out that students with different language proficiency levels used the gloss in a similar pattern. The similarity was that there were gains after immediate use of the glosses but the gains were not maintained over time. High proficiency students made the most gains followed by mid and low proficiency students. What can be learnt from this study is that computer textual glosses can be used to develop students’ vocabulary knowledge in the short term. However, this should be supplemented with other vocabulary teaching/learning activities for more robust vocabulary knowledge development. The implications of measuring vocabulary knowledge by using vocabulary tests in the study could have resulted in the students having more gain in productive vocabulary knowledge compared to receptive vocabulary. Introduction A language student faces many challenges in his or her quest to learn another language.Among the challenges are learning the structures and vocabulary of the language.These are needed by them to allow students to use and comprehend information in the language that they are learning.The incomprehensibility of information may be attributed to many factors such difficulty of text, lack of background knowledge and linguistic constraints such as lack of vocabulary.The lack of vocabulary can be a major obstacle for second language learners as researchers such as Grabe (1991), Haynes and Baker (1993), Laufer (1997), Read (2004) have found out that vocabulary is the main factor which can impede or enhance comprehension.Read (2004) writes that second language learners are aware that limitations in their vocabulary hinders them in their communication in the target language as "lexical items carry the basic information load of the meanings they wish to comprehend and express " (p. 146).This is the reason why Read justifies that vocabulary is important for students to acquire as compared to other features of the language.Therefore, it is important and useful to find ways to effectively develop students' vocabulary knowledge.It can complement teaching vocabulary as teaching is time consuming and as Parry (1993) notes that "it is simply not economic to spend precious minutes on items whose chances of reoccurrence may be low" (p.2). 
With that as a background, this study examines whether gloss use is able to help students develop their vocabulary knowledge. It has been documented that gloss use is beneficial for vocabulary learning. Nation (2001) describes three main reasons why glosses are useful to learners. His first reason is that learners are able to handle difficult texts, as a glossary of the unfamiliar words is provided. Secondly, with the aid of glosses, learners will not guess the meanings of the words wrongly, as the correct meanings are given. Thirdly, he affirms that glosses do not intrude into the learners' reading, whereas using a dictionary may interrupt the reading process. The purpose of the study is to examine how students with different proficiency levels use a computer-aided textual gloss and to find out what kind of gains they obtained in their vocabulary knowledge. Gloss use in this study is defined as the clicking done on the words which were glossed. This allowed the students to consult the glosses, which contained the meanings of the unfamiliar words. The glosses created were textual ones, at both word and sentence levels, in the students' L1, which is Bahasa Melayu (BM), and in English. This study did not incorporate multimedia glosses, as textual glosses are more straightforward and simpler to design than multimedia glosses. Proficiency levels of the students were determined by the grade they had obtained in English in a national level examination, Sijil Pelajaran Malaysia (SPM). The grades were then categorised as low, mid and high levels of proficiency. Vocabulary knowledge is operationalised as knowing the meaning of a word productively and receptively. In other words, students should know the meaning of the words as well as be able to produce the right form of the words. This knowledge is measured in the study by specifically designed productive and receptive vocabulary tests. To sum up, the independent variables in the study are proficiency and gloss use, while the dependent variable is vocabulary knowledge. Thus, the study is designed to seek answers to the following research questions:
1) How do students of different levels of proficiency use computer textual glosses?
2) How does this use affect their vocabulary knowledge?
3) What types of gains are made in each type of vocabulary knowledge?
a) Will the knowledge be maintained over time?
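As the abstract notes, ANOVA and descriptive statistics were used to compare the three proficiency groups, and research question 2 asks how gloss use affects vocabulary knowledge across those groups. The sketch below is purely illustrative: the gain scores are invented placeholders, not the study's data, and it simply shows the form such a one-way analysis could take.

import numpy as np
from scipy import stats

# Hypothetical gain scores (post-test minus pre-test) on the productive
# vocabulary test for the three proficiency groups; invented placeholders.
low_gain = np.array([2, 1, 3, 2, 0, 1, 2, 3, 1, 2])
mid_gain = np.array([3, 4, 2, 5, 3, 4, 3, 2, 4, 3])
high_gain = np.array([5, 6, 4, 7, 5, 6, 5, 4, 6, 5])

# One-way ANOVA testing whether mean gain differs across proficiency groups.
f_stat, p_value = stats.f_oneway(low_gain, mid_gain, high_gain)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Descriptive statistics per group, as reported alongside the ANOVA.
for name, group in [("low", low_gain), ("mid", mid_gain), ("high", high_gain)]:
    print(f"{name}: mean = {group.mean():.2f}, sd = {group.std(ddof=1):.2f}")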
Proficiency and Gloss Use

From the studies on gloss use which have been reviewed, it is revealed that some researchers have pointed out that the proficiency level of the learners does play a role in their look-up behaviour. For example, Ercetin (2001) states that "second language learners interact with text differently based on their proficiency level and prior knowledge" (p. 70). Jacobs, Dufon, and Hong (1994) also found that the effectiveness of glosses varies according to the learners' language proficiency. They argue that glosses have a different impact on learners with different proficiency levels in the L2. In terms of vocabulary learning, they found that higher proficiency learners recalled more vocabulary if they had access to glossed words. On the other hand, Li (2010), who examined 20 Chinese ESL learners' vocabulary retention after they had read a text with and without access to computer-mediated English monolingual and English-Chinese bilingual dictionaries, revealed that both computer-mediated glosses and bilingual dictionaries were effective for learners with lower proficiency levels. Yoshii and Flaitz (2002) also considered the proficiency level of learners who were at beginning and intermediate stages. They found that there were no significant differences between the two levels in the rate of change between the immediate and delayed test scores. Knight (1994), Yoshii (2006), and Abraham (2008) analyzed gloss use behaviour and concluded that the language ability of the learners affects the way they use the gloss, that is, it has a role in the learners' look-up behaviour. Their interpretations of gloss use have implicated the language ability of learners, but these researchers have not delved deeper into the issue. Providing further evidence of the effect of proficiency on gloss use, Yun (2011), who carried out a meta-analysis of 10 studies on gloss use in reading comprehension and vocabulary learning, revealed that the variable of learner proficiency was statistically significant, as glosses made an impact on L2 vocabulary acquisition for beginning learners.

Vocabulary Knowledge

Defining vocabulary knowledge is complicated, as researchers have defined the construct in different ways. Nation (2001), for instance, writes that vocabulary knowledge can be defined as the sum of interrelated subknowledges which involves knowing a word's "form, meaning and use" (p. 26). From these three perspectives, knowledge of a word expands to knowing how the word is spoken and written, its form and meaning, concept and referents, its associations, grammatical functions, and collocations, as well as its constraints on use. Furthermore, vocabulary knowledge is looked upon as a continuum comprising several levels. The first level is considered by researchers such as Faerch, Haastrup, and Phillipson (1984) and Palmberg (1987) to be superficial familiarity with the words. At the next level is one of the most common distinctions of vocabulary knowledge, which is "receptive and productive knowledge" (Schmitt, 2010, p. 80).
Nation (2001) elaborates that receptive knowledge is needed to deal with words in listening and reading. On the other hand, productive knowledge is called active knowledge, and it is needed to use words in speaking and writing. Schmitt (2002) writes that for most first language vocabulary learning, a large part of the input comes from listening and reading. The same can also be applied to second language learners, where Read (2004) states that second language learners acknowledge the importance of vocabulary in the target language. Furthermore, second language learners need to be exposed to different modalities, such as written or spoken input, in order to acquire new vocabulary. Two different perspectives on vocabulary learning are discussed next.

Noticing, Retrieval and Generation

From Nation's (2001) broad research on vocabulary learning, he suggests that there are three stages required for vocabulary acquisition. These are noticing, retrieval and generation. The first is noticing, where the learner notices that there is a word with which he or she is unfamiliar. The second is retrieval, where there is a possibility of remembering words so that they can be retrieved by the learner, and the third is generating the vocabulary retrieved. Nation makes two distinctions of retrieval: receptive retrieval and productive retrieval. Receptive retrieval is when learners perceive the form and retrieve the meaning, while the opposite is true for productive retrieval, that is, learners have the meaning and retrieve the form. Nation puts across that both types of retrieval are important; however, he feels that productive retrieval is better for vocabulary learning. The final concept is generation, where learners are able to use the words learnt in a different grammatical form, in different contexts or with a new meaning (Alum, 2004).

Involvement Load Hypothesis

Another vocabulary learning perspective which can also be applied to the interaction between the computer and the learner is provided by Laufer and Hulstijn (2001). They have termed it the Involvement Load Hypothesis, which posits that tasks which induce higher involvement from the learner lead to better vocabulary learning and retention. It works on the premise of the learner's level of processing, that is, when the learner processes the vocabulary more, the likelihood of the vocabulary being learnt and retained increases. Laufer and Hulstijn (2001) suggest that the involvement in vocabulary learning has three components: Need, Search and Evaluation. Need is the requirement to understand a linguistic feature in order to perform a task, for example, the need for the meaning of a word in a reading comprehension task. Search is to look for the meaning, which could be looking up the word in a dictionary or even using a gloss. Finally, Evaluation is to evaluate whether the word can be suitably used in certain contexts. This Evaluation stage is a cognitive type of interaction, one which Chapelle (2003) terms "intra", that is, within the learner's mind. The involvement load is triggered when the learner first notices the lexical item and then spends time interacting with it, thereby increasing engagement with the item. According to Schmitt (2010), these are among the factors that can facilitate vocabulary learning. He adds that "the more a learner engages with a new word, the more likely he/she is to learn it" (p. 26).
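Laufer and Hulstijn's hypothesis is often made concrete by assigning each component a small numeric score and summing them into an involvement index. The sketch below is a minimal illustration of that idea only; the 0-2 scoring convention and the particular values assigned to the gloss-reading task are illustrative assumptions, not figures reported in this study.

```python
# Illustrative sketch of the Involvement Load Hypothesis index (Laufer & Hulstijn, 2001).
# One common operationalisation scores Need, Search and Evaluation from 0 (absent)
# to 2 (strong) and sums them; the component scores assigned below to the
# gloss-reading task are assumptions for illustration, not values from this study.

def involvement_load(need: int, search: int, evaluation: int) -> int:
    """Return the involvement index of a task (higher = deeper processing)."""
    for name, score in {"need": need, "search": search, "evaluation": evaluation}.items():
        if score not in (0, 1, 2):
            raise ValueError(f"{name} must be 0, 1 or 2, got {score}")
    return need + search + evaluation

# Reading with clickable glosses: the task imposes a moderate need, the click is
# the search, and little evaluation of the word in new contexts is required.
gloss_reading = involvement_load(need=1, search=1, evaluation=0)

# A hypothetical "pushed output" task (writing original sentences with the target
# words) would induce a higher load.
sentence_writing = involvement_load(need=1, search=1, evaluation=2)

print(gloss_reading, sentence_writing)  # 2 4
```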
Measuring Vocabulary

It can be seen that research on vocabulary is a complex matter, whether in defining the vocabulary construct or in pinning down a theory of vocabulary learning. This complexity continues when it comes to measuring vocabulary knowledge. It is not easy to measure vocabulary (Nation, 2001). This claim by Nation is aptly described by Schmitt (2010), who states that:

It is virtually impossible to measure all the word-knowledge aspects for words for at least three reasons. The first is that many of the word knowledge aspects do not have accepted methods of measurement. A second reason has to do with time…a test battery for a word would be extremely unwieldy and time consuming. A third reason is related to the difficulty of controlling for cross-test effects. (p. 79)

Nevertheless, efforts to provide justifications for the types of tests used, and to explicitly state what they measure and to what extent, have to be in place. A good starting point would be to define the vocabulary construct, as explained in the earlier section. The operationalization of the vocabulary construct is to make a distinction between the types of vocabulary knowledge tested. As there can be different types of vocabulary knowledge, there is a need for multiple tests to measure varied aspects of vocabulary knowledge. Another reason for using multiple tests is that a single test would not be able to measure every aspect of word knowledge (Milton, 2009). Different types of tests are developed to capture types of vocabulary knowledge of learners with varying degrees of sensitivity. Thus, in this study two separate tests were designed to capture the students' productive and receptive vocabulary knowledge.

Participants

One hundred and seventeen students took part in the initial stage of this research. They were diploma students from 4 intact classes of a public university in Malaysia. They were in the first semester of their studies, and their ages ranged from 18 to 20 years old. These tertiary English as a second language (ESL) students were chosen because knowledge of word meanings and the ability to access that knowledge efficiently are recognized as important factors in reading and listening comprehension, especially as learners progress to higher or tertiary level (Chall, 1983). The final number of students who fully completed the research process was 99, as some students did not complete the delayed test and some students' interaction data with the glosses were not recorded by the tracking device in the computers.

Materials

A short narrative text titled "A Scary Night" was adopted from a study by Yoshii (2006) and uploaded on the internet. In the text, vocabulary items that were unfamiliar to the students were glossed. The vocabulary items to be glossed were identified in two ways. One was for the researchers and their two colleagues to identify the words which they thought the students might need help with in finding out the meaning. Secondly, a group of students with the same background as those in the study were given the text in pencil-and-paper form and asked to circle the words that they found difficult and did not understand. Thirteen words were finally glossed in BM and English. The glosses created were textual ones at word and sentence levels. The word glosses provided the definitional meaning of the words, while the sentence glosses provided meaning in contextualised form.
To measure the vocabulary knowledge, two sets of tests were designed. To test receptive vocabulary knowledge, a multiple choice type of test was set. In this test, the students had to choose the correct definition of the meaning of the word. For productive vocabulary knowledge, students had to fill in the correct word in the blank of each sentence. In each blank, the initial letter(s) of the word was/were provided to help the learners supply the targeted words instead of using other words. Both tests contained 13 questions corresponding to the 13 glossed words in the text.

Research Procedure

The research was carried out following a set procedure. Firstly, the students were asked to state their SPM English grade before the experiment. From this, the students were stratified according to their grades, which were then classified into three categories: low, mid and high proficiency levels. The next step was for the students to go to the language laboratories, where they accessed the website which contained the story. They were told to read the story and that they could click on the words which were glossed (shown in a different colour from the rest of the text). When the words were clicked, the meanings of the words were given either in BM or English, at word or sentence level.

After reading the text, the students were given both the productive and receptive vocabulary tests. These tests formed the immediate test. After three weeks, the students were given the same set of vocabulary tests. For this delayed test, the items were not in the same sequence as in the immediate test. This was done to avoid test effects from the earlier immediate test.

Results

The data obtained from the study were analysed using ANOVA and straightforward descriptive statistics. Firstly, data on how the students interacted with or used the glosses were analyzed. Secondly, data were examined according to proficiency level and type of vocabulary knowledge.

Gloss Use of Students with Different Proficiency Levels

A one-way analysis of variance (ANOVA) was conducted to assess whether there were differences in the total number of clicks, indicating gloss use, among the students with different proficiency levels. The dependent variable in each ANOVA was the total number of clicks and the independent variable was proficiency level. If significant results were found, Tukey's post-hoc multiple comparisons were further computed to determine where the differences between the proficiency levels lay.

Preliminary assumption testing was conducted to check for normality and homogeneity of variance of the dependent variable. The results showed no serious departure from the two assumptions for all proficiency levels. Table 1 indicates the mean and standard deviation of the number of clicks at the low, mid and high proficiency levels. The results of the one-way ANOVA are presented in Table 2. This indicates that the gloss use of the students in the different proficiency levels was almost the same.

Proficiency Levels and Type of Vocabulary Knowledge

To investigate these variables, the students' mean test scores were examined by proficiency level and by type of vocabulary knowledge, that is, productive and receptive vocabulary knowledge. Table 3 shows the mean scores across the immediate test and delayed post-test at different proficiency levels for productive vocabulary knowledge. Information from the table suggests that the high proficiency group gained more than the two other groups.
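Before turning to the test scores in detail, the gloss-use comparison described above can be made concrete with a minimal analysis sketch. The click counts generated below are invented for illustration; the study's actual figures are those summarised in Tables 1 and 2.

```python
# Minimal sketch of the gloss-use analysis described above: a one-way ANOVA on
# total gloss clicks by proficiency level, followed by Tukey's HSD only if the
# omnibus test is significant. The click counts are invented for illustration.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
clicks = {
    "low": rng.poisson(9, size=33),   # group sizes are assumed, not reported splits
    "mid": rng.poisson(10, size=33),
    "high": rng.poisson(10, size=33),
}

# Omnibus one-way ANOVA: does mean click count differ across proficiency levels?
f_stat, p_value = stats.f_oneway(*clicks.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

# Only probe pairwise differences when the omnibus test is significant.
if p_value < 0.05:
    values = np.concatenate(list(clicks.values()))
    groups = np.repeat(list(clicks.keys()), [len(v) for v in clicks.values()])
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```

With real click counts, a non-significant omnibus test would correspond to the similar gloss-use pattern reported above.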
Figure 1 shows the interaction plot of the estimated marginal mean scores across test 1 and test 2 at different proficiency levels. An inspection of the results and Figure 1 reveals that, averaged across the tests, those with high proficiency levels had the highest test scores, followed by those with a mid proficiency level. The group with the lowest test scores was the low proficiency group. But all three groups showed the same trend (parallel lines), with an increase in the immediate test followed by a decrease in the delayed test. Figure 2 shows the interaction plot of the estimated marginal mean scores across word receptive test 1 and word receptive test 2 at different proficiency levels. The results indicated that students with high proficiency levels had better test scores than the mid and low levels, averaged across the two tests.

Figure 2. Receptive vocabulary knowledge and proficiency levels

As for the variable of proficiency, it can be seen that high proficiency students benefitted the most from glosses in both types of vocabulary knowledge, that is, productive and receptive. Again, what can be observed is that all proficiency levels had a similar pattern of development in the type of vocabulary knowledge gained, meaning that high proficiency students made the most gains, followed by mid and low proficiency students. Parallel to the patterns in productive vocabulary knowledge, the high proficiency students benefitted the most from interaction with the glosses in receptive knowledge, followed by the mid and low proficiency students. Therefore, it can be assumed that students at different levels of proficiency used the glosses in a similar manner, but that this use affected their vocabulary knowledge in different ways.

Language Proficiency and Gloss Use

From the findings, it can be concluded that language proficiency affects gloss use. This is in line with the finding from Cheng and Good (2009). In their study, all proficiency levels gained from the use of glosses, but the benefits were not equal. Here, it was seen that high proficiency learners gained the most, followed by mid and low proficiency learners. This observation concurs with Watanabe's (1997) explanation that even with glosses which were comprehensible, learners with a small vocabulary size would not benefit. In his study's context, a small vocabulary size may be assumed to mean low proficiency. The fact that the higher proficiency learners performed better in the vocabulary tests could also be attributed to their competence in the L2, which allowed them to maximize the use of the glosses for their benefit. This is similar to what Jacobs et al. (1994) and Yun (2011) found in their studies. The results from Jacobs' research are consonant with what has been found in this study, where high proficiency students made the most gains in vocabulary knowledge compared to the mid and low proficiency students.

What can be learnt from the results is that it is important to consider learners' proficiency levels when designing glosses, as learners at varying language proficiency levels benefit from gloss use differently. The researchers discovered that high proficiency learners gained the most in productive and receptive knowledge. This finding fits the following observation by Groot (2000): "One might argue that high level learners have meta-cognitive strategies at their disposal which make their acquisition of new vocabulary much less dependent on externally imposed learning conditions than is the case for low level learners" (p. 21).
It is also documented (Cheng & Good, 2009; Jacobs, Dufon, & Hong, 1994) that high proficiency students with a bigger vocabulary consult fewer words than low proficiency students with a smaller vocabulary. The high proficiency students were able to take advantage of the glosses, which provided more context for vocabulary development. This is in line with Schmitt's (2010) claim that language proficiency determines to what degree learners can take advantage of any contextualization in language learning tasks or tests.

In the case of the low proficiency learners, the retention loss is consistent with the study by Abraham (2008), who also discovered that beginner L2 learners experienced a significant amount of vocabulary retention loss. The findings in this study are similar to the findings of Watanabe's (1997) study. In that study it was found that even with glossing, learners with a small vocabulary size would not be able to use the glosses provided effectively. It could be interpreted that the low proficiency students were the ones who had a small vocabulary size. Finally, it is emerging that proficiency has a role in relation to gloss use. It is clear that students' proficiency levels and their use of glosses impact vocabulary knowledge development in different ways.

Learning Vocabulary

In relation to the use of glosses in this study, both approaches reviewed earlier are applicable to vocabulary learning. For instance, in keeping with Nation's (2001) approach, the first element is met when students notice the gap in their linguistic knowledge, which prompts them to retrieve the meaning. In this study, this corresponds to clicking on and using the glosses. Finally, in the generative stage, the students generated the vocabulary items in the vocabulary tests.

As for the Involvement Load Hypothesis, the three major components of the theory were also present in this study. Firstly, the students read the text and met the unfamiliar words, which triggered a need for them to understand the words. Secondly, search took the form of clicking on the target words to access the glosses, and thirdly, in the evaluation stage the students evaluated the suitability of words for use in the vocabulary tests.

The elements of both approaches described above are evident in this study. However, it appears that Nation's vocabulary learning approach is more suited to explaining the vocabulary learning in this study, with the elements of noticing, retrieval and generation in tandem with noticing, gloss use and output in the form of the tests as manifested in this study. As for Laufer and Hulstijn's (2001) Involvement Load Hypothesis, it may not be applicable here because the involvement load, or the students' processing of the target vocabulary, was insufficient for effective learning. It can be assumed that interactions with the textual gloss and the subsequent vocabulary tests in the study did not provide enough involvement for the students. Furthermore, the study did not attempt to document the students' level of processing.

Vocabulary Knowledge

The literature on vocabulary learning shows evidence that receptive knowledge is more readily gained than productive knowledge. It is documented that the ideal vocabulary learning cline should move from a receptive stage to more productive use of vocabulary (Laufer & Paribakht, 1998; Laufer & Goldstein, 2004; Ortega, 2009). In simple terms, it means learners can recognise more words than they can actually use. However, this trend of development was not seen in this study.
From the data, it can be seen that students gained more productive vocabulary knowledge than receptive knowledge. This mismatch could have resulted from the nature of the tests used in the study. One probable reason is that the productive test, with its gap-filling and initial-letters format, could have given the students readier clues to the right answers. Another reason is that the sentence-level format may have provided the necessary context for the students to arrive at the right answer, so they were able to score higher in the productive test. The lenient scoring guide could also be a reason the students did better in the productive test: the scoring guide allowed marks to be awarded even when the inflections were not correct. In contrast, the multiple choice format of the test measuring receptive vocabulary knowledge, without any context, may have been difficult for the students. They were unable to provide the correct meaning of the target words. Hence the disparity in the students' performance in these two tests. The conclusion that can be made is that the design and format of the tests could play a significant role in their outcomes.

It has been shown that vocabulary learning is certainly a dynamic process. Vocabulary development is possible; however, as in most kinds of learning, it is common for there to be instances of attrition (Schmitt, 2010). In fact, according to Schmitt, vocabulary knowledge is more susceptible to being lost than other linguistic elements such as phonology or grammar. He reasons that this is because vocabulary is made up of individual units rather than a series of rules, as in grammar; hence the tendency for backsliding to occur is greater. This phenomenon is also present in this study: the vocabulary knowledge gained was not sustained after three weeks.

Implications for Pedagogy

Textual glossing as used in this study may enhance vocabulary learning in the short term. It would seem that learning vocabulary through computer textual glosses can be an interim solution in the teaching and learning of vocabulary. Glossing of unfamiliar or difficult words can act as an autonomous vocabulary learning episode for the students, to be later complemented with direct teaching or some other vocabulary learning activity.

For retention of knowledge, however, more robust and "pushed output" tasks have to be designed to sustain the initial gain made by textual-only gloss use. The tests measuring productive and receptive knowledge in this study did not allow the learners sufficient opportunities to produce output. Therefore, for more long-term gains, more tasks should be designed to create more instances of noticing and processing, which would enable favourable long-term gains in vocabulary knowledge.
Conclusion

It can be concluded that the proficiency level of students does influence gloss use and vocabulary knowledge. From this study, there appears to be not much difference in the way students with different proficiency levels use the gloss, with initial gains and loss of vocabulary knowledge in the long term. In addition, students with different proficiency levels benefited differently from gloss use. As expected, students with a high proficiency level made the most gains in vocabulary knowledge, followed by mid and low proficiency students, in both productive and receptive knowledge. Therefore, textual computer glosses can be used to initiate vocabulary learning, but for more sustained vocabulary knowledge, other vocabulary teaching/learning activities have to be in place. It was also discovered that students seem to gain more productive than receptive vocabulary knowledge. Although this can be attributed to gloss use, the researchers point to the kinds of tests used in this study to measure productive and receptive vocabulary knowledge. The researchers cautiously conclude that the type of tests used may influence the type of vocabulary knowledge observed.

Figure 1. Productive vocabulary knowledge and proficiency levels
Table 1. Mean and standard deviation for number of clicks in each proficiency level
Table 2. Results of the one-way ANOVA for number of clicks
Table 3. Mean scores and standard deviation for production tests
Table 4. Mean scores and standard deviation for word receptive tests in each proficiency level
2018-12-12T19:08:29.353Z
2014-10-23T00:00:00.000
{ "year": 2014, "sha1": "697c132d5be9177daa9bbe3ad68665a4da125368", "oa_license": "CCBY", "oa_url": "https://www.ccsenet.org/journal/index.php/elt/article/download/41509/22744", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "697c132d5be9177daa9bbe3ad68665a4da125368", "s2fieldsofstudy": [ "Computer Science", "Education", "Linguistics" ], "extfieldsofstudy": [ "Psychology" ] }
18827378
pes2o/s2orc
v3-fos-license
Tai Chi Exercise can Improve the Obstacle Negotiating Ability of People with Parkinson's Disease: A Preliminary Study

[Purpose] The purpose of this study was to examine the effects of Tai Chi (TC)-based exercise on dynamic postural control during obstacle negotiation by subjects with mild or moderate Parkinson's disease (PD). [Subjects] Twelve subjects (mean age, 65.3±6.1 years) diagnosed with idiopathic PD were enrolled in this study. [Methods] All the subjects were tested a week before and 12 weeks after the initiation of the TC exercise. In the test, they were instructed to negotiate an obstacle from the position of quiet stance at a normal speed. They were trained with TC exercise that emphasized multidirectional shifts in weight bearing from bilateral to unilateral support, challenging postural stability, three times per week for 12 weeks. Center of pressure (COP) trajectory variables before and after TC exercise were measured using two force plates. [Results] A comparison of the results between pre- and post-intervention showed a statistically significant improvement in anteroposterior and mediolateral displacement of the COP. [Conclusion] Twelve weeks of TC exercise may be an effective and safe form of stand-alone behavioral intervention for improving the dynamic postural stability of patients with PD.

INTRODUCTION

Parkinson's disease (PD) is a chronic, progressive, neurodegenerative movement disorder of unknown etiology. It usually results from the continuous degeneration of dopaminergic cells in the substantia nigra and depletion of dopamine in the corpus striatum 1). Parkinsonism is characterized by the gradual development of two or more of the major symptoms: bradykinesia (reduced speed of movement), resting tremor, rigidity, akinesia (poverty of movement), postural instability, and cognitive impairment, as well as age-related impairments such as decreased muscle strength of the lower extremities 2,3). In general, all of these signs and symptoms can variously affect the ambulation of an individual.

Impairments in postural control and locomotion lead not only to declined function and quality of life, but also to an increased incidence of falls and hip fractures, which in the USA are estimated to cost approximately US$ 192 million annually 4,5). The risk of falls for people with PD is twice that of the general healthy population, and approximately 70% of people with PD fall during the course of the disease, often resulting in serious injuries and hospitalization 6,7). Approximately 27% of people with PD had new hip fractures 10 years after the first diagnosis 8), and approximately 3% of hospitalized people with PD become wheelchair dependent 9).

Impaired mobility, such as postural instability, which is a major cause of disability in patients with PD, is less responsive to pharmacological and surgical therapies, even though some motor deficits, such as tremor, may be alleviated by medication 10,11). Thus, patients diagnosed with PD often turn to alternative approaches for alleviation of their symptoms, making exercise and physical therapy an attractive option.

Exercise is often recommended for persons with PD, because regular physical activity has been shown to slow their decline in mobility and to prolong functional independence 12,13). The strength of the lower extremity muscles of PD patients and regular exercise are associated with their capacity to perform functional tasks, such as walking and sit-to-stand 14-17).
Several intervention strategies, including general exercise, high-intensity resistance training, and sensory cue training, improve the balance, strength, and freezing of gait of people with PD 18-21). These previous studies examining the effectiveness of exercise training for patients with PD required a variety of equipment, and safety monitoring was needed. Few studies have assessed alternative forms of exercise programs for PD patients that could delay their mobility disability and improve their motor function levels.

Only recently has growing attention been given by western science to Tai Chi (TC) exercise as an alternative intervention for the enhancement of human well-being that improves balance impairment and disability related to aging and age-related diseases such as PD. TC exercise is a traditional Chinese martial art and a balance-based group of exercises with similarities to aerobic exercise. It involves a sequence of fluid, continuous, rhythmical, and graceful movements linked together in a continuous manner that flow smoothly from one movement to another. In the performance of TC, dynamic body weight shifting, rotations, and standing on a single leg, which require joint control, muscle coordination, and good balance performance, are repeatedly practiced 22). All of these movements may be an effective means of addressing impairment in postural control during gait. Additionally, TC can be safely and inexpensively practiced at any time and in any place, because it does not require much space or any equipment 23).

TC is beneficial for improving flexibility, lower extremity strength, balance, physical function, and postural stability, and it reduces the risk of falls for elderly and frail individuals 24-29).

Research on the effects of TC has provided promising results for patients with PD who have gait dysfunction and postural instability. Results from TC studies 27,29-31) suggest that the 50-ft speed walk, Timed Up-and-Go, Functional Reach Test, Unified Parkinson's Disease Rating Scale (UPDRS) motor subscale 3, Berg Balance Scale (BBS), Tandem Stance, and Six-Minute Walk, as well as axial symptoms of PD such as postural instability, were improved by TC.

Few studies have investigated the potential mechanisms by which TC may improve movement dysfunction and reduce falls among PD sufferers. Tripping during obstacle negotiation is one of the most commonly reported causes of falls among the elderly 32,33). Therefore, it is plausible that people with PD who have gait dysfunction and postural instability are also at a high risk of tripping over obstacles and falling. Given the known risk of falls for the elderly during obstacle negotiation and the fact that people with PD have difficulty performing simultaneous motor or cognitive tasks, crossing obstacles, or walking in complex environmental settings 34,35), it is surprising that the possible benefits of TC for those with PD during obstacle negotiation have received little attention.
Accordingly, the primary aim of this study was to examine the effects of TC-based exercise on dynamic postural control when people with mild or moderate PD negotiate an obstacle. Since the movements of TC contain many training components required for maintaining dynamic balance, including shifting the body weight from a unilateral to a bilateral position, symmetric foot stepping, modifications in stance width, single leg support while standing upright, and controlled movements near the limits of stability, we hypothesized that TC exercise would be effective at improving the dynamic postural control of persons with mild and moderate PD while crossing an obstacle.

SUBJECTS AND METHODS

Twelve subjects (2 men, 10 women; mean age, 65.3±6.1 years; age range, 55-75) who had been diagnosed with idiopathic PD by a neurologist were enrolled in this study. Their mean Hoehn & Yahr (H&Y) disability 36) score was 2.3±0.78 (range, 1-3; higher scores indicate more severe disease), and their mean duration of PD was 56±13.3 months (range, 45-80 months). None of the participants had significant cognitive impairment (their Mini Mental State Examination (MMSE) 37) scores exceeded 25) and all the participants could walk independently at least 5 m without an assistive device. All the participants were being treated with anti-Parkinson's medication, and fully responded to their PD medications. They were examined at their peak "on" effect, which was approximately 1-1.5 hours after taking their anti-Parkinson's medications. None of the participants showed freezing of gait over the course of the study.

Subjects were excluded if they had any serious medical problem; any history or evidence of neurological deficit other than PD which could have interfered with locomotion, such as previous stroke or neuromuscular disease; severe dementia, determined by an MMSE score < 24 (indicating some degree of cognitive impairment); inability to walk independently; inability to complete the TC exercise program due to debilitating conditions or vision impairment; previous training in any form of TC exercise or current participation in any instructor-led structured regular exercise program; or inability to understand the instructions. All subjects read and signed the informed consent forms approved by the University Institutional Review Board prior to the start of data collection. Baseline participant demographics are summarized in Table 1.
An experienced TC instructor taught the first eight movements of the 24 simplified short forms of Yang style TC 38), a standard and popular TC routine, to the people with PD over the course of the TC training. The TC interventions, which consisted of 10 min of warm-up exercises, 40 min of the eight movements of TC, and 10 min of cool-down exercises, were done three times a week for 12 weeks. The TC protocol was specifically developed to emphasize components of movement typically limited in the elderly and in more serious cases of PD. The warm-up exercise included slow and gentle flexibility exercises of the shoulders, neck, arms, knees, hips, and back, followed by a trunk stretching exercise that coordinated weight shift with trunk rotation and active arm swinging, as well as isolated static TC movements. The TC exercise consisted of a series of slow, gentle, relaxed, continuous, rhythmic, coordinated, and flowing movements of different body parts, which emphasized multidirectional weight shifting, trunk rotations, awareness of body alignment, multi-segmental (arms, legs, and trunk) coordination, narrowing of the lower extremity stance, and multidirectional shifts in weight bearing from bilateral to unilateral support, challenging postural stability. Synchronized breathing aligned with the TC movements was incorporated and integrated into the movement routine. The mastery of one to two forms through multiple repetitions was emphasized each week for the first 8 weeks. The complete form was practiced for the last 4 weeks, focusing on repetitions to improve balance and gait. Cool-down exercises included smooth and progressive range of motion exercises of the ankles, knees, hips, back, and neck, and a meditation that incorporated a diaphragmatic breathing exercise. The instructor explained and demonstrated how each form should be performed, and the participants followed the motions.

Evaluation of all participants was performed a week before and 12 weeks after the initiation of the exercise regimen. Two experienced physical therapists blinded to the TC intervention collected the data. During each assessment, participants were first evaluated for disease severity using the UPDRS motor subscale 3 39) and the MMSE, and then completed an obstacle crossing task.

Two force platforms (AMTI, Newton, MA, USA) mounted flush with a walkway surface (5 m in length and 1.5 m in width) measured the ground reaction forces (GRFs) while the participants crossed the obstacle. Amplified force plate signals were sampled on-line at a rate of 1,000 Hz for 5 s (AMTI). GRFs collected from the two force platforms were processed, and the center of pressure (COP) data were analyzed using BioAnalysis v2.0 software (AMTI). The test conditions included the use of a wooden obstacle (10 cm in height, 10 cm in depth and 140 cm in width) for obstacle clearance.
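As a rough illustration of the processing chain described above, the sketch below derives a COP trajectory from force-plate output, reduces it to the A-P and M-L displacement used as the outcome measure, and runs the pre/post paired comparison. The sign conventions, the contact threshold, and every number in it are assumptions for illustration, not the BioAnalysis v2.0 pipeline or the study's data.

```python
# Minimal sketch of the COP analysis: compute the COP trajectory from force-plate
# signals while the foot loads the plate, reduce it to A-P and M-L displacement
# (max minus min), and compare pre- vs post-intervention values with a paired
# t-test. All numbers and sign conventions are illustrative assumptions.
import numpy as np
from scipy import stats

def cop_displacement(fz, mx, my, contact_threshold=20.0):
    """Return (A-P, M-L) COP range in metres for samples where Fz > threshold (N)."""
    contact = fz > contact_threshold
    cop_ap = -my[contact] / fz[contact]   # anteroposterior COP (plate origin at centre)
    cop_ml = mx[contact] / fz[contact]    # mediolateral COP
    return np.ptp(cop_ap), np.ptp(cop_ml)

# Synthetic 5 s recording at 1,000 Hz, standing in for one force plate's output.
t = np.linspace(0.0, 5.0, 5000)
fz = 600.0 + 20.0 * np.sin(2 * np.pi * 0.5 * t)          # vertical force (N)
my = -fz * (0.05 + 0.02 * np.sin(2 * np.pi * 0.8 * t))   # moment about the M-L axis (N m)
mx = fz * (0.01 * np.sin(2 * np.pi * 1.1 * t))           # moment about the A-P axis (N m)
print(cop_displacement(fz, mx, my))                      # (A-P range, M-L range) in m

# Hypothetical per-subject A-P displacements (cm) before and after the programme,
# compared with a paired t-test at alpha = 0.05.
pre = np.array([10.2, 9.8, 11.0, 10.5, 9.5, 10.8, 10.1, 9.9, 10.6, 10.3, 9.7, 10.4])
post = pre * 1.24
t_stat, p_value = stats.ttest_rel(pre, post)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Replacing the synthetic arrays with the recorded force-plate signals for each foot would yield the per-limb displacements of the kind compared in Table 2.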
For each trial, participants stood quietly with their arms hanging at their sides, in a self-selected foot position with each foot on a force plate, in a relaxed posture. The two force platforms were placed adjacent to each other along their narrow edges to measure the GRFs. The initial positioning of each participant's feet on the force platform was traced, and the tracings were used before starting a new trial of obstacle crossing to reposition the feet on the force platform and increase between-trial consistency. Participants were then instructed to cross the obstacle at a comfortable, normal speed with the right limb after receiving the auditory cue of a 'go' signal, and to continue walking a minimum of three steps. For each participant, two practice trials to familiarize themselves with the experimental procedure were followed by approximately five successful experimental trials. All participants were required to wear flat-soled shoes normally used for everyday walking or sports activities.

COP trajectory variables before and after TC exercise were compared using the paired t-test. Statistical significance was accepted for values of p < 0.05. Force platform variables selected for analysis included the anteroposterior (A-P) and mediolateral (M-L) displacement of the COP. A-P (or M-L) displacement of the COP was defined as the total distance (or difference) between the minimum and maximum A-P (or M-L) COP location for the length of time when either the left or right foot was in contact with the force platform. SPSS 14.0 KO (SPSS, Chicago, IL, USA) was used for all statistical analyses.

RESULTS

All participants followed the protocol and completed the initial and post-assessments. No outliers were found in any of the COP measurements, and data from all participants were used in the statistical analysis. After the TC intervention, all participants showed a clinically meaningful improvement in the measured COP displacements (Table 2). In the comparison of pre- and post-intervention results, the PD participants showed statistically significant improvements in all the measured COP displacements (p<0.05). Post-intervention A-P and M-L displacements of the COP were 124% and 135% of their pre-intervention values, respectively, for both the swing and stance limbs. The mean values of the PD participants' outcome measures after the intervention are summarized in Table 2.

DISCUSSION

The development of cost-effective, evidence-based interventions to decrease falls and fall-related injuries for persons with PD is needed. TC exercise is often recommended for managing the symptoms of PD. However, there have been few studies examining its effects on dynamic postural control during obstacle negotiation. The current study investigated the effectiveness of TC exercise on the ability of people with PD to step over an obstacle. A 12-week, 3-times-weekly TC intervention improved the COP measures, which represent muscle responses to maintain dynamic stability while stepping over an obstacle, for both limbs. The results confirmed our hypothesis that TC exercise would be effective at improving the dynamic postural control of persons with mild to moderate PD while crossing an obstacle.
The decreased COP displacement in both directions (A-P and M-L) shown by PD patients may be related to dysfunction in balance, akinesia, hypokinesia, or tremor/movement discontinuities associated with PD 40). In our present study, TC exercise led to improvements in A-P and M-L displacement of COP while stepping over an obstacle. Mean COP displacement in the A-P and M-L directions increased by 24% and 35%, respectively, after TC exercise, as compared to pre-intervention. These results are consistent with the findings of a previous study 41), which demonstrated increased COP displacement in the A-P and M-L directions during gait initiation after PD patients had performed TC exercises. The improvements in COP measures were also similar to those reported previously for older adults without PD who practiced TC exercise 25,42). In these earlier studies, the elderly subjects showed improvements in COP trajectory variables during either gait initiation or obstacle crossing. In addition, Hass et al. 43) reported that individuals with PD who received 10 weeks of resistance training significantly increased the magnitude of COP displacement in the posterior direction when initiating gait. Furthermore, Rogers et al. 44) demonstrated that individuals with PD significantly increased the magnitude of COP displacement in the posterior and lateral directions when a startling acoustic stimulus was delivered prior to initiation of walking. In contrast, no significant improvement in selected gait initiation parameters was reported following 16 weeks of 60 min TC exercise for people with mild to moderate PD 45). This lack of improvement in outcome measures may have been due, in part, to heterogeneity of a wide spectrum of PD participants and the wide variability of TC exercise regimens 45).

Control of the center of mass by manipulation of the movement of the COP in the A-P and M-L directions while stepping over an obstacle is an important consideration for dynamic postural stability 46). The posterior movement of the COP in the initial period of stepping is needed to generate forward momentum to initiate walking 47). Thus, greater posterior COP movement increases the moment arm by which the GRF can move the center of mass forward 25). Previous studies 25,48,49) have shown that a reduction in the magnitude of the backward COP displacement occurs with advancing age and disability when initiating gait. Deterioration in centrally mediated anticipatory postural adjustments is believed to be responsible for the reduction in the backward COP displacement 47).

In this study, 12 subjects with PD were able to improve A-P displacement of the COP to an average of 13.04 cm, which is close to the value previously reported for elderly individuals with PD who practiced TC 41). This increase in the magnitude of the displacement of the A-P COP observed for the PD subjects may be attributable to restoration of centrally mediated anticipatory postural adjustments during obstacle avoidance. During initiation of walking there is an inhibition of the tonic soleus, which is active during quiet stance, followed by the onset of tibialis anterior activity in both the swing and stance limbs 50,51). This combination is responsible for the backward movement of the COP 50-52).
PD patients are known to generate insufficient dorsiflexion torque due to inappropriate and/or inefficient tibialis anterior activation during the initiation of gait 53,54). PD patients are also unable to turn off previously activated muscles, such as the soleus and gastrocnemius, due to an inability to gate or scale the postural and voluntary components of the motor task 55). These deficits contribute to a limitation in the backward displacement of the COP.

While stepping over an obstacle, the COP created by the swing limb hip abductors moves laterally toward the swing limb and generates stance-side momentum, that is, momentum toward the stance limb 56). Thus, the coordinated action of the ankle and hip muscles tends to propel the center of mass forward and toward the intended stance limb. Previous studies 48,57,58) have reported that the COP displacement towards the swing limb shown by individuals with PD during gait initiation is significantly smaller than that of healthy age-matched older adults. This reduced ability to modulate M-L COP displacement shown by people with PD during gait initiation might be due to alterations in proximal musculature strength 59), particularly in the muscles of the hip 16).

The post-intervention average displacement of the M-L COP for the PD subjects was 4.15 cm, a 34% increase compared to pre-intervention. This finding indicates that TC exercise improves M-L COP displacement. The improvement in the M-L COP displacement may be attributable to the coordinated action of the hip abductor and adductor muscles after TC exercise 60). Older adults with disability and those transitioning to frailty have reduced M-L COP displacements, whereas people with PD who have greater M-L and A-P COP displacements, and people who have a greater weight shift between the two limbs, have a longer step during gait initiation 48). Furthermore, children with autistic disorder, who exhibit less age-related development and postural instability, also have reduced M-L COP displacement during gait initiation compared to age-matched normally developing children 61). Given that many studies have consistently reported a reduced displacement of the M-L COP in individuals with postural instability when initiating gait, the significantly increased displacement of the M-L COP seen in the present study is indicative of the increased dynamic postural stability of the PD participants who practiced TC exercise.

This study had a number of limitations. The sample size was small and no control group was included. It was difficult to determine the exact contribution of the treatment to the measured changes without a control group; therefore, the present improvements in postural control cannot be conclusively attributed to the TC exercise. In addition, the TC exercise was performed for a 12-week period, which is relatively short in terms of providing the full benefit of the exercise to people with PD. Moreover, no follow-up data were collected. It was not possible to determine whether the improvements were temporary or permanent. Furthermore, the change in the exact timing and spatial events of gait parameters after the intervention was not analyzed, because only two force plates were used in the current study. Synchronized analysis of kinematic and kinetic data would provide better insight into the effects of TC exercise on PD patients than the separate analysis of kinetic data. Further studies will be needed to assess the dynamic stability of people with PD using this technique.
In conclusion, TC exercise increased the magnitude of the COP displacement in the A-P and M-L directions, thereby improving the mechanism by which momentum is generated in the A-P and M-L directions in the initiation of gait and the maintenance of balance and lateral stability. The present findings support the view that short-term TC exercise may be an effective and safe form of stand-alone behavioral intervention that can be easily conducted outdoors or indoors, on an individual or group basis, to improve the postural stability of some individuals with mild to moderately severe PD. However, a more definitive conclusion regarding the efficacy of TC exercise for people with PD cannot be made until more evidence is available. A well designed randomized controlled trial using a larger patient population and follow-up is required to enhance our present findings regarding the effectiveness of TC exercise for people with PD.

Table 1. Demographic and clinical characteristics of the study participants with PD at baseline
Table 2. COP measures of both feet at baseline and 12 weeks
2018-04-03T03:56:34.894Z
2014-07-01T00:00:00.000
{ "year": 2014, "sha1": "c1d30f0c4ac39d65ef3714e81420e31dce2b770b", "oa_license": "CCBYNCND", "oa_url": "https://www.jstage.jst.go.jp/article/jpts/26/7/26_jpts-2013-593/_pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b893c98d945dcabf45434fe947caec9c6da5c45d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119005331
pes2o/s2orc
v3-fos-license
Exploiting Locally Imposed Anisotropies in (Ga,Mn)As: a Non-volatile Memory Device

Progress in (Ga,Mn)As lithography has recently allowed us to realize structures where unique magnetic anisotropy properties can be imposed locally in various regions of a given device. We make use of this technology to fabricate a device in which we study transport through a constriction separating two regions whose magnetization directions differ by 90 degrees. We find that the resistance of the constriction depends on the flow of the magnetic field lines in the constriction region and demonstrate that such a structure constitutes a non-volatile memory device.

(Ga,Mn)As can be regarded as a prototypical material for investigating potential device applications of ferromagnetic semiconductors. The spin-orbit mediated coupling of magnetic and semiconductor properties in this material gives rise to many novel transport-related phenomena which can be harnessed for device applications. Previously reported device concepts include strong anisotropic magnetoresistance (AMR), the in-plane Hall effect [1], tunneling anisotropic magnetoresistance (TAMR) [2,3,4] and Coulomb blockade AMR [5]. These previous demonstrations have been based on structures which have the same magnetic properties, inherited from the unstructured (Ga,Mn)As layer, throughout the device. Improvement in lithographic capabilities [6] has recently allowed, for the first time, the production of structures where distinct anisotropies are imposed locally on various functional elements of the same device by overwriting the parent layer anisotropy. This greatly enhances the scope of possible device paradigms open to investigation, as it allows for devices where the functional element involves transport between regions with different magnetic anisotropy properties. In this letter we present the first such device. It comprises two (Ga,Mn)As nanobars, oriented perpendicular to each other, with each nanobar exhibiting strong uniaxial magnetic anisotropy. These two nanobars are electrically connected through a constriction whose resistance is determined by the relative magnetization states of the nanobars. We show that the anisotropic magnetoresistance effect yields different constriction resistances depending on the relative orientation of the two nanobar magnetization vectors. The structure can thus be viewed as the basis of a ferromagnetic semiconductor memory device that operates in the non-volatile regime.

For the device, we use a 20 nm thick (Ga,Mn)As layer grown on a GaAs substrate [7] by low-temperature molecular beam epitaxy. Using an electron-beam lithography (EBL) defined Ti mask and chemically assisted ion beam etching (CAIBE), this layer is patterned into several pairs of coupled nanobars [6], as shown in the SEM micrograph in Fig. 1. Ti/Au contacts are defined in another EBL step through metal evaporation and lift-off, yielding resistance-area products of ∼1 µΩ cm². The bars are circa 200 nm wide and 1 µm long and oriented along the [100] and [010] crystal directions, respectively. They form a 90° angle and touch each other in one corner, where a constriction with a width of some tens of nm is formed. Transport measurements are carried out at 4 K in a magnetocryostat fitted with a vector field magnet that allows the application of a magnetic field of up to 300 mT in any direction. The sample state is first "written" by an in-plane magnetic field of 300 mT along a writing angle ϕ (as defined in Fig. 1).
The field is then slowly swept back to zero while ensuring that the magnetic field vector never deviates from the ϕ-direction. We measure the four-terminal resistance of the constriction in the resulting remanent state by applying a voltage V_b to the current leads (I+ and I−), and recording both the voltage drop between contacts V+ and V− and the current that is flowing from I+ to I− (Fig. 1).

The polar plot of Fig. 2 shows the constriction resistance of the remanent magnetization state as a function of the writing angle ϕ. The resistance, which is dominated by the constriction, has a higher value upon writing the sample in the (extended) first quadrant (−3° ≤ ϕ < 98°) and a lower value upon writing in the (shrunken) second quadrant (98° < ϕ < 167°). As a whole the plot is point-symmetric with respect to the origin.

To explain these results, we first examine the behavior of the individual nanobars. They are patterned on the sub-micron scale to make use of anisotropic strain relaxation, which in turn causes a uniaxial magnetic anisotropy that is strong enough to overwrite the intrinsic anisotropy of the (Ga,Mn)As layer [6]. We therefore expect each nanobar to show a uniaxial magnetic anisotropy with a magnetic easy axis along the respective long axis of each of the nanobars. That this is true also for coupled nanobars is confirmed in Fig. 3, which shows two-terminal magnetoresistance scans, performed separately on the 0°-nanobar (Fig. 3a) and the 90°-nanobar (Fig. 3b) pictured in Fig. 1. The plots show field sweeps from −300 to +300 mT for various in-plane field directions ϕ between 0° and 90°. Metallic (Ga,Mn)As exhibits a higher resistance value when the magnetization M is perpendicular to the current J than when M is parallel to J (this is the AMR effect [8,9]). When the field H is swept along 0° (thick line in Fig. 3a), the resistance of the 0°-nanobar remains in the low state [10], indicating that M is parallel to J throughout the entire magnetic field range. All the other MR scans start at a higher resistance value and merge into the low resistance curve at zero field, indicating that M, which is almost parallel to H at high fields, relaxes towards the 0° uniaxial easy axis as the field is decreased. Analogously, the uniaxial easy axis of the 90°-nanobar is along 90° (Fig. 3b). Consequently, the 90° MR scan is a flat low resistance curve. During the 0° scan (thick line) the magnetization relaxes from parallel to the field (high resistance) towards the easy axis along the bar (low resistance) at zero field.

Given that both bars show a uniaxial magnetic easy axis along their respective long axis, the structure has four possible magnetic states at zero magnetic field, as sketched in Fig. 2. In sectors (i) and (iii) the nanobars are magnetized "in series", i.e. the magnetization vectors meet in a configuration which we will call head-to-tail. In (ii) and (iv), on the other hand, both magnetization vectors point away from (tail-to-tail) or towards (head-to-head) the constriction. When the sample is magnetized along a given direction at 300 mT, the magnetization of both bars is almost parallel to the magnetic field. As the field is then lowered to zero, the magnetization of each nanobar relaxes to the respective nanobar easy axis, selecting the direction which is closest to the writing angle ϕ.
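The relax-to-the-nearest-easy-axis rule just described, together with the magnetostatic preference for head-to-tail states discussed in the next paragraph, can be pictured with a small toy model. This is a schematic sketch only: the resistance levels, the 3° boundary shift, and the angle grid are placeholder assumptions, not fitted or measured values.

```python
# Toy model of the write-read experiment: after writing along phi at 300 mT and
# sweeping the field back to zero, each nanobar's magnetization relaxes to the
# end of its easy axis (0 deg for the [100] bar, 90 deg for the [010] bar) that
# lies closest to phi. Head-to-tail configurations give one remanent resistance
# level, head-to-head / tail-to-tail states the other. The resistance values and
# the 3 deg boundary shift are placeholder assumptions, not measured numbers.

R_HEAD_TO_TAIL = 1.0   # arbitrary units; the HIGH state in the depleted device
R_OTHER = 0.2          # head-to-head / tail-to-tail state
BIAS_DEG = 3.0         # magnetostatic preference for head-to-tail (see next paragraph)

def relax(write_angle_deg: float, easy_axis_deg: float) -> float:
    """Remanent magnetization angle: easy_axis_deg or easy_axis_deg + 180 deg."""
    diff = (write_angle_deg - easy_axis_deg) % 360.0
    return easy_axis_deg if (diff < 90.0 or diff > 270.0) else easy_axis_deg + 180.0

def remanent_resistance(write_angle_deg: float) -> float:
    # Nudging each bar's decision slightly widens the head-to-tail quadrants,
    # mimicking the stray-field interaction between the two bars.
    m0 = relax(write_angle_deg - BIAS_DEG, 0.0)    # bar along [100]
    m90 = relax(write_angle_deg + BIAS_DEG, 90.0)  # bar along [010]
    head_to_tail = (m0, m90) in {(0.0, 90.0), (180.0, 270.0)}
    return R_HEAD_TO_TAIL if head_to_tail else R_OTHER

# Remanent-state "polar plot": resistance versus writing angle in 5 deg steps.
polar_scan = {phi: remanent_resistance(float(phi)) for phi in range(0, 360, 5)}
```

With these placeholder values the scan reproduces a point-symmetric two-level pattern of the kind shown in Fig. 2, with the head-to-tail quadrants widened by a few degrees.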
For a nanobar along 0°, this means, assuming no interaction between the bars, that M relaxes to 0° upon writing the bar along any angle between +90° and −90°; otherwise M relaxes to 180°. If the bars in our device were non-interacting, one would thus expect the magnetization configuration in each quadrant to be as depicted in the sketches of Fig. 2, with each quadrant accounting for exactly one fourth of the total plot. The deviation from this behavior in the actual device is due to magnetostatic interactions between the two bars, which cause a preference for head-to-tail configurations. A simple magnetostatic calculation shows that the repulsive field felt by the tip of one bar due to being near the wrong pole of the other bar is of the order of 2 mT, which is ∼5% of the uniaxial anisotropy field. The energy density of this field is thus strong enough to overcome a small part of the energy barrier against rotation towards the opposite magnetization direction, which corresponds to an angle of ∼3°. The head-to-tail quadrants thus increase commensurably. Magnetic field line patterns for the four magnetization configurations were calculated (sketches in Fig. 2i-iv) using a simple bar magnet model. The field lines are close to parallel to the current in the head-to-tail configurations (Fig. 2i and iii). In the tail-to-tail and the head-to-head configurations (Fig. 2ii and iv) the field lines are approximately perpendicular to the current.

Having understood the magnetic configuration of the device in the write-read experiment of Fig. 2, we now turn to an explanation of why these configurations should lead to two very distinct resistance states. The above magnetostatic arguments and internal fields, in connection with the AMR coefficient for metallic (Ga,Mn)As, can explain a few percent resistance difference [1,9] between the head-to-tail and the head-to-head configuration, much smaller and of a different sign than the effect in Fig. 2. We have actually observed such a small AMR-related effect in a similar structure, which has a wider constriction (100 times lower constriction resistance). Fig. 4a shows the same kind of data for this low-resistance sample as Fig. 2 shows for the high-resistance sample.

FIG. 4: (a) Results of a write-read experiment as in Fig. 2, for a device with a wider constriction, which exhibits metallic transport behavior, and (b) constriction resistance in a rotating 300 mT external magnetic field.

It is immediately obvious from Fig. 4a that this sample shows the same remanent magnetization configurations as the device in Fig. 2. However, the effect is much smaller and of the opposite sign: where Fig. 2 exhibits a high resistance state, Fig. 4a shows a low state, and vice versa. We ascribe the difference in behavior between Figs. 2a and 4a to the occurrence of depletion in the constriction of the sample of Fig. 2a, which drives the transport (in the critical constriction region) into the hopping regime [11]. At the same time, we suggest that in the hopping regime the AMR coefficient changes sign, leading to the observed changes in magnetoresistance. Important evidence for this claim comes from the angle-dependent magnetoresistance behavior of the samples at a field of 300 mT, strong enough to force the magnetization close to parallel to the external field. This data is given in Fig. 3c for the high-resistance and in Fig. 4b for the low-resistance sample. The low-resistance device exhibits typical AMR behavior as expected for metallic (Ga,Mn)As: Fig. 4b shows
The low-resistance device exhibits typical AMR behavior as expected for metallic (Ga,Mn)As: Fig. 4b shows that the resistance is lowest when M is forced parallel to the current through the constriction (ϕ ∼ 45°) and ca. 3% higher for M⊥J. In contrast, the high-resistance constriction of the device in Fig. 2 shows a huge and inverted AMR signal, as can be seen in Fig. 3c. The resistance at ϕ ∼ 45°, where M∥J, is more than 5 times larger than for M⊥J. This is actually not the first observation of an inverted AMR signal; the same effect has recently been reported in thin (Ga,Mn)As devices [5,12] in which the transport is in the hopping regime. This situation is similar to our high-resistance device, where from the resistance one can already infer that the constriction acts as a tunnel barrier. Actual evidence for tunneling transport comes from the current-voltage characteristics of the high-resistance constriction, shown in Fig. 3d, which were taken at 300 mT at different field directions ϕ. The I-V curves are clearly non-linear, with the non-linearity depending on the magnetization direction. Fields aligning M along ∼120° cause the strongest, and along ∼50° the weakest, non-linearity of the I-V curve. The strong dependence of the I-V characteristic and the resistance on the magnetization direction is characteristic of transport going through a metal-insulator transition (MIT) from the diffusive into the hopping regime depending on the angle of the magnetization, similar to what we have previously observed in a TAMR device [4]. Such a MIT occurs in partly depleted samples due to the change in wave-function geometry with the magnetization direction. The localized hole wave-function has an oblate shape with the smaller axis pointing in the magnetization direction [13]. Consider the overlap of such oblate shapes, statistically distributed with respect to the direction of the current, in connection with the Thouless localization criterion. The wave-function overlap is much smaller when the sample is magnetized parallel to the current than for M⊥J, suppressing hopping transport through the depleted constriction region. This implies a magnetoresistance behavior that is exactly the inverse of that expected for the metallic regime and explains the increased resistance value in both the high-field measurements (Fig. 3c along ∼45°) and the write-read experiment (Fig. 2, first quadrant). We thus believe that our observations can be fully explained by the internal magnetic fields and the AMR coefficient as applicable to the transport regime in the constriction.
A further candidate to explain our observations could be the presence of a domain wall (DW) between differently magnetized regions of the device in the head-to-head and tail-to-tail configurations, which would be absent in the head-to-tail configurations. However, since the constriction is long and the DW would not be strongly geometrically confined, one anticipates only a very low DW resistance in these samples [11,14]. This is confirmed by a comparison of Fig. 2 with Fig. 3c: the resistance values of both remanent states in Fig. 2 lie between the extreme resistance values of the homogeneously magnetized sample. The DW contribution [11] to the constriction resistance can thus only be a minor effect on the resistance of the remanent state in the present sample and does not explain the different resistance levels in Fig. 2 [16]. In the remanent state the resistance of the head-to-head configuration, including a possible DW contribution, is lower than the resistance of the head-to-tail configuration. We can thus exclude the DW as the origin of the two resistance states observed in Fig. 2.
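The wave-function-overlap argument above can also be illustrated with a deliberately crude toy estimate. This is not the authors' calculation: the decay lengths, the hop distance, and the interpolation formula below are hypothetical choices made only to show the trend (smaller overlap, and hence suppressed hopping, for M parallel to J).

```python
import numpy as np

# Toy model: a localized hole state with an anisotropic exponential envelope
# whose decay length is shorter along M than perpendicular to it.
# All numbers are hypothetical and only meant to show the trend.
xi_short = 1.0   # decay length along M (arbitrary units)
xi_long = 2.0    # decay length perpendicular to M
d = 10.0         # hop distance between localized states, taken along J

def hopping_weight(angle_M_to_J_deg):
    """Crude overlap ~ exp(-d / xi_eff), where xi_eff smoothly interpolates
    between xi_short (M along J) and xi_long (M perpendicular to J)."""
    th = np.radians(angle_M_to_J_deg)
    xi_eff = np.sqrt((xi_short * np.cos(th)) ** 2 + (xi_long * np.sin(th)) ** 2)
    return np.exp(-d / xi_eff)

print("M parallel to J :", hopping_weight(0))    # smallest overlap -> high R
print("M perpendicular :", hopping_weight(90))   # largest overlap  -> low R
```

With these (invented) numbers the M∥J overlap comes out roughly two orders of magnitude below the M⊥J one, which is the sign of anisotropy needed to invert the usual metallic AMR.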
In conclusion, we have shown that locally imposed magnetic anisotropies in different regions of one ferromagnetic semiconductor device allow for novel device designs. We consider the perpendicularly magnetized nanobars discussed in this paper a first demonstration of the type of devices that can be fabricated using this approach; it is certainly not difficult to conceive of further concepts in this direction. In addition, the work presented here has highlighted the difference in AMR behavior between metallic and hopping transport in (Ga,Mn)As, which should likewise prove useful in device design.
The platelet count and its implications in sickle cell disease patients admitted for intensive care
Background and Aims: In sickle cell disease (SCD) patients admitted for intensive care, evaluation of platelet counts in different types of sickle cell complications and its prognostic relevance are not well-studied. Illuminating these aspects were the objectives of this study. Materials and Methods: A chart review of 136 adult SCD patients consecutively admitted to our Intensive Care Unit (ICU) was done. The prognosis on day 1 was assessed by Acute Physiology and Chronic Health Evaluation (APACHE II) and multiple organ dysfunction scores (MODS). Receiver operating characteristic (ROC) curves evaluated the ability of platelet counts, MODS, and APACHE II scores to predict survival. Results: The most common types of crises were severe pain (n = 53), acute chest syndrome (n = 40), and infection (n = 18); 17 patients were nonsurvivors. Platelet counts varied widely (range, 19–838 × 10⁹/L) with thrombocytopenia (n = 30) and thrombocytosis (n = 11). Counts correlated directly with leukocytes and reticulocytes; inversely with lactate dehydrogenase, APACHE, and MODS scores. Areas under the ROC curve for platelets, MODS, and APACHE scores to predict survival were 0.73, 0.85, and 0.93, respectively. Conclusions: In severe sickle cell crisis, thrombocytopenia is more common than thrombocytosis. In the ICU, day 1 platelet counts correlate inversely with prognostic scores and are significantly reduced in multi-organ failure and nonsurvivors. A platelet count above 175 × 10⁹/L predicts patient survival with high specificity and positive predictive value but lacks sensitivity.
Patients were included if (1) the diagnosis of SCD was confirmed on the basis of hemoglobin (Hb) analysis by high-performance liquid chromatography and (2) a complete blood count (CBC) was available on day 1 of admission. The study was approved by the Research and Ethics Committees of the participating institutions. All relevant clinical and laboratory data of patients during their period of ICU stay were documented from case files and the hospital information system. This included data related to vitals, Glasgow coma score (GCS), blood gas results, electrolytes, hepatic and renal function tests, CBC, coagulation tests, microbiologic cultures and serology, history of chronic disease, and the final diagnosis at the time of ICU discharge or death. Platelet counts at admission (day 1) and the nadir platelet count during ICU stay were recorded.
Definitions
Diagnostic criteria were employed as follows: thrombocytopenia was defined as a platelet count of <100 × 10⁹/L; thrombocytosis as a count ≥450 × 10⁹/L. Acute chest syndrome (ACS) was defined as the appearance of a new pulmonary infiltrate along with chest pain, fever, tachypnea, wheezing, or cough. [18] Aplastic crisis was diagnosed on the basis of anemia associated with reticulocytopenia. [19] Hyperhemolysis was defined as a sudden fall in the Hb level with an elevated reticulocyte count above the baseline level. [19] Relevant clinical and laboratory data were used in assessing crisis severity on admission (day 1). This was estimated by two methods: the Acute Physiology and Chronic Health Evaluation (APACHE II) prognostic score and the Marshall Multiple Organ Dysfunction Score (MODS).
[20,21] The APACHE II is based on recording 12 physiologic variables, age, and chronic health status; MODS is derived from parameters reflecting the function of six organ systems (respiratory, renal, cardiovascular, hepatic, hematological, and central nervous system). The patient outcomes were assessed by documenting mortality and number of days of ICU stay. Significant multiorgan failure was defined as a MODS score >5. [22,23]
Statistical analysis
The Pearson correlation coefficient was used to evaluate correlations between platelet counts and other numeric variables and between MODS and APACHE scores. Nonparametric tests (Kruskal-Wallis and Mann-Whitney) were used to investigate differences between groups of patients stratified by their platelet counts, between the mortality and recovered patient groups, and between patient groups stratified by diagnosis. Logistic and multiple regression analyses were employed to test for the influence of 14 selected clinical and laboratory variables on the admission outcomes (mortality and duration of ICU stay). Receiver operating characteristic (ROC) curves were constructed from the day 1 platelet counts, MODS, and APACHE scores of patients and their survival/death outcome data.
Clinical and laboratory features
The study population included 136 admissions (124 patients; 52 females and 72 males). Table 1 shows the diagnostic categories. The three most common sickle complications requiring ICU admission were severe pain crisis, ACS, and infection. Together, these comprised 82% of all cases. Selected laboratory and clinical data are shown in Table 2. Common findings were mild/moderate anemia and leukocytosis, elevated serum bilirubin, and lactate dehydrogenase (LDH). The routine coagulation test profile was characterized by a raised prothrombin time and international normalized ratio in the presence of a low-normal activated partial thromboplastin time. Highly significant differences were observed between the thrombocytopenia group and the groups of patients with normal or increased platelet counts [Table 2]. Thrombocytopenic patients showed significantly lower Hb, leukocytes (white blood cell [WBC]), and reticulocytes; higher bilirubin, alanine aminotransferase (ALT), APACHE, and MODS scores. Significant differences between patients with normal platelet counts and thrombocytosis were also noted in relation to several laboratory variables (WBC, ALT, and LDH), but the disease severity scores were not significantly different. Thrombocytopenia was present within all diagnostic categories including VOC with severe pain (n = 9), ACS (n = 8), and sepsis (n = 5). Two patients with hyper-hemolysis and both patients in aplastic crisis also presented with thrombocytopenia. On the other hand, patients with thrombocytosis were admitted with severe pain (n = 8) or inflammation-associated etiology (n = 3). However, platelet count variations between the eight diagnostic categories did not reach statistical significance. Specific mention of hydroxyurea (HU) treatment was available in 6/59 patients. None of the patients on HU were thrombocytopenic (median platelets 269 × 10⁹/L). Spleen size was recorded in 25 patients; platelet counts in 8 patients with splenomegaly were significantly reduced versus 17 with noted "no splenomegaly" (Mann-Whitney, P < 0.05).
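The group comparisons and correlations listed under "Statistical analysis" correspond to standard library routines. The following is a minimal sketch with made-up platelet counts and MODS scores, included only to show the calls, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical day-1 platelet counts (x 10^9/L); not the study data.
platelets_survivors = rng.normal(320, 120, size=100).clip(20, 850)
platelets_nonsurvivors = rng.normal(170, 90, size=17).clip(20, 850)
mods_scores = rng.integers(0, 13, size=100)  # hypothetical MODS for the survivors

# Mann-Whitney U test: survivors vs. nonsurvivors
u_stat, p_mw = stats.mannwhitneyu(platelets_survivors, platelets_nonsurvivors,
                                  alternative="two-sided")

# Pearson correlation: platelet count vs. MODS score
r, p_corr = stats.pearsonr(platelets_survivors, mods_scores)

print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_mw:.3f}")
print(f"Pearson r = {r:.2f}, p = {p_corr:.3f}")
```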
Patient outcomes
There were 17 nonsurvivors (12.5% of the study population). These patients were admitted with sepsis/infection (n = 9), ACS (n = 4), severe pain (n = 2), aplastic crisis (n = 1), and pulmonary hypertension with heart failure (n = 1). The platelet count [Table 1] was significantly lower in the mortality group compared to patients who survived (P = 0.002, Mann-Whitney test). The distribution of platelet counts in these two groups of patients is shown in Figure 2. A lower reticulocyte count (P < 0.01), higher D-dimer (P = 0.02), and higher bilirubin, creatinine, LDH, APACHE, and MODS scores (P < 0.001 for each) also characterized the group of nonsurvivors. Durations of patients' stay in the ICU [Table 2] correlated moderately with MODS scores (r = 0.36, P < 0.001) and weakly with day 1 as well as nadir platelet counts and APACHE scores. Multiple regression analysis revealed that GCS (P < 0.001) and arterial oxygen saturation (P < 0.01) were independent predictors of the duration of stay. Results of ROC curve analyses of the platelet count, MODS, and APACHE scores as predictors of survival are shown in Table 3 and Figure 3. Importantly, a platelet count cutoff of 175 × 10⁹/L showed high specificity and positive predictive value (PPV) for survival, comparable to both MODS and APACHE scores, but weak sensitivity and negative predictive value [Table 3].
Discussion
A CBC is probably the most commonly done laboratory investigation in sickle cell patients in the hospital. However, there is little information about the frequencies of platelet-count abnormalities, their clinical correlates in different types of sickle cell crisis, and the prognostic significance of platelet counts in these severely sick patients in the ICU setting. These were the primary reasons for doing this study.
Platelet counts correlate with prognostic scores, clinical course, and outcome
We found that in the ICU, abnormal platelet counts are relatively common in sickle cell patients suffering from different types of complications. Unlike platelet counts reported in the steady state, thrombocytopenia was more frequent than thrombocytosis. Platelet numbers correlated significantly with laboratory parameters indicative of organ dysfunction and with prognostic scores. In published studies limited to ACS cases, a platelet count <200 × 10⁹/L was found to be an independent predictor of respiratory failure and neurologic complications; thrombocytopenia preceded a rapidly progressive course and was its sole predictor. [18,24] Similarly, in VOC, it was reported that thrombocytopenia may be associated with markedly elevated LDH and a severe course. [25] In contrast to platelet counts in crisis, thrombocytosis is a common observation in steady-state SCD. [3-6] The prognostic implication of elevated baseline platelet counts is debatable, with no conclusive evidence of associations with disease severity or complications. [4] The literature is silent on the question of thrombocytosis in sickle cell crises and its relevance to outcome. Findings in our study demonstrate that higher platelet counts during crises are linked to lower disease severity scores and predict a higher chance of survival.
Receiver operating characteristic curve analyses
The APACHE II score performed best as a predictor of survival with a cutoff score of 17. The calculation of APACHE scores requires multiple data inputs, whereas the platelet count is a readily available indicator with comparable specificity and PPV for predicting survival, although its sensitivity is poor.
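The ROC analysis and the performance of the 175 × 10⁹/L cutoff can be reproduced on any dataset of day 1 platelet counts and survival outcomes. The sketch below uses a synthetic cohort with the same group sizes as this study (119 survivors, 17 nonsurvivors), so the printed numbers are illustrative only.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical cohort: 1 = survived, 0 = died (not the study data).
survived = np.r_[np.ones(119), np.zeros(17)].astype(int)
platelets = np.where(survived == 1,
                     rng.normal(320, 120, size=136),
                     rng.normal(170, 90, size=136)).clip(19, 838)

# Area under the ROC curve for "higher platelets predict survival".
print("AUC:", round(roc_auc_score(survived, platelets), 2))

# Classification metrics at the 175 x 10^9/L cutoff discussed in the text.
predicted_survival = platelets > 175
tp = np.sum(predicted_survival & (survived == 1))
fp = np.sum(predicted_survival & (survived == 0))
fn = np.sum(~predicted_survival & (survived == 1))
tn = np.sum(~predicted_survival & (survived == 0))

print("sensitivity:", round(tp / (tp + fn), 2))
print("specificity:", round(tn / (tn + fp), 2))
print("PPV:", round(tp / (tp + fp), 2))
print("NPV:", round(tn / (tn + fn), 2))
```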
Mechanisms contributing to platelet-count alterations
The pathogenesis of thrombocytopenia in SCD crises is multifactorial. In our study, direct correlations of platelet counts with WBC and reticulocytes suggest that compromised marrow function is a contributory factor. A production defect could result from vaso-occlusive marrow infarction or sepsis. Second, sickle cell-endothelial interaction results in coagulation factor activation and a state of compensated disseminated intravascular coagulation with platelet consumption. This situation is aggravated in crisis. [1,26-29] Thrombocytopenia could be a consequence of HU therapy, although in this study all patients who were taking this medication had normal platelet counts. Finally, genetic factors may be linked to the relatively high frequency of thrombocytopenia in our study population. Common genetic associations of SCD in our geographic region, such as the Arab-Indian haplotype, alpha thalassemia, and high HbF levels, are linked to more frequent splenomegaly and hypersplenism compared to African patients. [13,30-32] Autopsy examination of tissues and cytology of bronchial fluid have shown that marrow/fat embolism is a common etiologic factor in SCD crisis and could progress to multi-organ failure. [17,18,33-36] In ACS, pulmonary vessels may be occluded by emboli or by platelet thrombi, leading to thrombocytopenia. [33,37] Occasionally, the manifestations of marrow/fat emboli may resemble those seen in thrombotic thrombocytopenic purpura (TTP). Very high LDH, a leukoerythroblastic blood picture, thrombocytopenia, schistocytosis, and multiorgan failure are typically seen. [34,36,38] We have previously reported a subgroup of SCD patients who presented with these features and recovered after plasmapheresis. [29] Only one of the patients in the present study had TTP-like features. This condition may be under-diagnosed in the absence of a rigorous peripheral smear examination in all cases presenting with thrombocytopenia. SCD is also a chronic inflammatory condition in which raised levels of cytokines such as IL-1β and IL-6 may produce a reactive thrombocytosis. [10] Functional asplenia may also be a contributory factor. [10,39] Finally, since platelet counts may rise during recovery from crisis, [10] our patients who had thrombocytosis at admission were possibly in early recovery though symptomatic. This would also explain the good outcomes in this group.
Limitations of the study
A major limitation of this study is that it is retrospective. Serial measurements of platelet counts, noting the magnitude of change from their steady-state values, would provide further insight into the dynamics of platelet-count alterations and their implications. Our observations require validation in a larger group of patients.
Conclusions
This study presents a cross-section of sickle cell patients in severe crisis in an intensive care setting and demonstrates the value of the platelet count as a marker of disease severity and predictor of outcome in these critically sick patients. Platelet numbers correlate with clinical and laboratory indicators of disease severity. Thrombocytopenia is significantly more common in patients with multiorgan failure and in nonsurvivors. A cutoff platelet count of 175 × 10⁹/L predicts survival with high specificity and PPV.
MODS and APACHE II scores perform better owing to their higher sensitivity, but the value of the platelet count, in addition to its diagnostic implications, lies in its simplicity as a prognosticator. The pathogenesis and clinical expression of SCD crisis are complex and carry the potential for rapid progression to a fatal outcome. Simple scrutiny of the CBC for the presence of thrombocytopenia readily identifies a subgroup of patients with poorer prognosis who would benefit from more stringent management protocols.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
Tuning the Bioactive Properties of Dunaliella salina Water Extracts by Ultrasound-Assisted Extraction
(1) Background: Microalgae are a promising feedstock for obtaining valuable bioactive compounds. To facilitate the release of these important biomolecules from microalgae, effective cell disruption is usually necessary, where the use of ultrasound has achieved considerable popularity as an alternative to conventional methods. (2) Methods: This paper aims to evaluate the use of ultrasound technology in a water medium as a green technology to recover high added-value compounds from Dunaliella salina and improve its sensory profile towards a high level of incorporation into novel food products. (3) Results: Among the variables, the solid concentration and extraction time have the most significant impact on the process. For the extraction of protein or fat, the most influential factor is the extraction time. Total polyphenols are only significantly affected by the extraction time. The antioxidant capacity is strongly affected by the solid to liquid ratio and, to a small extent, by the extraction time. Ultrasound-assisted extraction improves the overall odor/aroma of D. salina with good acceptability by the panelists. (4) Conclusions: The application of ultrasound-assisted extraction demonstrates a positive overall effect on enhancing the sensory profile, particularly the odor of microalgal biomass, while the bioactive properties are preserved. Notably, the intense sea/fish odors are reduced, while earthy and citrus notes become more prominent, resulting in an improved overall sensory profile score. This is the first time, to our knowledge, that this innovative, green, and efficient technology has been used to upgrade the aroma profile of microalgae.
Introduction
In the ever-evolving world of consumer preferences, a notable shift is taking place towards products that embody natural goodness, promote well-being, and boast transparent labels, a phenomenon often referred to as the "Food Trends of 2023" (https://www.innovamarketinsights.com, accessed on 25 August 2023). This ongoing transition is not only understandable but also inspiring, as individuals increasingly seek out items that align with their health-conscious and eco-friendly lifestyles. Among the myriad options that cater to these evolving sensibilities, microalgae have emerged as a frontrunner, carving out an essential niche in the realm of food products [1]. What distinguishes microalgae from the crowd is their multifaceted role in direct food consumption, which stems from their innate health benefits, natural pigments, and their capacity to serve as an exceptional vegan source of protein. This trifecta of attributes has propelled microalgae into the spotlight, capturing the attention of health-savvy individuals and eco-conscious consumers alike [2]. As Dunaliella salina is already used in feed products and is under evaluation by the FDA for food consumption, it makes a lot of sense to perform thorough studies to understand its nutritional, chemical, and biochemical composition and its sensory profile, and how we can improve its quality for both feed and future food applications. This work focuses on the use of ultrasound-assisted extraction (UAE) to improve the sensory qualities of Dunaliella salina biomass using water as a medium, and to obtain nutritionally relevant aqueous extracts enriched with bioactive compounds for the further development of novel and healthy food products.
Results and Discussion
2.1. Impact of Ultrasound-Assisted Extraction Parameters
2.1.1. On the Yield
To understand the advantages of using UAE for the recovery of added-value extracts, a first batch of D. salina powder was extracted using a Soxhlet extractor for 18 h and a second batch was extracted by maceration over 24 h, both in water. The corresponding extracts were obtained with yields of 10.4% and 3.1%, respectively. In this study, solid to solvent ratios of 1:10 and 1:5 (or 10% and 20% of solids) were used, maintaining the solvent (water) volume at 500 mL. The minimum and maximum yields obtained by UAE were 14.7% (pulsed mode, 30 min, r.t., 1:5 sample to solvent ratio) and 41.4% (continuous mode, 30 min, r.t., 1:10 sample to solvent ratio), respectively (Table 1). Compared to the conventional methods, UAE gave higher extract yields (10% with Soxhlet and 3% with maceration vs. 15–41% with UAE), supporting the idea that UAE is an efficient and sustainable technique to recover added-value compounds in a shorter time (10–30 min using UAE and 18–24 h using the conventional methods) and with a reduced use of the solvent.
Table 1. Yields of recovery of the bioactive compounds from Dunaliella salina microalgal biomass using ultrasound-assisted extraction conditions, Soxhlet extraction, and maceration, using water as medium.
The most significant factors affecting the yield in UAE were the solid to solvent ratio and the duration of the extraction, as can be seen in Table 1. The highest extraction yields were obtained when using a 1:10 solid to solvent ratio (almost 2-fold higher), in either continuous or pulse modes. To analyze the relationship between the yield of extraction and the solid to solvent ratio and extraction time, we can compare the values within each column for the continuous and pulse modes separately. For the continuous mode, at a solid to solvent ratio of 1:10, the yield of extraction increased with time (from 35.8% to 41.4%). At a solid to solvent ratio of 1:5, the yield of extraction decreased to some extent with time from 10 to 20 min (from 23.5% to 15.8%). At 30 min, the yield somewhat increased (17.5%). In the pulse mode, at a solid to solvent ratio of 1:10, the yield of extraction generally increased with time. At 10 min, the yield of extraction was 22.8%, which increased to 33.4% at 20 min and slightly further to 35.3% at 30 min. However, at a solid to solvent ratio of 1:5, the yield of extraction remained relatively constant with time (around 14.7%). In summary, for the continuous mode, the yield tended to increase with time at a solid to solvent ratio of 1:10, while it decreased with time at a solid to solvent ratio of 1:5. In the pulse mode, the yield also increased with time at a solid to solvent ratio of 1:10, while it remained relatively constant at a solid to solvent ratio of 1:5. In general, microalgae and plant cells are disrupted more by longer extraction times and, consequently, the release and diffusion of the bioactive compounds are enhanced [8,9]. The solid to solvent ratio is also a very important parameter for the extraction process. According to Purohit and Gogate [10], the use of a lower solid to solvent ratio than the optimum value leads to an increase in the solvent consumption, and higher solid to solvent ratios than the optimum value will result in an incomplete extraction.
Using a solid to solvent ratio of 1:10, the yield of the extraction was higher than when using a 1:5 solid to solvent ratio, and this could be explained by the excessive amount of microalgal material (1:5 ratio), which increased the viscosity and thus inhibited the diffusion of compounds through the extraction medium [11]. In addition, it contributed to ultrasound wave attenuation, leaving only the restricted zone located near the ultrasound probe as the active part [12]. The use of pulsed UAE also negatively influenced the extraction yield, using either 1:10 or 1:5 solid to solvent ratios. In fact, in some cases, it was possible to observe a lower extraction yield when using pulsed ultrasound compared to the continuous mode [13,14]. Several factors can contribute to this outcome, such as an uneven distribution and dissipation of energy within the extraction medium, leading to a lower overall yield [15]. In addition, continuous cavitation effects, such as the formation of small bubbles or voids within the liquid medium, enhance the extraction process [16]. Mass transfer limitations can also explain why the extraction yields were higher when using the continuous sonication mode, since it provided continuous agitation and disruption of the extraction medium, facilitating mass transfer and improving the extraction efficiency [14,15].
Protein Content
Figure 1 shows the protein content (%) in the biomass and extract fractions obtained through continuous and pulsed ultrasound-assisted extractions at different time intervals and solid to solvent ratios. When comparing the different UAE conditions, the protein content was almost 3-fold higher in the biomass fractions than in the extracts (for example, 28.3% and 8.7%, respectively, when using a 1:10 solid to solvent ratio after 30 min in the pulsed sonication mode). Additionally, when using a solid to solvent ratio of 1:10, the protein content in the biomass, as well as in the extract fractions, was slightly higher than with the 1:5 ratio, which was the opposite of what was expected. The extraction process usually becomes more energy efficient at a higher algal biomass concentration because the wave contact with solid matter, such as microalgae cells, and with all released components is more effective. However, in some cases, when an excessive amount of algal biomass is present, the diffusion of compounds towards the extraction medium becomes difficult due to an increase in viscosity, as previously mentioned [11]. Based on these data, it is difficult to draw definitive conclusions about the extraction efficiency or the impact of continuous versus pulsed ultrasound-assisted extraction and time of extraction on the protein yield. It appears that the protein content varies slightly between the different extraction conditions; however, the differences are not substantial. When compared to the extractions with conventional techniques, the extracts obtained by UAE showed protein contents similar to the ones obtained from the Soxhlet extraction (around 8.5% dw).
Fat Content
As for the other intracellular constituents, the different ultrasound extraction parameters influenced, to different extents, the lipid extraction yields. When analyzing Figure 2, it is evident that the biomass fractions have a higher fat content (varying between 11.2% and 17.5%) than the correspondent extract fractions (0.5% to 6.8%), which means that the selected conditions are not efficient at removing lipids from within the microalgae cell. In fact, the use of polar solvents such as water is not effective for the extraction of lipids. Ranjan and co-workers [16] claimed that solvent selectivity is usually the most effective parameter concerning the degree of lipid extraction. In a study conducted by Mecozzi and co-workers [17], it was confirmed that sonication with diethyl ether resulted in higher lipid extraction yields from marine mucilage compared to sonication with methanol. Another study by Wiyarno and co-workers [18] focused on the UAE of algal lipids from Nannochloropsis sp., highlighting the influence of different solvents on the efficiency of ultrasonic extraction. It was observed that when ethanol was used, higher extraction temperatures and longer extraction times were required compared to the use of n-hexane, suggesting that the selection of the solvent is an important factor for optimizing the UAE of algal lipids. The extraction time is also an important factor for lipid extraction [19]. Generally, by increasing the sonication treatment time, cell disruption occurs, as well as an increase in the amount of released intracellular constituents [20]. However, after an optimal sonication time, usually no significant differences are noticed, suggesting that a short sonication time is enough to obtain a suitable yield [21]. When analyzing Figure 2, it is quite evident that the lipid extraction yields increase with increasing sonication time (almost 4- and 6-fold when using solid to solvent ratios of 1:10 and 1:5, respectively).
It is also worth noticing that longer extraction times lead to increases in the temperature and vapor pressure, promoting the formation of many cavitation bubbles. These collapse with less intensity due to the reduced pressure difference between the inside and outside of the bubbles [22], thus reducing the intensity of the mass transfer enhancement. Considering the solid to solvent ratio, the two ratios tested (1:10 and 1:5) resulted in different lipid recovery rates: the highest lipid recovery rate was observed when the solid to solvent ratio was 1:5 w/v. A more efficient extraction process is expected at a higher algal biomass concentration because the wave contact with solid matter, such as microalgae cells, and with all released components is more effective [16]. When compared to the results of the extracts obtained from the Soxhlet extraction, it is worth noticing that a lower fat content is obtained.
Ash
The extraction of minerals from microalgae using ultrasound has gained attention in recent years. Microalgae are extremely rich sources of minerals, namely calcium, magnesium, and iron, and trace elements such as selenium and zinc. The cavitation and mechanical forces generated by ultrasound promote the rupture of cell membranes, aiding the liberation of minerals from the microalgal biomass [12]. When analyzing Figure 3, it is evident that the extract fractions have a higher mineral content (varying between 33.2% and 57.5%) than the correspondent biomass fractions (7.3% to 9.1%), which means that, under the selected conditions, the minerals are efficiently removed from the cell towards the liquid phase. The extraction time seems to be an important factor for mineral recovery [19]. Usually, by increasing the sonication treatment time, we increase the amount of released intracellular constituents [20]. In fact, there is an increase in the minerals released into the extracts over time (with the exception of the following extraction conditions: the pulse sonication mode for 20 min and a solid to solvent ratio of 1:10). Overall, the solid to solvent ratio and sonication mode had a significant effect on the mineral recovery rate, especially in the pulse sonication mode when the solid to liquid ratio increased (around 1.5-fold). Additionally, the extracts obtained under the different UAE conditions showed a higher ash content when compared with the ones obtained using conventional techniques.
Carbohydrates
The effects of the sonication time, solid to solvent ratio, and sonication mode were analyzed. Figure 4 shows that the total carbohydrate content increases with the extension of the extraction time, except after 20 min in the pulsed sonication mode and when the solid to liquid ratio is 1:10.
This indicates that the extraction time can improve the efficiency of carbohydrate extraction, which is probably due to the fact that algal cells can break more effectively under these conditions [21-23]. The solid to solvent ratio also affected the extraction efficiency: the carbohydrate content was higher in the extracts than in the biomass, which was more evident when the solid to solvent ratio was 1:5, in either the continuous or pulse sonication modes. The same trend was reported by Zhao and co-workers [23]; however, it was the opposite of what we expected, because an excessive amount of microalgal material (1:5 ratio) usually inhibits the diffusion of compounds through a more viscous extraction medium [11]. The sonication mode alone had no influence on the carbohydrate extraction yield. In order to fully extract the carbohydrates, a combination of different UAE settings could be tested, such as a low power input combined with longer extraction times [23], or UAE could be combined with other disruption methods, such as ozonation, microwave, homogenization, or enzymatic lysis, to facilitate the release of the target compounds [15]. UAE was efficient in the recovery of carbohydrates when compared with maceration; however, it was quite similar to Soxhlet extraction.
2.1.6. Antioxidant Potential
D. salina can produce several compounds, including pigments such as α-carotene, lutein, and zeaxanthin [24]; polyphenols, such as phenolic acids, flavonoids, isoflavonoids, stilbenes, lignans, and phenolic polymers [25]; and phytosterols [26] with remarkable antioxidant properties. Due to their production of valuable bioactive ingredients, microalgae such as D. salina also represent promising opportunities in the field of functional foods and as food additives, since D. salina is listed as having no known toxins and as GRAS, and EFSA concluded that the use of mixed β-carotenes obtained from algae as a food color is not of concern in relation to safety [27]. The antioxidant potential measured by DPPH (Figure 5a) ranged from 137.1 to 223.3 µmol Trolox/100 g dw for the biomass fractions and from 385.0 to 414.7 µmol Trolox/100 g dw for the extracts, i.e., almost 2-fold higher in the extracts. The highest value observed for the extracts (414.7 µmol Trolox/100 g dw) was obtained under the following extraction conditions: 180 W, 100% amplitude, r.t. (24 °C), and a solid to solvent ratio of 1:10, in the continuous sonication mode for 10 min (C 1:10 10'). The treated biomass with the greatest antioxidant potential measured by the DPPH assay (223.3 µmol Trolox/100 g dw) was obtained under the same conditions. Longer extraction times appeared to have a negative effect on the antioxidant potential of the extracts, which could be related to an increase in the temperature of the reaction medium, which could lead to the degradation of thermosensitive compounds, such as antioxidants. However, we may conclude that, under the tested conditions, the extraction does not occur to its full extent and does not cause a complete cell wall disruption process, since there is a considerable amount of antioxidants in the biomass that is not released into the extracts. A possible solution might be extending the extraction time while maintaining a low temperature to avoid the degradation of thermosensitive compounds [15]. The FRAP values (Figure 5b) ranged from 62.6 to 87.9 mmol Trolox/100 g dw for the extract fractions, and were again almost 8-fold lower for the biomass (9.5–12.2 mmol Trolox/100 g dw). The highest value observed for the extracts (87.9 mmol Trolox/100 g dw) was obtained under the following extraction conditions: 180 W, 100% amplitude, r.t. (24 °C), and a solid to solvent ratio of 1:5, in the pulse sonication mode for 20 min (D 1:5 20'). The treated biomasses with the greatest antioxidant potential measured by the FRAP assay (11.9–12.2 mmol Trolox/100 g dw) were obtained using 180 W, 100% amplitude, r.t. (24 °C), and a solid to solvent ratio of 1:10, in the pulse sonication mode for 20–30 min. The observed results allow us to conclude that the UAE conditions allow the efficient extraction of compounds with antioxidant potential, as determined by the FRAP assay, towards the liquid phase, resulting in extracts with high antioxidant potential values when compared to the biomass. Except for the UAE condition where a solid to solvent ratio of 1:5 in the pulse sonication mode was used, the antioxidant potential of the extracts decreased with the extraction time.
Generally, higher extraction yields are obtained by prolonging the extraction time, because microalgae cells are disrupted to a greater extent with longer extraction times and, consequently, the release and diffusion of the bioactives are enhanced. However, when the extraction time is longer than the optimum time, the antioxidants might be degraded due to heat generation, resulting in the chemical breakdown of bioactive compounds and thereby decreasing the extraction efficiency [15]. The antioxidant potential of the extracts obtained from UAE was considerably higher when compared to the extracts obtained either by maceration (around 10 mg Trolox/100 g dw) or Soxhlet extraction (14 mg Trolox/100 g dw).
Total Phenolic Content
A higher solid to solvent ratio usually facilitates improved solvent penetration into the microalgae cells, leading to the enhanced mass transfer of polyphenols and, consequently, an increased extraction yield. In fact, when the solid to solvent ratio was higher (1:5), the TPC of the extracts was almost 2.5-fold higher than when the solid to solvent ratio was 1:10 (ranging from 2.6 to 6.8 mg GA/100 g dw) (Figure 6). However, in some cases, excessively high amounts of plant material at smaller ratios can elevate solvent viscosity, hindering the diffusion of polyphenols through the extraction medium. This may explain why, when using the continuous sonication mode with a solid to solvent ratio of 1:10, a decrease in the TPC was observed after 30 min (almost 2.6-fold), and why, when using the pulse sonication mode with a solid to solvent ratio of 1:10 or the continuous sonication mode with a solid to solvent ratio of 1:5, there was almost no variability in the TPC after 20 min. Moreover, it is worth noting that extending the extraction time may result in the oxidation of bioactive substances, potentially reducing the overall yield of phenolic compounds, which may explain what occurred after 30 min of extraction with a solid to solvent ratio of 1:10 in the continuous sonication mode (the TPC decreased from 6.8 mg GA/100 g dw in C 1:10 20' to 2.6 mg GA/100 g dw in C 1:10 30'). When comparing the continuous and pulse sonication modes, an increase in TPC was observed with the pulse mode, to a greater extent when using a solid to solvent ratio of 1:5. Christou et al. (2021) also reported an increase in the recovery of polyphenols by employing pulsed UAE [28]. In addition, it is important to note that the TPC is usually closely related to the antioxidant potential, as reported by Ghafoor and co-workers [29]. However, greater differences between the different UAE conditions were observed in the TPC values than in the antioxidant potential. Usually, since both AAT and TPC are related to antioxidant activity, a substance with a higher total phenolic content is likely to exhibit stronger antioxidant activity, and in many cases a higher TPC value is indicative of a higher antioxidant capacity. However, it is essential to remember that while there is often a positive correlation between AAT and TPC, the relationship may not always be perfect. Some factors, such as the presence of other bioactive compounds or the specific chemical structure of the phenolic compounds, can influence the overall antioxidant activity of a substance, even if its TPC is high [30,31]. As for the antioxidant potential, the TPC of the extracts obtained from UAE was considerably higher when compared to the extracts obtained either from maceration (around 5 mg GAE/100 g dw) or Soxhlet extraction (around 13 mg GAE/100 g dw).
Sensory Analysis
The sensory analysis assays were conducted with raw D. salina and the biomass and extract fractions obtained after UAE. The panelists identified specific odors in the samples, ranging from floral, citrus, sea/fish, and earthy to none of the above. It was observed that the raw D. salina microalgae had a very intense sea/fishy odor (identified by 45% of the panelists) and earthy notes (identified by 30%), being less appreciated (Figure 7). After UAE, the biomass revealed a decrease in the intensity of the sea/fishy odor, which was detected by only 10% to 25% of the panelists. Floral notes became more detectable (around 10% in raw D. salina and between 12% and 25% in the extracts). Citrus notes became detectable in the biomass (between 8% and 30% of the panelists) and were not detected either in the raw microalgae or in the extracts. In the extracts, the sea/fish odor was much more intense than in the raw microalgae (30–65% of the panelists identified this specific and intense odor). Citrus notes were only detected in some extracts by a small number of panelists (10–20%). Earthy notes were detected in both the biomass and extract fractions, being much more intense than in the raw microalgae. Figure 8 presents the average answers provided by the panel for the principal odors (citrus, sea/fish, and earthy) detected in the raw D. salina and the corresponding biomass and extract fractions obtained after UAE treatment. It is evident that the biomass is abundant in earthy notes, with a particular biomass fraction showing citrus notes (obtained under the continuous sonication mode for 20 min and a solid to solvent ratio of 1:5, C 1:5 20'). Extract fractions were enriched with sea/fish notes and, to a lesser extent, some earthy ones. Microalgae have some major drawbacks when we think of their sensory profile (color, odor, and aroma), especially if we want to incorporate them into feed and food products. There are several methods that can be used to improve these characteristics; however, they involve the use of solvents (chemical processes that require high amounts of solvents, with a loss of bioactive properties) or enzymatic processes (expensive and not sustainable at the industrial scale). The use of a simple, practical, inexpensive, and green technology such as ultrasound therefore represents a considerable contribution to this field of research. To our knowledge, no previous studies using this technology for the improvement of the sensory profile of microalgae have been found.
Samples and Chemicals
D. salina was produced by an autotrophic process by Pagarete Microalgae Solutions Soc. Unipessoal (Póvoa de Santa Iria, Portugal). The microalgal biomass was spray-dried by the same company and kept under −8 °C until further analysis.
Soxhlet Extraction
D.
D. salina powder (10 g) was extracted with 300 mL of water and refluxed in a Soxhlet apparatus for 18 h. The obtained extract solution was cooled to room temperature, centrifuged at 8000 rpm (15,740× g) for 20 min using a bench cooling centrifuge (Z 383 K, Hermle Labortechnik GmbH, Wehingen, Germany), and filtered through Whatman no. 1 filter paper (Whatman™, Maidstone, UK) in order to remove the insoluble particles. The water from the resulting solution was removed using a lyophilizer (VaCo 2-E, Zirbus technology GmbH, Harz, Germany). Dried extracts were stored under vacuum at −20 °C until further use.

Maceration
Briefly, 5 g of D. salina powder was mixed with 150 mL of water and agitated at a moderate speed at room temperature for 24 h using a magnetic stirrer. Upon completion of the extraction procedure, stirring was stopped; the extract was centrifuged at 8000 rpm (15,740× g) for 20 min using a bench cooling centrifuge (Z 383 K, Hermle Labortechnik GmbH, Wehingen, Germany) and filtered through Whatman no. 1 filter paper (Whatman™, Maidstone, UK) in order to remove the insoluble particles. The water from the resulting solution was removed using a lyophilizer (VaCo 2-E, Zirbus technology GmbH, Harz, Germany) and the samples were stored at −20 °C for the subsequent analyses.

Ultrasound-Assisted Extraction
The extraction experiments were conducted using an Ultrasonic Processor UP200Ht (Hielscher Ultrasonics, Teltow, Germany), measuring 300 mm × 190 mm × 90 mm, operated at 26 kHz, with a rated power of 200 W and equipped with a sonotrode S26 d1 probe. For the operation parameters, the intensity was set at 100% and the solvent was water (100%); sample to solvent ratios (w/v) of 1:10 and 1:5, extraction times of 10, 20, and 30 min, and sonication modes (continuous, 0 s:0 s, or pulsed, 10 s:10 s) were tested. Water was chosen because it is a sustainable, non-toxic solvent, and extraction occurred at room temperature (r.t., 24 °C). To avoid overheating and the consequent degradation of thermo-sensitive compounds, the experiments were performed in an ice bath. The extracts were centrifuged at 8000 rpm (15,740× g) for 20 min using a bench cooling centrifuge (Z 383 K, Hermle Labortechnik GmbH, Wehingen, Germany) and filtered through Whatman no. 1 filter paper (Whatman™, Maidstone, UK) in order to remove the insoluble particles. All the extractions were performed twice and the yield of extraction (extractable components), expressed on a dry weight basis, was calculated from the following equation: Yield (g/100 g) = (w1 × 100)/w2, where w1 is the weight of the extract residue obtained after solvent removal and w2 is the weight of the biomass before extraction. Before each extraction, all samples were hand-homogenized, and the tip probe was then immersed to half of the total solvent height (4.5 cm). All extractions were performed at room temperature; however, samples were placed in ice to avoid overheating (and the consequent degradation of bioactives). Extractions were performed in water; the microalgal suspensions were then centrifuged (1118× g for 20 min) and the extracts were collected and stored in darkness at −4 °C for further analysis. The remaining pellet was dried at 60 °C in an oven until reaching a constant weight. Different duty cycles (expressed as %) were applied, i.e., 100%, for total extraction times of 10, 20, and 30 min; the total cycle time comprised a pulse duration and a pulse interval. The amplitude (expressed as %) was also set at 100%. The amplitude percentage refers to the percentage of the maximum power used by the equipment.
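As a concrete illustration of the yield equation above, the following minimal sketch computes the extraction yield on a dry weight basis; the weights used are hypothetical example values, not measured data.

```python
def extraction_yield(extract_residue_g: float, biomass_g: float) -> float:
    """Yield (g/100 g dw) = (w1 * 100) / w2, where w1 is the weight of the
    extract residue after solvent removal and w2 the biomass weight before
    extraction."""
    return extract_residue_g * 100.0 / biomass_g

# Hypothetical example: 0.8 g of dried extract recovered from 5 g of biomass.
print(f"Yield: {extraction_yield(0.8, 5.0):.1f} g/100 g dw")  # Yield: 16.0 g/100 g dw
```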
Nutritional Composition
The general nutritional composition included the determination of moisture, ash, minerals, protein, total fat, and total carbohydrates. The determination of each parameter was performed in triplicate and the data are presented as mean ± SD. The moisture content of the samples was measured gravimetrically with an automatic moisture analyzer PMB 202 (Adam Equipment, Oxford, NJ, USA) at 130 °C to a constant weight. The total ash content was determined by incineration at 500 °C in a muffle furnace [32]. Fat content was determined following the Portuguese standard method NP4168 [33]. Protein content (N × 6.25) was estimated by the DUMAS combustion method [34], using a Vario EL elemental analyzer (Elementar, Langenselbold, Germany). The carbohydrate content was calculated by difference from the protein, fat, ash, and moisture contents.

Antioxidant Capacity Evaluation
The reducing power of the D. salina samples (raw material, water extracts, and biomass fractions) was determined using the ferric ion-reducing antioxidant power (FRAP) assay [35]. The FRAP reagent was prepared by mixing 10 mmol/L 2,4,6-tripyridyl-s-triazine with 40 mmol/L HCl, 0.02 mol/L FeCl3, and acetate buffer, pH 3.6, in a ratio of 1:1:10. The D. salina samples (10 µL) were added to 290 µL of the FRAP reagent and the absorbance was measured at 593 nm after 6 min. Three replicates were performed for each sample, and the reducing power was reported as milligrams of Trolox equivalents per gram of dry weight (dw), corresponding to the mean value of the triplicate tests.

DPPH
The scavenging effect of raw D. salina and of the corresponding water extracts and biomass fractions was determined using the DPPH (2,2-diphenyl-1-picryl-hydrazyl-hydrate) methodology [36]. Aliquots of 10 µL of Trolox or D. salina samples were added to 100 µL (90 µmol/L) of the DPPH solution in methanol, and the mixture was diluted with 190 µL of methanol. In the control, the extract was substituted with the same volume of solvent, and in the blank probe, only methanol (290 µL) and the D. salina sample (10 µL) were mixed. After 30 min, the absorbance was measured at 515 nm. Three replicates were performed for each sample, and the antioxidant capacity was reported as milligrams of Trolox equivalents per gram of dw, corresponding to the mean value of the triplicate tests.

Total Phenolic Content
The total phenolic contents (TPCs) of the raw biomass and water extracts were evaluated using the method reported in [37]. Aliquots of raw D. salina, biomass, and extracts or gallic acid (30 µL) were added to 150 µL of 0.1 mol/L Folin-Ciocalteu reagent and, after 10 min, mixed with 120 µL of sodium carbonate (7.5%). The mixtures were incubated in the dark at room temperature for 2 h, and the absorbance was then measured at 760 nm. The TPC was reported as milligrams of gallic acid equivalents per gram of dw and corresponded to the mean value of the triplicate tests.
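As an illustration of how assay readings of this kind are typically quantified, the sketch below converts sample absorbances into gallic acid equivalents via a standard calibration curve. The calibration points, sample absorbance, and extract concentration are hypothetical placeholders; the study's actual calibration data are not reproduced here.

```python
import numpy as np

# Hypothetical gallic acid calibration: concentrations (mg/mL) vs. absorbance at 760 nm.
std_conc = np.array([0.00, 0.05, 0.10, 0.20, 0.40])
std_abs = np.array([0.00, 0.12, 0.24, 0.47, 0.95])

# Fit a linear calibration curve: A = slope * c + intercept.
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def tpc_mg_gae_per_g_dw(sample_abs, extract_conc_mg_per_ml, dilution=1.0):
    """Convert a sample absorbance into mg GAE per g of extract (dry weight basis).

    extract_conc_mg_per_ml: concentration of the assayed extract solution.
    """
    gae_mg_per_ml = (sample_abs - intercept) / slope * dilution
    return gae_mg_per_ml / extract_conc_mg_per_ml * 1000.0  # mg GAE / g dw

# Hypothetical sample: absorbance 0.30, extract assayed at 10 mg/mL.
print(f"TPC ≈ {tpc_mg_gae_per_g_dw(0.30, 10.0):.1f} mg GAE/g dw")
```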
Sensory Evaluation
Sensory analysis was conducted in a standardized sensory test room with booths, following the EN ISO 8589:2007 procedure. An untrained panel (n = 30; gender: 17 females, 13 males; age range: 22-47 years old) participated in the hedonic evaluation following the protocol commonly used by LEAF [38,39], in accordance with the ethical standards of the local committee responsible for human experiments and with the code of ethics of the World Medical Association [40]. Samples were randomly distributed, and the panelists were invited to cleanse their palates with apples between samples and to pause at least 10 s between sniffs to partially restore the olfactory receptors. In addition to the control, 12 samples of extracts and 12 samples of biomass were offered to the panel in groups of three, and the evaluations occurred on different days. The panelists (i) judged the level of odor intensity on a 6-point hedonic scale from no odor (0) to very strong odor (5), which was converted into a percentage (%), and (ii) identified the odors as floral, citrus, sea/fish, earthy, or none of the abovementioned.

Statistical Analysis
One-way analysis of variance (ANOVA), Tukey's HSD multiple comparison test, Pearson's correlation coefficients, agglomerative hierarchical clustering (AHC), and principal component analysis (PCA) were applied using Origin Statistical Software for Excel version 2021.4.1 (Addinsoft, New York, NY, USA) integrated with Microsoft Excel 2021 (Microsoft Corp., Redmond, WA, USA). A level of p ≤ 0.05 was considered significant.

Conclusions
Ultrasound was applied as a highly effective, safe, and "green" cell disruption technology in microalgal biorefining. Effective cell disruption requires the careful selection of an appropriate ultrasonic frequency, intensity, and duration. The effects of the solid to solvent ratio, extraction time, and sonication mode (continuous or pulsed) were evaluated. Among the examined variables, the solid to solvent ratio and the extraction time were found to be the most influential parameters, significantly affecting the extraction efficiency of protein and fat. The antioxidant capacity showed the same trend as the phenolic content, increasing with an increase in the solid to solvent ratio. Indeed, the measurement of target product release, coupled with other evaluation techniques, can offer a comprehensive and profound assessment of the extent of cell disruption. Together, these evaluation methods enable researchers to achieve a thorough understanding of the cell disruption's impact on the target products, resulting in informed decisions concerning process optimization and better product yields. UAE also had a positive impact on improving the sensory profile (odor) of the microalgal biomass. While the sea/fish odor became less intense, odors such as earthy and citrus became more intense, providing a better overall sensory profile score. UAE allowed us to improve the sensory profile of microalgae without losing their bioactive properties. To our knowledge, this work is the first to report the results of using UAE to improve the aroma profile of microalgae.

Data Availability Statement: Data supporting the findings of this study are available upon request from the corresponding author.
2023-08-30T15:06:08.022Z
2023-08-27T00:00:00.000
{ "year": 2023, "sha1": "6b750c0d36790dd9f532d90b995fc7f36a32666e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-3397/21/9/472/pdf?version=1693211438", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "5239543a633bffb4a3c9eadc85f25fbfcb02a0ad", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
118384109
pes2o/s2orc
v3-fos-license
Relationship between Superconductivity and Antiferromagnetism in LaFe(As$_{1-x}$P$_{x}$)O Revealed by $^{31}$P-NMR

We performed $^{31}$P-NMR measurements on LaFe(As$_{1-x}$P$_{x}$)O to investigate the relationship between antiferromagnetism and superconductivity. The antiferromagnetic (AFM) ordering temperature $T_{\rm N}$ and the moment $\mu_{\rm ord}$ are continuously suppressed with increasing P content $x$ and disappear at $x = 0.3$, where bulk superconductivity appears. At this superconducting $x = 0.3$, quantum critical AFM fluctuations are observed, indicative of the intimate relationship between superconductivity and low-energy AFM fluctuations associated with the quantum-critical point in LaFe(As$_{1-x}$P$_{x}$)O. The relationship is similar to those observed in other isovalent-substitution systems, e.g., BaFe$_{2}$(As$_{1-x}$P$_{x}$)$_{2}$ and SrFe$_{2}$(As$_{1-x}$P$_{x}$)$_{2}$, with the "122" structure. Moreover, the AFM order reappears with further P substitution ($x>0.4$). The variation of the ground state with respect to the P substitution is considered to be linked to the change in the band character of Fe-3$d$ orbitals around the Fermi level.

The interplay between superconductivity and magnetism is one of the hottest topics in condensed matter physics. It has been believed that superconductivity is mediated by antiferromagnetic (AFM) fluctuations in cuprates and heavy-fermion superconductors. [1][2][3][4] The relationship between superconductivity and AFM fluctuations in the recently discovered iron-based superconductors has also been discussed. [5][6][7][8][9][10][11][12][13][14][15][16] Up to now, it has been reported that superconductivity appears around the AFM quantum critical point in the Ba122 system, indicative of a strong relationship between superconductivity and low-energy AFM fluctuations. 10,[12][13][14][15][16] On the other hand, superconductivity is linked not to the low-energy AFM fluctuations probed by NMR spectroscopy but to the local stripe spin correlation with Q_stripe = (π, 0) (unfolded Brillouin zone) in LaFeAs(O1−xFx). [17][18][19][20] It seems that the roles of AFM fluctuations in superconductivity are different between the "122" and "1111" systems. However, in the "1111" system, most experiments have been performed on electron-doped samples. In addition, a different conclusion, that low-energy AFM fluctuations probed by NMR spectroscopy are related to superconductivity in LaFeAs(O1−xFx), has been reported by other research groups.
21) Therefore, in order to clarify the universal features of the relationship between superconductivity and AFM fluctuations in iron-based superconductors, experimental research on other "1111" systems is necessary. In this study, we performed 31P-NMR measurements on P-substituted LaFe(As1−xPx)O (x = 0.1, 0.2, 0.3, 0.4, and 0.5) to investigate the relationship between superconductivity and antiferromagnetism. Isovalent P substitution does not introduce charge carriers and thus corresponds to applying chemical pressure. According to our NMR results, the AFM ordering temperature TN and the moment μord are continuously suppressed with increasing P content x up to 0.3, and bulk superconductivity appears when the antiferromagnetism is suppressed. In contrast to the behavior observed in LaFeAs(O1−xFx), 18) low-energy AFM fluctuations remain even at x = 0.3, where the superconducting transition temperature Tc is maximum, suggesting that the superconductivity is associated with low-energy AFM fluctuations with a quantum critical character, as observed in BaFe2(As1−xPx)2 10) and Ba(Fe1−xCox)2As2. 13) In addition, the AFM order reappears with further P substitution. We consider that a change in the band character around the Fermi level with P substitution induces the variation of the ground state in LaFe(As1−xPx)O.

Polycrystalline samples of LaFe(As1−xPx)O (x = 0.1−0.5) synthesized by solid-state reaction were ground into powder for the NMR measurements. 22) Tc was determined from the Meissner signal measured using an NMR coil, as shown in Fig. 1(a). The observed temperature dependence of the Meissner signal is evidence of bulk superconductivity occurring at Tc = 11 K for x = 0.3, weak (non-bulk) superconductivity at 8 K for x = 0.4, and the absence of superconductivity in the other samples. These results are consistent with the previous report. 22) A conventional spin-echo technique was used for the following NMR measurements. The AFM ordering temperature TN was determined from the peak of the nuclear spin-lattice relaxation rate divided by temperature, 1/T1T, and from the increase in the NMR linewidth.

First, we focus on the evolution of the ordered moment μord upon P substitution through the 31P-NMR spectrum. Figure 1(b) shows the H-swept 31P-NMR spectra at various temperatures and x values (x = 0.1−0.5) measured at 72.1 MHz. All 31P-NMR spectra consist of a single and almost isotropic line shape, as expected for an I = 1/2 nucleus. The linewidth of the spectrum, except for x = 0.3, increases significantly below TN, while the peak position of each spectrum does not change very much, as shown in Fig. 1(b). In the AFM state of iron pnictides, it was reported that Fe ordered moments lying in the ab plane with stripe correlations induce an internal magnetic field Hint along the c-axis at the As and P sites owing to the off-diagonal term of the hyperfine coupling tensor. 23,24) In such a commensurate stripe-type AFM ordered state, the powder pattern of I = 1/2 becomes nearly rectangular. 25) However, the obtained spectra show a Lorentzian-like shape, indicative of a distribution of Hint. Such an Hint distribution can be interpreted in terms of the incommensurability of the AFM order or a distribution of the amplitude of μord.
For simplicity, we use the FWHM to estimate the average Hint, which is proportional to μord. Figure 1(c) shows the temperature dependence of the FWHM of the 31P-NMR spectrum for all samples. The FWHMs of the spectra at 250 K are almost the same among the samples (∼50 G), indicating that the distribution of the bulk susceptibility is negligible. The FWHM suddenly increases below TN, except for x = 0.3, indicating the appearance of internal magnetic fields. At x = 0.3, the FWHM increases slightly below 30 K, which is well above Tc = 11 K, but the increase is much smaller than those in the other samples. In addition, 1/T1T does not show a maximum at 30 K, but shows a multicomponent behavior below 30 K, as will be discussed later. Therefore, the x = 0.3 sample includes a small spread in the x concentration and shows magnetic ordering due to this distribution of x. The FWHM at 1.5 K continuously decreases with increasing x up to 0.3. With a further increase in x, the AFM order reappears above x = 0.4. In order to estimate μord from the internal field at the P site, H^P_int, we need to estimate the off-diagonal term of the hyperfine coupling tensor, B1, at the P site, B^P_1, since μord is approximately expressed as H^P_int = B^P_1 × μord. Using B1 = 4.4 T/μB 19) at the As site and the ratio of 1/T1 at the As site to that at the P site measured in LaFe(As0.7P0.3)O (not shown), B^P_1 is estimated to be ∼2.6 T/μB. The derived temperature dependence of μord is shown on the right-hand axis in Fig. 1(c).

Figure 2(a) shows the temperature dependence of the Knight shift. The Knight shift K was determined from the peak field of the 31P-NMR spectrum, and K = 0 was determined using the reference material H3PO4. K, which is a measure of the local susceptibility at the nuclear site, is described as K = Kspin + Kchem, where Kspin is the spin part of K related to the uniform spin susceptibility χ(q = 0), which is proportional to the density of states around the Fermi energy, N(EF), in the paramagnetic state. Kchem is the chemical shift, which is generally temperature-independent and is assumed to be independent of x. K at 275 K is linearly proportional to x, as shown in the inset of Fig. 2(a), suggesting that N(EF) increases with increasing x. K in (La,Ca)FePO is also plotted as a reference for x = 1. 26) Since P is isovalent with As, this increase in N(EF) originates from a change in the band structure induced by chemical pressure: P substitution changes the band structure and N(EF) increases. This is in contrast to the behavior observed in BaFe2(As1−xPx)2, 10,27) where K is almost independent of x up to 0.64. In all samples of LaFe(As1−xPx)O, K(T) slightly decreases upon cooling, as observed in electron-doped systems. 28,29) These temperature dependences can be explained by the energy dependence of the density of states, as proposed by Ikeda.

Although the Knight shift gradually decreases upon cooling, 1/T1T strongly increases toward TN or Tc, as shown in Fig. 2(b). 1/T1 was measured by a saturation recovery method. Although the time dependence of the spin-echo intensity M(t) after saturation of the nuclear magnetization can be fitted to the theoretical curve for a nuclear spin I = 1/2 with a single T1 component at high temperatures, M(t) deviates from the single-T1 theoretical curve and shows a multi-T1-component behavior at low temperatures. Then, T1 for all x values in this paper was determined by fitting the recovery to the stretched exponential function M(t) = M0[1 − c exp{−(t/T1)^β}], where c is the initial saturation of the nuclear magnetization and β describes the homogeneity of T1.
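To make the fitting procedure concrete, the sketch below fits a saturation-recovery curve with the stretched exponential form quoted above using SciPy; the delay times, recovery data, initial guesses, and bounds are hypothetical illustrations, not the measured relaxation curves.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_recovery(t, m0, c, t1, beta):
    """Stretched exponential recovery for I = 1/2: M(t) = M0 [1 - c exp(-(t/T1)^beta)]."""
    return m0 * (1.0 - c * np.exp(-((t / t1) ** beta)))

# Hypothetical delay times (s) and spin-echo intensities (arb. units).
t = np.array([0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1.0, 3.0])
m = np.array([0.12, 0.22, 0.41, 0.66, 0.87, 0.97, 1.00, 1.01])

# Initial guesses: M0 ~ 1, full saturation c ~ 1, T1 ~ 0.05 s, beta ~ 1;
# bounds keep all parameters in a physically sensible range.
popt, _ = curve_fit(
    stretched_recovery, t, m,
    p0=[1.0, 1.0, 0.05, 1.0],
    bounds=([0.0, 0.0, 1e-4, 0.1], [2.0, 2.0, 10.0, 2.0]),
)
m0, c, t1, beta = popt
print(f"T1 = {t1:.3g} s, beta = {beta:.2f}")
```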
At high temperatures, β ≃ 1 in all samples, and β starts to decrease below 125, 100, 30, 40, and 50 K for x = 0.1, 0.2, 0.3, 0.4, and 0.5, respectively. One of the reasons for the decrease in β is the anisotropy of 1/T1, since 1/T1 becomes very anisotropic below the structural phase transition temperature in the iron pnictides. 31,32) In our measurements, 1/T1 includes magnetic fluctuations along all directions since powder samples were measured. Another possibility is the distribution of x. In the x region where TN changes significantly with x, a tiny distribution of x would induce a multicomponent behavior in the recovery of M(t), although TN is clearly determined. Therefore, T1 determined with the stretched exponential function is regarded as the average T1 with respect to x. 1/T1T is expressed in terms of the wave-vector q-summed imaginary part of the dynamical susceptibility, Σq χ″(q, ω). Therefore, the Curie-Weiss-like enhancement of 1/T1T shown in Fig. 2(b), together with the gradual decrease in the Knight shift related to χ(q = 0) in the normal state, indicates the development of low-energy AFM (q ≠ 0) fluctuations. Upon further cooling, 1/T1T shows a peak at TN corresponding to the critical slowing down at x = 0.1, 0.2, 0.4, and 0.5, which indicates magnetic ordering. TN is unambiguously determined by the peak of 1/T1T. On the other hand, 1/T1T at x = 0.3 drops sharply at Tc due to the opening of a superconducting gap. We fit the observed 1/T1T to the phenomenological two-component equation 1/T1T = (1/T1T)inter + (1/T1T)intra. Here, (1/T1T)inter = C/(T − θ), where C is a constant and θ is the Weiss temperature, corresponds to the contribution of the interband two-dimensional AFM fluctuations expected in the self-consistent renormalization (SCR) theory, and (1/T1T)intra = d + e exp(−Δ/kBT) corresponds to the intraband contribution, which is proportional to N(EF)^2. We assume that N(EF) shows an activation-type temperature dependence, as does the Knight shift, and the activation energies estimated from the Knight shift are used for the fitting. In addition, we assume that the values of the constant C are the same among all samples, as observed in BaFe2(As1−xPx)2. 10) Then, we can fit 1/T1T using the above equation, as shown in Fig. 2(b). The obtained θ, plotted in Fig. 3, is almost the same as TN and approaches 0 K at approximately x = 0.3. This behavior suggests that superconductivity is strongly related to AFM fluctuations associated with the quantum-critical point, similarly to that observed in the other isovalent-substitution systems BaFe2(As1−xPx)2 and SrFe2(As1−xPx)2. This strong relationship between superconductivity and AFM fluctuations is in contrast to that observed in LaFeAs(O1−xFx). 18,20) The 1/T1T of LaFeAs(O1−xFx) is suppressed markedly with F doping, whereas Tc does not change very much, suggesting a weak correlation between superconductivity and low-energy spin fluctuations probed by NMR spectroscopy. In addition, the phase diagram of LaFe(As1−xPx)O is quite different from that of LaFeAs(O1−xFx), as shown below. In LaFe(As1−xPx)O, the AFM order is continuously suppressed with P substitution, whereas the AFM order suddenly disappears with a first-order-like transition as a function of F content in LaFeAs(O1−xFx). [32][33][34]
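For illustration, the sketch below fits 1/T1T data to the two-component expression described above, combining the interband Curie-Weiss term with the activation-type intraband term; the temperature points, 1/T1T values, bounds, and initial guesses are hypothetical placeholders rather than the published data.

```python
import numpy as np
from scipy.optimize import curve_fit

def inv_t1t(T, C, theta, d, e, delta):
    """Two-component model: 1/T1T = C/(T - theta) + d + e*exp(-delta/T),
    with the activation energy delta expressed in kelvin (kB = 1)."""
    return C / (T - theta) + d + e * np.exp(-delta / T)

# Hypothetical data set (temperatures in K, 1/T1T in s^-1 K^-1).
T = np.array([30.0, 50.0, 75.0, 100.0, 150.0, 200.0, 250.0, 275.0])
y = np.array([1.80, 1.05, 0.72, 0.58, 0.47, 0.44, 0.43, 0.43])

# Initial guesses and bounds keep theta below the lowest measured temperature
# so the Curie-Weiss term stays finite during the fit.
p0 = [20.0, 5.0, 0.3, 0.2, 300.0]
bounds = ([0.0, -300.0, 0.0, 0.0, 0.0], [1e3, 25.0, 5.0, 5.0, 2000.0])
popt, _ = curve_fit(inv_t1t, T, y, p0=p0, bounds=bounds)
C, theta, d, e, delta = popt
print(f"theta = {theta:.1f} K, C = {C:.2f}")
```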
These experimental results suggest that there are at least two pairing mechanisms in iron-based superconductors. Finally, we summarize our NMR data of TN, μord at 1.5 K, and θ in the T−x phase diagram of LaFe(As1−xPx)O, as shown in Fig. 3. The AFM order is suppressed with P substitution up to x = 0.3, but reappears above x = 0.4. In the low-P AFM state (0 ≤ x < 0.3), TN and μord decrease continuously, with TN falling from 135 K at x = 0 to ∼0 K at x = 0.3, whereas TN appears to be constant against P substitution in the high-P AFM state (x ≥ 0.4). Moreover, the character of the AFM order seems to differ between the two AFM states, since μord in the low-P AFM state grows following a mean-field-type dependence [Hint(T) ∝ (TN − T)^0.5], whereas μord varies linearly with T in the high-P AFM state. The high-P AFM state might be a short-range-ordered state such as a spin glass. Therefore, to understand the nature of this new AFM state, neutron scattering measurements and thermodynamic measurements, such as specific heat measurements, are desired. These differences might be explained by the features of the nesting condition. According to the band calculations, the d(x²−y²) orbital mainly contributes to the hole Fermi surfaces (FSs) at the (π, π) point in the unfolded Brillouin zone, and the nesting between the hole and electron FSs is enhanced in the antiferromagnet LaFeAsO, while the d(3z²−r²) orbital mainly contributes in the paramagnet LaFePO. 35,36) This indicates that the P substitution changes the orbital character of the hole FSs as well as the nesting properties, which induces AFM fluctuations. Our NMR results suggest that AFM fluctuations become enhanced again at approximately x = 0.5, where the band character at the (π, π) point is replaced. This might be a characteristic feature of LaFe(As1−xPx)O, since such an AFM state has never been reported. Therefore, LaFe(As1−xPx)O is a good system for studying the relationship between superconductivity and antiferromagnetism induced by nesting. In conclusion, we found that TN and μord are continuously suppressed with P substitution up to x = 0.3, where bulk superconductivity appears, similar to the behavior observed in other isovalent-substitution systems. With further P substitution, the AFM order reappears, and the nature of the high-P AFM state seems to be different from that of the low-P AFM state. We consider that the variation of the ground state with respect to P substitution is related to the change in the band character of the Fe-3d orbitals around the Fermi level.
2014-01-14T07:44:37.000Z
2014-01-14T00:00:00.000
{ "year": 2014, "sha1": "f4951fd2a15ef522f6824cf4c746f8e95111b77d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1401.3091", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f4951fd2a15ef522f6824cf4c746f8e95111b77d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics" ] }